I0109 10:47:04.545224 9 e2e.go:224] Starting e2e run "5f2df87f-32cd-11ea-ac2d-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578566823 - Will randomize all specs
Will run 201 of 2164 specs

Jan 9 10:47:04.865: INFO: >>> kubeConfig: /root/.kube/config
Jan 9 10:47:04.868: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 9 10:47:04.888: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 9 10:47:04.922: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 9 10:47:04.922: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 9 10:47:04.922: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 9 10:47:04.931: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 9 10:47:04.931: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 9 10:47:04.931: INFO: e2e test version: v1.13.12
Jan 9 10:47:04.932: INFO: kube-apiserver version: v1.13.8
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:47:04.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
Jan 9 10:47:05.107: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 9 10:47:25.227: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 9 10:47:25.249: INFO: Pod pod-with-prestop-http-hook still exists
Jan 9 10:47:27.249: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 9 10:47:27.288: INFO: Pod pod-with-prestop-http-hook still exists
Jan 9 10:47:29.249: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 9 10:47:29.265: INFO: Pod pod-with-prestop-http-hook still exists
Jan 9 10:47:31.249: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 9 10:47:31.407: INFO: Pod pod-with-prestop-http-hook still exists
Jan 9 10:47:33.249: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 9 10:47:33.266: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 10:47:33.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-q2htl" for this suite.
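The pod under test here wires an HTTP preStop hook pointing at the handler pod created in the BeforeEach. A minimal manifest along these lines reproduces the scenario (only the pod name comes from the log; the handler address, port, and path are illustrative assumptions, not the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name taken from the log above
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop    # hypothetical path; the handler only needs to record the hit
          port: 8080                 # assumed handler port
          host: 10.32.0.4            # assumed IP of the handler pod
```

Deleting this pod triggers the HTTP GET before the container is stopped, which is what the "check prestop hook" step verifies on the handler side.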
Jan 9 10:47:57.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 10:47:57.642: INFO: namespace: e2e-tests-container-lifecycle-hook-q2htl, resource: bindings, ignored listing per whitelist
Jan 9 10:47:57.661: INFO: namespace e2e-tests-container-lifecycle-hook-q2htl deletion completed in 24.343469218s
• [SLOW TEST:52.729 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:47:57.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-ss5n
STEP: Creating a pod to test atomic-volume-subpath
Jan 9 10:47:58.018: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ss5n" in namespace "e2e-tests-subpath-r8ggx" to be "success or failure"
Jan 9 10:47:58.033: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Pending", Reason="", readiness=false. Elapsed: 14.5242ms
Jan 9 10:48:00.765: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.74692371s
Jan 9 10:48:02.780: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.761624362s
Jan 9 10:48:04.800: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.782083702s
Jan 9 10:48:06.870: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.852140727s
Jan 9 10:48:08.889: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.871174299s
Jan 9 10:48:10.912: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Pending", Reason="", readiness=false. Elapsed: 12.893466758s
Jan 9 10:48:12.918: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Pending", Reason="", readiness=false. Elapsed: 14.900114138s
Jan 9 10:48:14.934: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Pending", Reason="", readiness=false. Elapsed: 16.916170741s
Jan 9 10:48:16.954: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Running", Reason="", readiness=false. Elapsed: 18.935736821s
Jan 9 10:48:18.973: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Running", Reason="", readiness=false. Elapsed: 20.954425172s
Jan 9 10:48:21.002: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Running", Reason="", readiness=false. Elapsed: 22.983742329s
Jan 9 10:48:23.017: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Running", Reason="", readiness=false. Elapsed: 24.998543029s
Jan 9 10:48:25.032: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Running", Reason="", readiness=false. Elapsed: 27.013780378s
Jan 9 10:48:27.049: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Running", Reason="", readiness=false. Elapsed: 29.030517219s
Jan 9 10:48:29.068: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Running", Reason="", readiness=false. Elapsed: 31.049975546s
Jan 9 10:48:31.098: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Running", Reason="", readiness=false. Elapsed: 33.079543569s
Jan 9 10:48:33.110: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Running", Reason="", readiness=false. Elapsed: 35.091888607s
Jan 9 10:48:35.381: INFO: Pod "pod-subpath-test-downwardapi-ss5n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.362551054s
STEP: Saw pod success
Jan 9 10:48:35.381: INFO: Pod "pod-subpath-test-downwardapi-ss5n" satisfied condition "success or failure"
Jan 9 10:48:35.395: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-ss5n container test-container-subpath-downwardapi-ss5n:
STEP: delete the pod
Jan 9 10:48:35.597: INFO: Waiting for pod pod-subpath-test-downwardapi-ss5n to disappear
Jan 9 10:48:35.612: INFO: Pod pod-subpath-test-downwardapi-ss5n no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-ss5n
Jan 9 10:48:35.612: INFO: Deleting pod "pod-subpath-test-downwardapi-ss5n" in namespace "e2e-tests-subpath-r8ggx"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 10:48:35.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-r8ggx" for this suite.
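The atomic-writer subpath scenario above mounts a downward API volume into a container at a subPath. A sketch of an equivalent manifest (the pod and container names come from the log; the volume layout and the check command are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi-ss5n
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    downwardAPI:
      items:
      - path: downward/podname        # the atomic writer materializes the file under this path
        fieldRef:
          fieldPath: metadata.name
  containers:
  - name: test-container-subpath-downwardapi-ss5n
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /test-volume/podname"]   # illustrative verification
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
      subPath: downward               # mount only the subdirectory written by the atomic writer
```

The point of the conformance check is that the subPath mount keeps working across the symlink-swap updates that atomic-writer volumes (downward API, configMap, secret, projected) perform.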
Jan 9 10:48:43.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 10:48:43.689: INFO: namespace: e2e-tests-subpath-r8ggx, resource: bindings, ignored listing per whitelist
Jan 9 10:48:43.885: INFO: namespace e2e-tests-subpath-r8ggx deletion completed in 8.251931527s
• [SLOW TEST:46.223 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:48:43.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 9 10:48:44.098: INFO: PodSpec: initContainers in spec.initContainers
Jan 9 10:49:55.305: INFO: init container has failed twice:
&v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9b27d617-32cd-11ea-ac2d-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-t5tf4", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-t5tf4/pods/pod-init-9b27d617-32cd-11ea-ac2d-0242ac110005", UID:"9b3a6a7c-32cd-11ea-a994-fa163e34d433", ResourceVersion:"17687495", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714163724, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"98532672"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-qr75s", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000a77200), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), 
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qr75s", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qr75s", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), 
Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qr75s", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0015fb968), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00193cfc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0015fb9e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0015fba00)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0015fba08), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0015fba0c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714163724, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714163724, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714163724, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714163724, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0014c8380), 
InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000758770)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0007587e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://eff6ca976c457bfd167379c31918d738fd259a8102245b84a08ce8a60a106cf2"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0014c83e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0014c83a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 10:49:55.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-t5tf4" for this suite.
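The Go struct dump above boils down to a pod with a permanently failing init container ahead of one that would succeed. A minimal equivalent manifest, reconstructed from the dump (names, images, commands, restart policy, and the run1 resource limits all appear in the dump; the rest is sketched):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-9b27d617-32cd-11ea-ac2d-0242ac110005
  labels:
    name: foo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]       # always fails, so init2 never runs and run1 never starts
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      limits:
        cpu: 100m
        memory: "52428800"
```

With RestartPolicy Always, the kubelet keeps restarting init1 with backoff (RestartCount:3 in the status above), the pod stays Pending with Reason ContainersNotInitialized, and run1 remains Waiting, which is exactly what the spec asserts.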
Jan 9 10:50:19.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 10:50:19.683: INFO: namespace: e2e-tests-init-container-t5tf4, resource: bindings, ignored listing per whitelist
Jan 9 10:50:19.697: INFO: namespace e2e-tests-init-container-t5tf4 deletion completed in 24.250765162s
• [SLOW TEST:95.811 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:50:19.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 9 10:50:31.843: INFO: 10 pods remaining
Jan 9 10:50:31.843: INFO: 10 pods has nil DeletionTimestamp
Jan 9 10:50:31.843: INFO:
Jan 9 10:50:32.944: INFO: 10 pods remaining
Jan 9 10:50:32.944: INFO: 0 pods has nil DeletionTimestamp
Jan 9 10:50:32.944: INFO:
STEP: Gathering metrics
W0109 10:50:33.619462 9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 9 10:50:33.619: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 10:50:33.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-d58qj" for this suite.
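The deleteOptions behavior exercised above is foreground cascading deletion: the rc gets a deletionTimestamp plus the foregroundDeletion finalizer, and the garbage collector only removes it once every owned pod is gone, which matches the "10 pods remaining / 0 pods has nil DeletionTimestamp" progression in the log. A sketch of the request body (the rc name is not shown in the log, so none is given here):

```yaml
# DeleteOptions body sent with the DELETE request on the rc (illustrative)
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground
```

In newer kubectl releases the equivalent is `kubectl delete rc <name> --cascade=foreground`; in the v1.13 era this option was typically set through the API client, as the test does.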
Jan 9 10:50:49.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 10:50:49.711: INFO: namespace: e2e-tests-gc-d58qj, resource: bindings, ignored listing per whitelist
Jan 9 10:50:49.974: INFO: namespace e2e-tests-gc-d58qj deletion completed in 16.348588463s
• [SLOW TEST:30.276 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:50:49.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan 9 10:50:50.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 9 10:50:50.641: INFO: stderr: ""
Jan 9 10:50:50.642: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 10:50:50.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ndwnr" for this suite.
Jan 9 10:50:58.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 10:50:58.719: INFO: namespace: e2e-tests-kubectl-ndwnr, resource: bindings, ignored listing per whitelist
Jan 9 10:50:58.871: INFO: namespace e2e-tests-kubectl-ndwnr deletion completed in 8.212180624s
• [SLOW TEST:8.897 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:50:58.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan 9 10:50:59.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-hgwrf run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 9 10:51:14.759: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version.
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0109 10:51:12.920301 60 log.go:172] (0xc0001380b0) (0xc000a3c320) Create stream\nI0109 10:51:12.920552 60 log.go:172] (0xc0001380b0) (0xc000a3c320) Stream added, broadcasting: 1\nI0109 10:51:12.935633 60 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0109 10:51:12.935671 60 log.go:172] (0xc0001380b0) (0xc0003e01e0) Create stream\nI0109 10:51:12.935677 60 log.go:172] (0xc0001380b0) (0xc0003e01e0) Stream added, broadcasting: 3\nI0109 10:51:12.937586 60 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0109 10:51:12.937631 60 log.go:172] (0xc0001380b0) (0xc000a3c000) Create stream\nI0109 10:51:12.937643 60 log.go:172] (0xc0001380b0) (0xc000a3c000) Stream added, broadcasting: 5\nI0109 10:51:12.938742 60 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0109 10:51:12.938767 60 log.go:172] (0xc0001380b0) (0xc000676f00) Create stream\nI0109 10:51:12.938778 60 log.go:172] (0xc0001380b0) (0xc000676f00) Stream added, broadcasting: 7\nI0109 10:51:12.940846 60 log.go:172] (0xc0001380b0) Reply frame received for 7\nI0109 10:51:12.941258 60 log.go:172] (0xc0003e01e0) (3) Writing data frame\nI0109 10:51:12.941382 60 log.go:172] (0xc0003e01e0) (3) Writing data frame\nI0109 10:51:12.956235 60 log.go:172] (0xc0001380b0) Data frame received for 5\nI0109 10:51:12.956276 60 log.go:172] (0xc000a3c000) (5) Data frame handling\nI0109 10:51:12.956285 60 log.go:172] (0xc000a3c000) (5) Data frame sent\nI0109 10:51:12.963714 60 log.go:172] (0xc0001380b0) Data frame received for 5\nI0109 10:51:12.963780 60 log.go:172] (0xc000a3c000) (5) Data frame handling\nI0109 10:51:12.963803 60 log.go:172] (0xc000a3c000) (5) Data frame sent\nI0109 10:51:14.654349 60 log.go:172] (0xc0001380b0) (0xc000a3c000) Stream removed, broadcasting: 5\nI0109 10:51:14.654436 60 log.go:172] (0xc0001380b0) Data frame received for 1\nI0109 10:51:14.654458 60 log.go:172] 
(0xc0001380b0) (0xc0003e01e0) Stream removed, broadcasting: 3\nI0109 10:51:14.654541 60 log.go:172] (0xc000a3c320) (1) Data frame handling\nI0109 10:51:14.654567 60 log.go:172] (0xc000a3c320) (1) Data frame sent\nI0109 10:51:14.654582 60 log.go:172] (0xc0001380b0) (0xc000a3c320) Stream removed, broadcasting: 1\nI0109 10:51:14.654768 60 log.go:172] (0xc0001380b0) (0xc000676f00) Stream removed, broadcasting: 7\nI0109 10:51:14.654783 60 log.go:172] (0xc0001380b0) Go away received\nI0109 10:51:14.655120 60 log.go:172] (0xc0001380b0) (0xc000a3c320) Stream removed, broadcasting: 1\nI0109 10:51:14.655135 60 log.go:172] (0xc0001380b0) (0xc0003e01e0) Stream removed, broadcasting: 3\nI0109 10:51:14.655147 60 log.go:172] (0xc0001380b0) (0xc000a3c000) Stream removed, broadcasting: 5\nI0109 10:51:14.655154 60 log.go:172] (0xc0001380b0) (0xc000676f00) Stream removed, broadcasting: 7\n"
Jan 9 10:51:14.759: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 10:51:16.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hgwrf" for this suite.
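The deprecated `--generator=job/v1` invocation logged above is equivalent to creating a Job like the following and attaching to it (a sketch; only the name, image, restart policy, and command come from the logged command line):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true      # the test pipes "abcd1234" in and then closes stdin
```

`--rm=true` deletes the Job once the attached session ends, which is why stdout contains both the echoed input and the `job.batch "e2e-test-rm-busybox-job" deleted` line.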
Jan 9 10:51:22.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 10:51:22.965: INFO: namespace: e2e-tests-kubectl-hgwrf, resource: bindings, ignored listing per whitelist Jan 9 10:51:23.096: INFO: namespace e2e-tests-kubectl-hgwrf deletion completed in 6.29566981s • [SLOW TEST:24.225 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 10:51:23.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 9 10:51:23.349: INFO: Waiting up to 5m0s for pod "pod-fa0ce0f0-32cd-11ea-ac2d-0242ac110005" in namespace "e2e-tests-emptydir-c44ht" to be "success or failure" Jan 9 10:51:23.446: INFO: Pod "pod-fa0ce0f0-32cd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 97.017752ms Jan 9 10:51:25.458: INFO: Pod "pod-fa0ce0f0-32cd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108313803s Jan 9 10:51:27.470: INFO: Pod "pod-fa0ce0f0-32cd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121186917s Jan 9 10:51:29.534: INFO: Pod "pod-fa0ce0f0-32cd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.184959871s Jan 9 10:51:31.548: INFO: Pod "pod-fa0ce0f0-32cd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198834486s Jan 9 10:51:33.570: INFO: Pod "pod-fa0ce0f0-32cd-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.221139655s STEP: Saw pod success Jan 9 10:51:33.571: INFO: Pod "pod-fa0ce0f0-32cd-11ea-ac2d-0242ac110005" satisfied condition "success or failure" Jan 9 10:51:33.577: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fa0ce0f0-32cd-11ea-ac2d-0242ac110005 container test-container: STEP: delete the pod Jan 9 10:51:34.400: INFO: Waiting for pod pod-fa0ce0f0-32cd-11ea-ac2d-0242ac110005 to disappear Jan 9 10:51:34.411: INFO: Pod pod-fa0ce0f0-32cd-11ea-ac2d-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 10:51:34.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-c44ht" for this suite. 
Jan 9 10:51:40.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 10:51:40.910: INFO: namespace: e2e-tests-emptydir-c44ht, resource: bindings, ignored listing per whitelist Jan 9 10:51:40.987: INFO: namespace e2e-tests-emptydir-c44ht deletion completed in 6.56206727s • [SLOW TEST:17.891 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 10:51:40.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-n945d A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-n945d;check="$$(dig +tcp +noall +answer +search 
dns-test-service.e2e-tests-dns-n945d A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-n945d;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-n945d.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-n945d.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-n945d.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-n945d.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-n945d.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-n945d.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-n945d.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-n945d.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-n945d.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-n945d.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-n945d.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-n945d.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-n945d.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 165.53.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.53.165_udp@PTR;check="$$(dig +tcp +noall +answer +search 165.53.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.53.165_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-n945d A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-n945d;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-n945d A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-n945d;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-n945d.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-n945d.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-n945d.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-n945d.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-n945d.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-n945d.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-n945d.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-n945d.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-n945d.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-n945d.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-n945d.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-n945d.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-n945d.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 165.53.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.53.165_udp@PTR;check="$$(dig +tcp +noall +answer +search 165.53.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.53.165_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 9 10:51:59.433: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-n945d/dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005) Jan 9 10:51:59.437: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-n945d/dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005) Jan 9 10:51:59.441: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-n945d from pod e2e-tests-dns-n945d/dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005) Jan 9 10:51:59.447: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-n945d from pod e2e-tests-dns-n945d/dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005) Jan 9 10:51:59.454: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-n945d.svc from pod e2e-tests-dns-n945d/dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005: the 
server could not find the requested resource (get pods dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005) Jan 9 10:51:59.460: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-n945d.svc from pod e2e-tests-dns-n945d/dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005) Jan 9 10:51:59.464: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-n945d.svc from pod e2e-tests-dns-n945d/dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005) Jan 9 10:51:59.467: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-n945d.svc from pod e2e-tests-dns-n945d/dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005) Jan 9 10:51:59.469: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-n945d.svc from pod e2e-tests-dns-n945d/dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005) Jan 9 10:51:59.472: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-n945d.svc from pod e2e-tests-dns-n945d/dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005) Jan 9 10:51:59.475: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-n945d/dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005) Jan 9 10:51:59.477: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-n945d/dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods 
dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005) Jan 9 10:51:59.484: INFO: Lookups using e2e-tests-dns-n945d/dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-n945d jessie_tcp@dns-test-service.e2e-tests-dns-n945d jessie_udp@dns-test-service.e2e-tests-dns-n945d.svc jessie_tcp@dns-test-service.e2e-tests-dns-n945d.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-n945d.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-n945d.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-n945d.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-n945d.svc jessie_udp@PodARecord jessie_tcp@PodARecord] Jan 9 10:52:04.614: INFO: DNS probes using e2e-tests-dns-n945d/dns-test-04b8112b-32ce-11ea-ac2d-0242ac110005 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 10:52:04.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-n945d" for this suite. 
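[Editor's note] The "test headless service" this DNS spec creates can be sketched as below. The selector and port values are assumptions, but `clusterIP: None` is what makes a Service headless (DNS returns per-Pod A records instead of a cluster IP), and a named `http` TCP port is what backs the `_http._tcp.dns-test-service` SRV lookups probed above:

```yaml
# Hypothetical sketch of the headless Service behind the SRV/A lookups above;
# selector and port values are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
spec:
  clusterIP: None        # headless: DNS resolves directly to Pod IPs
  selector:
    dns-test: "true"
  ports:
  - name: http           # enables _http._tcp.dns-test-service SRV records
    protocol: TCP
    port: 80
```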
Jan 9 10:52:13.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 10:52:13.957: INFO: namespace: e2e-tests-dns-n945d, resource: bindings, ignored listing per whitelist Jan 9 10:52:13.976: INFO: namespace e2e-tests-dns-n945d deletion completed in 8.995772777s • [SLOW TEST:32.989 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 10:52:13.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Jan 9 10:52:24.825: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 10:52:50.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-hd7rz" for this suite. Jan 9 10:52:56.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 10:52:56.331: INFO: namespace: e2e-tests-namespaces-hd7rz, resource: bindings, ignored listing per whitelist Jan 9 10:52:56.356: INFO: namespace e2e-tests-namespaces-hd7rz deletion completed in 6.337892088s STEP: Destroying namespace "e2e-tests-nsdeletetest-wvx8q" for this suite. Jan 9 10:52:56.361: INFO: Namespace e2e-tests-nsdeletetest-wvx8q was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-lld5v" for this suite. Jan 9 10:53:02.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 10:53:02.524: INFO: namespace: e2e-tests-nsdeletetest-lld5v, resource: bindings, ignored listing per whitelist Jan 9 10:53:02.777: INFO: namespace e2e-tests-nsdeletetest-lld5v deletion completed in 6.416535043s • [SLOW TEST:48.801 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 10:53:02.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 9 10:53:02.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-lsgbf' Jan 9 10:53:03.085: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 9 10:53:03.085: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Jan 9 10:53:03.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-lsgbf' Jan 9 10:53:03.438: INFO: stderr: "" Jan 9 10:53:03.438: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 10:53:03.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lsgbf" for this suite. Jan 9 10:53:27.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 10:53:27.875: INFO: namespace: e2e-tests-kubectl-lsgbf, resource: bindings, ignored listing per whitelist Jan 9 10:53:27.902: INFO: namespace e2e-tests-kubectl-lsgbf deletion completed in 24.378687672s • [SLOW TEST:25.125 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 10:53:27.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 9 10:53:28.298: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-68xmm,SelfLink:/api/v1/namespaces/e2e-tests-watch-68xmm/configmaps/e2e-watch-test-label-changed,UID:4480295c-32ce-11ea-a994-fa163e34d433,ResourceVersion:17688045,Generation:0,CreationTimestamp:2020-01-09 10:53:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 9 10:53:28.298: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-68xmm,SelfLink:/api/v1/namespaces/e2e-tests-watch-68xmm/configmaps/e2e-watch-test-label-changed,UID:4480295c-32ce-11ea-a994-fa163e34d433,ResourceVersion:17688046,Generation:0,CreationTimestamp:2020-01-09 10:53:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 9 10:53:28.298: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-68xmm,SelfLink:/api/v1/namespaces/e2e-tests-watch-68xmm/configmaps/e2e-watch-test-label-changed,UID:4480295c-32ce-11ea-a994-fa163e34d433,ResourceVersion:17688047,Generation:0,CreationTimestamp:2020-01-09 10:53:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 9 10:53:39.051: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-68xmm,SelfLink:/api/v1/namespaces/e2e-tests-watch-68xmm/configmaps/e2e-watch-test-label-changed,UID:4480295c-32ce-11ea-a994-fa163e34d433,ResourceVersion:17688061,Generation:0,CreationTimestamp:2020-01-09 10:53:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 9 10:53:39.051: 
INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-68xmm,SelfLink:/api/v1/namespaces/e2e-tests-watch-68xmm/configmaps/e2e-watch-test-label-changed,UID:4480295c-32ce-11ea-a994-fa163e34d433,ResourceVersion:17688062,Generation:0,CreationTimestamp:2020-01-09 10:53:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jan 9 10:53:39.051: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-68xmm,SelfLink:/api/v1/namespaces/e2e-tests-watch-68xmm/configmaps/e2e-watch-test-label-changed,UID:4480295c-32ce-11ea-a994-fa163e34d433,ResourceVersion:17688063,Generation:0,CreationTimestamp:2020-01-09 10:53:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 10:53:39.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-68xmm" for this suite. 
Jan 9 10:53:45.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 10:53:45.264: INFO: namespace: e2e-tests-watch-68xmm, resource: bindings, ignored listing per whitelist Jan 9 10:53:45.326: INFO: namespace e2e-tests-watch-68xmm deletion completed in 6.264479819s • [SLOW TEST:17.423 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 10:53:45.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 9 10:53:45.630: INFO: Waiting up to 5m0s for pod "pod-4ede9b69-32ce-11ea-ac2d-0242ac110005" in namespace "e2e-tests-emptydir-f8f96" to be "success or failure" Jan 9 10:53:45.644: INFO: Pod "pod-4ede9b69-32ce-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.747867ms Jan 9 10:53:47.704: INFO: Pod "pod-4ede9b69-32ce-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.072979568s Jan 9 10:53:49.761: INFO: Pod "pod-4ede9b69-32ce-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130278934s Jan 9 10:53:51.769: INFO: Pod "pod-4ede9b69-32ce-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138923153s Jan 9 10:53:53.937: INFO: Pod "pod-4ede9b69-32ce-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.306131558s Jan 9 10:53:55.949: INFO: Pod "pod-4ede9b69-32ce-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.318726751s STEP: Saw pod success Jan 9 10:53:55.949: INFO: Pod "pod-4ede9b69-32ce-11ea-ac2d-0242ac110005" satisfied condition "success or failure" Jan 9 10:53:55.954: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4ede9b69-32ce-11ea-ac2d-0242ac110005 container test-container: STEP: delete the pod Jan 9 10:53:56.980: INFO: Waiting for pod pod-4ede9b69-32ce-11ea-ac2d-0242ac110005 to disappear Jan 9 10:53:57.210: INFO: Pod pod-4ede9b69-32ce-11ea-ac2d-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 10:53:57.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-f8f96" for this suite. 
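[Editor's note] The emptyDir permission specs all follow one shape; a minimal sketch of the kind of pod the (non-root,0777,default) case creates is shown below. The image and args are assumptions (the real suite uses its own mounttest image), but the intent matches the variant name: a non-root container writing and listing a mode-0777 file on a default-medium emptyDir:

```yaml
# Illustrative pod for the (non-root,0777,default) emptyDir case;
# image/command are assumptions, not the suite's actual mounttest image.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo
spec:
  securityContext:
    runAsUser: 1001            # non-root, per the test variant name
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c",
      "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir: {}               # default medium (node disk), not tmpfs
```

The (non-root,0666,tmpfs) case earlier in the log differs only in the mode bits and in requesting `emptyDir: {medium: Memory}`.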
Jan 9 10:54:03.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 10:54:03.335: INFO: namespace: e2e-tests-emptydir-f8f96, resource: bindings, ignored listing per whitelist
Jan 9 10:54:03.374: INFO: namespace e2e-tests-emptydir-f8f96 deletion completed in 6.152038907s
• [SLOW TEST:18.048 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:54:03.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-599e75a3-32ce-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 9 10:54:03.674: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-599f485a-32ce-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-fmhbb" to be "success or failure"
Jan 9 10:54:03.846: INFO: Pod "pod-projected-secrets-599f485a-32ce-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 172.045505ms
Jan 9 10:54:05.862: INFO: Pod "pod-projected-secrets-599f485a-32ce-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187519574s
Jan 9 10:54:07.912: INFO: Pod "pod-projected-secrets-599f485a-32ce-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237631513s
Jan 9 10:54:09.932: INFO: Pod "pod-projected-secrets-599f485a-32ce-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.257962043s
Jan 9 10:54:14.089: INFO: Pod "pod-projected-secrets-599f485a-32ce-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.414365061s
Jan 9 10:54:16.102: INFO: Pod "pod-projected-secrets-599f485a-32ce-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.428108697s
STEP: Saw pod success
Jan 9 10:54:16.102: INFO: Pod "pod-projected-secrets-599f485a-32ce-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 10:54:16.107: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-599f485a-32ce-11ea-ac2d-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Jan 9 10:54:17.036: INFO: Waiting for pod pod-projected-secrets-599f485a-32ce-11ea-ac2d-0242ac110005 to disappear
Jan 9 10:54:17.043: INFO: Pod pod-projected-secrets-599f485a-32ce-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 10:54:17.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fmhbb" for this suite.
Jan 9 10:54:23.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 10:54:23.373: INFO: namespace: e2e-tests-projected-fmhbb, resource: bindings, ignored listing per whitelist
Jan 9 10:54:23.397: INFO: namespace e2e-tests-projected-fmhbb deletion completed in 6.343166847s
• [SLOW TEST:20.023 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:54:23.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan 9 10:54:23.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:54:24.204: INFO: stderr: ""
Jan 9 10:54:24.204: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 9 10:54:24.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:54:24.338: INFO: stderr: ""
Jan 9 10:54:24.338: INFO: stdout: "update-demo-nautilus-tsvmf "
STEP: Replicas for name=update-demo: expected=2 actual=1
Jan 9 10:54:29.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:54:29.498: INFO: stderr: ""
Jan 9 10:54:29.498: INFO: stdout: "update-demo-nautilus-82pdg update-demo-nautilus-tsvmf "
Jan 9 10:54:29.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-82pdg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:54:29.601: INFO: stderr: ""
Jan 9 10:54:29.601: INFO: stdout: ""
Jan 9 10:54:29.601: INFO: update-demo-nautilus-82pdg is created but not running
Jan 9 10:54:34.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:54:34.736: INFO: stderr: ""
Jan 9 10:54:34.736: INFO: stdout: "update-demo-nautilus-82pdg update-demo-nautilus-tsvmf "
Jan 9 10:54:34.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-82pdg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:54:34.868: INFO: stderr: ""
Jan 9 10:54:34.868: INFO: stdout: ""
Jan 9 10:54:34.868: INFO: update-demo-nautilus-82pdg is created but not running
Jan 9 10:54:39.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:54:40.045: INFO: stderr: ""
Jan 9 10:54:40.045: INFO: stdout: "update-demo-nautilus-82pdg update-demo-nautilus-tsvmf "
Jan 9 10:54:40.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-82pdg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:54:40.205: INFO: stderr: ""
Jan 9 10:54:40.205: INFO: stdout: "true"
Jan 9 10:54:40.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-82pdg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:54:40.381: INFO: stderr: ""
Jan 9 10:54:40.381: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 9 10:54:40.381: INFO: validating pod update-demo-nautilus-82pdg
Jan 9 10:54:40.407: INFO: got data: { "image": "nautilus.jpg" }
Jan 9 10:54:40.407: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 9 10:54:40.407: INFO: update-demo-nautilus-82pdg is verified up and running
Jan 9 10:54:40.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tsvmf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:54:40.597: INFO: stderr: ""
Jan 9 10:54:40.598: INFO: stdout: "true"
Jan 9 10:54:40.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tsvmf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:54:40.741: INFO: stderr: ""
Jan 9 10:54:40.741: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 9 10:54:40.741: INFO: validating pod update-demo-nautilus-tsvmf
Jan 9 10:54:40.758: INFO: got data: { "image": "nautilus.jpg" }
Jan 9 10:54:40.758: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 9 10:54:40.758: INFO: update-demo-nautilus-tsvmf is verified up and running
STEP: rolling-update to new replication controller
Jan 9 10:54:40.767: INFO: scanned /root for discovery docs:
Jan 9 10:54:40.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:55:19.290: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 9 10:55:19.290: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 9 10:55:19.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:55:19.436: INFO: stderr: ""
Jan 9 10:55:19.436: INFO: stdout: "update-demo-kitten-jsqkw update-demo-kitten-rgq9j update-demo-nautilus-tsvmf "
STEP: Replicas for name=update-demo: expected=2 actual=3
Jan 9 10:55:24.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:55:24.664: INFO: stderr: ""
Jan 9 10:55:24.665: INFO: stdout: "update-demo-kitten-jsqkw update-demo-kitten-rgq9j "
Jan 9 10:55:24.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jsqkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:55:24.790: INFO: stderr: ""
Jan 9 10:55:24.790: INFO: stdout: "true"
Jan 9 10:55:24.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jsqkw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:55:24.878: INFO: stderr: ""
Jan 9 10:55:24.878: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 9 10:55:24.878: INFO: validating pod update-demo-kitten-jsqkw
Jan 9 10:55:24.894: INFO: got data: { "image": "kitten.jpg" }
Jan 9 10:55:24.894: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 9 10:55:24.894: INFO: update-demo-kitten-jsqkw is verified up and running
Jan 9 10:55:24.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rgq9j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:55:25.017: INFO: stderr: ""
Jan 9 10:55:25.017: INFO: stdout: "true"
Jan 9 10:55:25.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rgq9j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4th2f'
Jan 9 10:55:25.096: INFO: stderr: ""
Jan 9 10:55:25.096: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 9 10:55:25.096: INFO: validating pod update-demo-kitten-rgq9j
Jan 9 10:55:25.105: INFO: got data: { "image": "kitten.jpg" }
Jan 9 10:55:25.105: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 9 10:55:25.105: INFO: update-demo-kitten-rgq9j is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 10:55:25.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4th2f" for this suite.
Jan 9 10:55:53.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 10:55:53.154: INFO: namespace: e2e-tests-kubectl-4th2f, resource: bindings, ignored listing per whitelist
Jan 9 10:55:53.254: INFO: namespace e2e-tests-kubectl-4th2f deletion completed in 28.141777937s
• [SLOW TEST:89.857 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:55:53.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 10:56:01.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-4sxl8" for this suite.
Jan 9 10:56:07.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 10:56:07.693: INFO: namespace: e2e-tests-emptydir-wrapper-4sxl8, resource: bindings, ignored listing per whitelist
Jan 9 10:56:07.738: INFO: namespace e2e-tests-emptydir-wrapper-4sxl8 deletion completed in 6.125308792s
• [SLOW TEST:14.484 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:56:07.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 9 10:56:18.618: INFO: Successfully updated pod "labelsupdatea3bbd18f-32ce-11ea-ac2d-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 10:56:22.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jxdjg" for this suite.
Jan 9 10:56:46.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 10:56:47.095: INFO: namespace: e2e-tests-downward-api-jxdjg, resource: bindings, ignored listing per whitelist
Jan 9 10:56:47.124: INFO: namespace e2e-tests-downward-api-jxdjg deletion completed in 24.246119589s
• [SLOW TEST:39.386 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:56:47.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 9 10:56:47.888: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb82970a-32ce-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-cgjgm" to be "success or failure"
Jan 9 10:56:47.988: INFO: Pod "downwardapi-volume-bb82970a-32ce-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 99.387037ms
Jan 9 10:56:50.196: INFO: Pod "downwardapi-volume-bb82970a-32ce-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307882934s
Jan 9 10:56:52.219: INFO: Pod "downwardapi-volume-bb82970a-32ce-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330077192s
Jan 9 10:56:54.848: INFO: Pod "downwardapi-volume-bb82970a-32ce-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.959646232s
Jan 9 10:56:56.872: INFO: Pod "downwardapi-volume-bb82970a-32ce-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.983044969s
Jan 9 10:56:58.989: INFO: Pod "downwardapi-volume-bb82970a-32ce-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.100542639s
STEP: Saw pod success
Jan 9 10:56:58.989: INFO: Pod "downwardapi-volume-bb82970a-32ce-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 10:56:59.033: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-bb82970a-32ce-11ea-ac2d-0242ac110005 container client-container:
STEP: delete the pod
Jan 9 10:56:59.316: INFO: Waiting for pod downwardapi-volume-bb82970a-32ce-11ea-ac2d-0242ac110005 to disappear
Jan 9 10:56:59.340: INFO: Pod downwardapi-volume-bb82970a-32ce-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 10:56:59.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cgjgm" for this suite.
Jan 9 10:57:05.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 10:57:05.558: INFO: namespace: e2e-tests-projected-cgjgm, resource: bindings, ignored listing per whitelist
Jan 9 10:57:05.603: INFO: namespace e2e-tests-projected-cgjgm deletion completed in 6.240105638s
• [SLOW TEST:18.479 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:57:05.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-4wqjc
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-4wqjc to expose endpoints map[]
Jan 9 10:57:06.011: INFO: Get endpoints failed (76.770028ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 9 10:57:07.028: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-4wqjc exposes endpoints map[] (1.094440514s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-4wqjc
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-4wqjc to expose endpoints map[pod1:[80]]
Jan 9 10:57:12.971: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.903456841s elapsed, will retry)
Jan 9 10:57:17.596: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-4wqjc exposes endpoints map[pod1:[80]] (10.528117173s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-4wqjc
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-4wqjc to expose endpoints map[pod1:[80] pod2:[80]]
Jan 9 10:57:24.651: INFO: Unexpected endpoints: found map[c6ee7865-32ce-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (7.043587936s elapsed, will retry)
Jan 9 10:57:30.139: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-4wqjc exposes endpoints map[pod1:[80] pod2:[80]] (12.532117818s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-4wqjc
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-4wqjc to expose endpoints map[pod2:[80]]
Jan 9 10:57:31.214: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-4wqjc exposes endpoints map[pod2:[80]] (1.047959634s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-4wqjc
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-4wqjc to expose endpoints map[]
Jan 9 10:57:32.904: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-4wqjc exposes endpoints map[] (1.67997149s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 10:57:33.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-4wqjc" for this suite.
Jan 9 10:57:55.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 10:57:56.061: INFO: namespace: e2e-tests-services-4wqjc, resource: bindings, ignored listing per whitelist
Jan 9 10:57:56.062: INFO: namespace e2e-tests-services-4wqjc deletion completed in 22.726617959s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:50.459 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:57:56.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 9 10:57:56.481: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 9 10:58:01.574: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 9 10:58:07.616: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 9 10:58:09.625: INFO: Creating deployment "test-rollover-deployment"
Jan 9 10:58:09.656: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 9 10:58:11.671: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 9 10:58:11.681: INFO: Ensure that both replica sets have 1 created replica
Jan 9 10:58:11.687: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 9 10:58:11.696: INFO: Updating deployment test-rollover-deployment
Jan 9 10:58:11.696: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 9 10:58:14.046: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 9 10:58:14.058: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 9 10:58:14.067: INFO: all replica sets need to contain the pod-template-hash label
Jan 9 10:58:14.067: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164292, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 9 10:58:16.084: INFO: all replica sets need to contain the pod-template-hash label
Jan 9 10:58:16.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164292, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 9 10:58:18.116: INFO: all replica sets need to contain the pod-template-hash label
Jan 9 10:58:18.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164292, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 9 10:58:20.085: INFO: all replica sets need to contain the pod-template-hash label
Jan 9 10:58:20.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164292, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 9 10:58:22.081: INFO: all replica sets need to contain the pod-template-hash label
Jan 9 10:58:22.081: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164292, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 9 10:58:24.095: INFO: all replica sets need to contain the pod-template-hash label
Jan 9 10:58:24.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164303, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 9 10:58:26.094: INFO: all replica sets need to contain the pod-template-hash label
Jan 9 10:58:26.094: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164303, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 9 10:58:28.095: INFO: all replica sets need to contain the pod-template-hash label
Jan 9 10:58:28.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164303, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 9 10:58:30.115: INFO: all replica sets need to contain the pod-template-hash label
Jan 9 10:58:30.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0,
ext:63714164303, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 9 10:58:32.081: INFO: all replica sets need to contain the pod-template-hash label Jan 9 10:58:32.081: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164303, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714164289, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 9 10:58:34.656: INFO: Jan 9 10:58:34.656: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 9 10:58:34.666: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-bkfrp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bkfrp/deployments/test-rollover-deployment,UID:ec3ccafe-32ce-11ea-a994-fa163e34d433,ResourceVersion:17688786,Generation:2,CreationTimestamp:2020-01-09 
10:58:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-09 10:58:09 +0000 UTC 2020-01-09 10:58:09 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-09 10:58:33 +0000 UTC 2020-01-09 10:58:09 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Jan 9 10:58:34.670: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-bkfrp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bkfrp/replicasets/test-rollover-deployment-5b8479fdb6,UID:ed7950ed-32ce-11ea-a994-fa163e34d433,ResourceVersion:17688777,Generation:2,CreationTimestamp:2020-01-09 10:58:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name:
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ec3ccafe-32ce-11ea-a994-fa163e34d433 0xc001551f57 0xc001551f58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 9 10:58:34.670: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 9 10:58:34.670: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-bkfrp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bkfrp/replicasets/test-rollover-controller,UID:e4500c26-32ce-11ea-a994-fa163e34d433,ResourceVersion:17688785,Generation:2,CreationTimestamp:2020-01-09 10:57:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ec3ccafe-32ce-11ea-a994-fa163e34d433 0xc001551dc7 0xc001551dc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 9 10:58:34.671: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-bkfrp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bkfrp/replicasets/test-rollover-deployment-58494b7559,UID:ec448c39-32ce-11ea-a994-fa163e34d433,ResourceVersion:17688743,Generation:2,CreationTimestamp:2020-01-09 10:58:09 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name:
rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ec3ccafe-32ce-11ea-a994-fa163e34d433 0xc001551e87 0xc001551e88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 9 10:58:34.682: INFO: Pod "test-rollover-deployment-5b8479fdb6-cdbcz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-cdbcz,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-bkfrp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bkfrp/pods/test-rollover-deployment-5b8479fdb6-cdbcz,UID:edcbe2dd-32ce-11ea-a994-fa163e34d433,ResourceVersion:17688762,Generation:0,CreationTimestamp:2020-01-09 10:58:12 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 ed7950ed-32ce-11ea-a994-fa163e34d433 0xc001e482a7 0xc001e482a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ggrwm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ggrwm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-ggrwm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e48310} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e48330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 10:58:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 10:58:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 10:58:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 10:58:12 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-09 10:58:12 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-09 10:58:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://0c92f80ef680f5144e035f95fa7d0135aae144ec7ea62e87d4a98a21c23d135a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 10:58:34.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-bkfrp" for this suite.
Jan 9 10:58:44.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 10:58:45.176: INFO: namespace: e2e-tests-deployment-bkfrp, resource: bindings, ignored listing per whitelist
Jan 9 10:58:45.201: INFO: namespace e2e-tests-deployment-bkfrp deletion completed in 10.514202181s
• [SLOW TEST:49.139 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:58:45.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 9 10:58:45.437: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01917cf9-32cf-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-vggf6" to be "success or failure"
Jan 9 10:58:45.454: INFO: Pod "downwardapi-volume-01917cf9-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.777178ms
Jan 9 10:58:47.656: INFO: Pod "downwardapi-volume-01917cf9-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218864094s
Jan 9 10:58:49.678: INFO: Pod "downwardapi-volume-01917cf9-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240770239s
Jan 9 10:58:51.690: INFO: Pod "downwardapi-volume-01917cf9-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252917705s
Jan 9 10:58:53.920: INFO: Pod "downwardapi-volume-01917cf9-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.483089422s
Jan 9 10:58:55.963: INFO: Pod "downwardapi-volume-01917cf9-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.525835851s
Jan 9 10:58:57.975: INFO: Pod "downwardapi-volume-01917cf9-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.538281912s
Jan 9 10:59:00.562: INFO: Pod "downwardapi-volume-01917cf9-32cf-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.125637971s
STEP: Saw pod success
Jan 9 10:59:00.563: INFO: Pod "downwardapi-volume-01917cf9-32cf-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 10:59:00.598: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-01917cf9-32cf-11ea-ac2d-0242ac110005 container client-container: 
STEP: delete the pod
Jan 9 10:59:00.687: INFO: Waiting for pod downwardapi-volume-01917cf9-32cf-11ea-ac2d-0242ac110005 to disappear
Jan 9 10:59:00.700: INFO: Pod downwardapi-volume-01917cf9-32cf-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 10:59:00.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vggf6" for this suite.
Jan 9 10:59:06.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 10:59:06.916: INFO: namespace: e2e-tests-projected-vggf6, resource: bindings, ignored listing per whitelist
Jan 9 10:59:06.958: INFO: namespace e2e-tests-projected-vggf6 deletion completed in 6.253366977s
• [SLOW TEST:21.757 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:59:06.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 9 10:59:17.714: INFO: Successfully updated pod "labelsupdate0e75e787-32cf-11ea-ac2d-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 10:59:19.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z6srw" for this suite.
Jan 9 10:59:55.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 10:59:55.910: INFO: namespace: e2e-tests-projected-z6srw, resource: bindings, ignored listing per whitelist
Jan 9 10:59:55.953: INFO: namespace e2e-tests-projected-z6srw deletion completed in 36.175826076s
• [SLOW TEST:48.995 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 10:59:55.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 9 10:59:56.263: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2bc9ae08-32cf-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-s22ns" to be "success or failure"
Jan 9 10:59:56.276: INFO: Pod "downwardapi-volume-2bc9ae08-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.783857ms
Jan 9 10:59:58.436: INFO: Pod "downwardapi-volume-2bc9ae08-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173432891s
Jan 9 11:00:00.453: INFO: Pod "downwardapi-volume-2bc9ae08-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190344534s
Jan 9 11:00:03.422: INFO: Pod "downwardapi-volume-2bc9ae08-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.158661014s
Jan 9 11:00:05.437: INFO: Pod "downwardapi-volume-2bc9ae08-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.173766s
Jan 9 11:00:07.453: INFO: Pod "downwardapi-volume-2bc9ae08-32cf-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.190279884s
STEP: Saw pod success
Jan 9 11:00:07.453: INFO: Pod "downwardapi-volume-2bc9ae08-32cf-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:00:07.457: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2bc9ae08-32cf-11ea-ac2d-0242ac110005 container client-container: 
STEP: delete the pod
Jan 9 11:00:07.578: INFO: Waiting for pod downwardapi-volume-2bc9ae08-32cf-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:00:08.317: INFO: Pod downwardapi-volume-2bc9ae08-32cf-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:00:08.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-s22ns" for this suite.
Jan 9 11:00:14.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:00:15.100: INFO: namespace: e2e-tests-projected-s22ns, resource: bindings, ignored listing per whitelist
Jan 9 11:00:15.110: INFO: namespace e2e-tests-projected-s22ns deletion completed in 6.401150118s
• [SLOW TEST:19.157 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:00:15.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 9 11:00:27.940: INFO: Successfully updated pod "annotationupdate37292edd-32cf-11ea-ac2d-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:00:30.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ph7z5" for this suite.
Jan 9 11:00:56.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:00:56.077: INFO: namespace: e2e-tests-downward-api-ph7z5, resource: bindings, ignored listing per whitelist
Jan 9 11:00:56.200: INFO: namespace e2e-tests-downward-api-ph7z5 deletion completed in 26.1655373s
• [SLOW TEST:41.089 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:00:56.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-4fa5199e-32cf-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 9 11:00:56.437: INFO: Waiting up to 5m0s for pod "pod-secrets-4fa74cd3-32cf-11ea-ac2d-0242ac110005" in namespace "e2e-tests-secrets-lgw54" to be "success or failure"
Jan 9 11:00:56.477: INFO: Pod "pod-secrets-4fa74cd3-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.57664ms
Jan 9 11:00:59.380: INFO: Pod "pod-secrets-4fa74cd3-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.94253944s
Jan 9 11:01:01.401: INFO: Pod "pod-secrets-4fa74cd3-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.963242112s
Jan 9 11:01:03.411: INFO: Pod "pod-secrets-4fa74cd3-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.972946968s
Jan 9 11:01:05.419: INFO: Pod "pod-secrets-4fa74cd3-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.981456011s
Jan 9 11:01:07.430: INFO: Pod "pod-secrets-4fa74cd3-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.992699133s
Jan 9 11:01:10.700: INFO: Pod "pod-secrets-4fa74cd3-32cf-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.262674886s
STEP: Saw pod success
Jan 9 11:01:10.700: INFO: Pod "pod-secrets-4fa74cd3-32cf-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:01:11.302: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4fa74cd3-32cf-11ea-ac2d-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 9 11:01:11.700: INFO: Waiting for pod pod-secrets-4fa74cd3-32cf-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:01:11.751: INFO: Pod pod-secrets-4fa74cd3-32cf-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:01:11.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-lgw54" for this suite.
Jan 9 11:01:17.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:01:17.860: INFO: namespace: e2e-tests-secrets-lgw54, resource: bindings, ignored listing per whitelist
Jan 9 11:01:17.933: INFO: namespace e2e-tests-secrets-lgw54 deletion completed in 6.17781605s
• [SLOW TEST:21.733 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:01:17.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-jn6kf STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 9 11:01:18.086: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 9 11:01:53.106: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-jn6kf PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 9 11:01:53.106: INFO: >>> kubeConfig: /root/.kube/config I0109 11:01:53.184978 9 log.go:172] (0xc000176b00) (0xc001d77220) Create stream I0109 11:01:53.185048 9 log.go:172] (0xc000176b00) (0xc001d77220) Stream added, broadcasting: 1 I0109 11:01:53.189509 9 log.go:172] (0xc000176b00) Reply frame received for 1 I0109 11:01:53.189550 9 log.go:172] (0xc000176b00) (0xc000e8ed20) Create stream I0109 11:01:53.189565 9 log.go:172] (0xc000176b00) (0xc000e8ed20) Stream added, broadcasting: 3 I0109 11:01:53.190870 9 log.go:172] (0xc000176b00) Reply frame received for 3 I0109 11:01:53.190887 9 log.go:172] (0xc000176b00) (0xc001d772c0) Create stream I0109 11:01:53.190895 9 log.go:172] (0xc000176b00) (0xc001d772c0) Stream added, broadcasting: 5 I0109 11:01:53.192301 9 log.go:172] (0xc000176b00) Reply frame received for 5 I0109 11:01:53.349782 9 log.go:172] (0xc000176b00) Data frame received for 3 I0109 11:01:53.349842 9 log.go:172] (0xc000e8ed20) (3) Data frame handling I0109 11:01:53.349869 9 log.go:172] (0xc000e8ed20) (3) Data frame sent I0109 11:01:53.477653 9 
log.go:172] (0xc000176b00) Data frame received for 1 I0109 11:01:53.477706 9 log.go:172] (0xc001d77220) (1) Data frame handling I0109 11:01:53.477729 9 log.go:172] (0xc001d77220) (1) Data frame sent I0109 11:01:53.479747 9 log.go:172] (0xc000176b00) (0xc001d77220) Stream removed, broadcasting: 1 I0109 11:01:53.479909 9 log.go:172] (0xc000176b00) (0xc000e8ed20) Stream removed, broadcasting: 3 I0109 11:01:53.479986 9 log.go:172] (0xc000176b00) (0xc001d772c0) Stream removed, broadcasting: 5 I0109 11:01:53.480023 9 log.go:172] (0xc000176b00) Go away received I0109 11:01:53.480154 9 log.go:172] (0xc000176b00) (0xc001d77220) Stream removed, broadcasting: 1 I0109 11:01:53.480166 9 log.go:172] (0xc000176b00) (0xc000e8ed20) Stream removed, broadcasting: 3 I0109 11:01:53.480175 9 log.go:172] (0xc000176b00) (0xc001d772c0) Stream removed, broadcasting: 5 Jan 9 11:01:53.480: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:01:53.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-jn6kf" for this suite. 
Jan 9 11:02:17.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:02:17.644: INFO: namespace: e2e-tests-pod-network-test-jn6kf, resource: bindings, ignored listing per whitelist Jan 9 11:02:17.651: INFO: namespace e2e-tests-pod-network-test-jn6kf deletion completed in 24.159837605s • [SLOW TEST:59.718 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:02:17.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0109 11:02:28.004004 9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 9 11:02:28.004: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:02:28.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-rklzv" for this suite. 
Jan 9 11:02:34.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:02:34.597: INFO: namespace: e2e-tests-gc-rklzv, resource: bindings, ignored listing per whitelist Jan 9 11:02:34.664: INFO: namespace e2e-tests-gc-rklzv deletion completed in 6.657436459s • [SLOW TEST:17.013 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:02:34.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-bx8l STEP: Creating a pod to test atomic-volume-subpath Jan 9 11:02:34.942: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bx8l" in namespace "e2e-tests-subpath-z456m" to be "success or failure" Jan 9 11:02:34.997: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Pending", Reason="", readiness=false. 
Elapsed: 54.674949ms Jan 9 11:02:37.737: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.794555092s Jan 9 11:02:39.744: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.801722429s Jan 9 11:02:41.754: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.812094318s Jan 9 11:02:43.793: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.8508558s Jan 9 11:02:45.806: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Pending", Reason="", readiness=false. Elapsed: 10.863606577s Jan 9 11:02:49.915: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Pending", Reason="", readiness=false. Elapsed: 14.972417948s Jan 9 11:02:51.929: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Pending", Reason="", readiness=false. Elapsed: 16.987231335s Jan 9 11:02:53.963: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Running", Reason="", readiness=false. Elapsed: 19.020349081s Jan 9 11:02:55.970: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Running", Reason="", readiness=false. Elapsed: 21.028224946s Jan 9 11:02:57.983: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Running", Reason="", readiness=false. Elapsed: 23.040860523s Jan 9 11:03:00.004: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Running", Reason="", readiness=false. Elapsed: 25.061420743s Jan 9 11:03:02.032: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Running", Reason="", readiness=false. Elapsed: 27.089442089s Jan 9 11:03:04.056: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Running", Reason="", readiness=false. Elapsed: 29.113792423s Jan 9 11:03:06.066: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Running", Reason="", readiness=false. Elapsed: 31.123282355s Jan 9 11:03:08.076: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Running", Reason="", readiness=false. 
Elapsed: 33.133645575s Jan 9 11:03:10.097: INFO: Pod "pod-subpath-test-configmap-bx8l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.154967189s STEP: Saw pod success Jan 9 11:03:10.097: INFO: Pod "pod-subpath-test-configmap-bx8l" satisfied condition "success or failure" Jan 9 11:03:10.102: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-bx8l container test-container-subpath-configmap-bx8l: STEP: delete the pod Jan 9 11:03:10.203: INFO: Waiting for pod pod-subpath-test-configmap-bx8l to disappear Jan 9 11:03:10.273: INFO: Pod pod-subpath-test-configmap-bx8l no longer exists STEP: Deleting pod pod-subpath-test-configmap-bx8l Jan 9 11:03:10.273: INFO: Deleting pod "pod-subpath-test-configmap-bx8l" in namespace "e2e-tests-subpath-z456m" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:03:10.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-z456m" for this suite. 
Jan 9 11:03:16.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:03:16.557: INFO: namespace: e2e-tests-subpath-z456m, resource: bindings, ignored listing per whitelist Jan 9 11:03:16.636: INFO: namespace e2e-tests-subpath-z456m deletion completed in 6.306480161s • [SLOW TEST:41.972 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:03:16.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0109 11:03:47.654112 9 metrics_grabber.go:81] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 9 11:03:47.654: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:03:47.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-ljq6j" for this suite. 
Jan 9 11:03:58.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:03:58.949: INFO: namespace: e2e-tests-gc-ljq6j, resource: bindings, ignored listing per whitelist Jan 9 11:03:59.785: INFO: namespace e2e-tests-gc-ljq6j deletion completed in 12.127460857s • [SLOW TEST:43.149 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:03:59.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Jan 9 11:04:00.594: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix118405784/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:04:00.723: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-l8dx5" for this suite. Jan 9 11:04:06.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:04:06.885: INFO: namespace: e2e-tests-kubectl-l8dx5, resource: bindings, ignored listing per whitelist Jan 9 11:04:06.938: INFO: namespace e2e-tests-kubectl-l8dx5 deletion completed in 6.203516033s • [SLOW TEST:7.152 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:04:06.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Jan 9 11:04:07.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 9 
11:04:09.907: INFO: stderr: "" Jan 9 11:04:09.907: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:04:09.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jttdf" for this suite. Jan 9 11:04:15.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:04:15.995: INFO: namespace: e2e-tests-kubectl-jttdf, resource: bindings, ignored listing per whitelist Jan 9 11:04:16.083: INFO: namespace e2e-tests-kubectl-jttdf deletion completed in 6.160172436s • [SLOW TEST:9.145 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:04:16.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-c6d9e511-32cf-11ea-ac2d-0242ac110005 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:04:28.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-sn9nm" for this suite. Jan 9 11:04:54.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:04:54.903: INFO: namespace: e2e-tests-configmap-sn9nm, resource: bindings, ignored listing per whitelist Jan 9 11:04:54.963: INFO: namespace e2e-tests-configmap-sn9nm deletion completed in 26.302739525s • [SLOW TEST:38.880 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:04:54.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 9 11:04:55.320: INFO: Number of nodes with available pods: 0 Jan 9 11:04:55.320: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 9 11:04:56.350: INFO: Number of nodes with available pods: 0 Jan 9 11:04:56.350: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 9 11:04:57.373: INFO: Number of nodes with available pods: 0 Jan 9 11:04:57.373: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 9 11:04:58.361: INFO: Number of nodes with available pods: 0 Jan 9 11:04:58.361: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 9 11:04:59.336: INFO: Number of nodes with available pods: 0 Jan 9 11:04:59.336: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 9 11:05:01.637: INFO: Number of nodes with available pods: 0 Jan 9 11:05:01.637: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 9 11:05:02.387: INFO: Number of nodes with available pods: 0 Jan 9 11:05:02.387: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 9 11:05:03.338: INFO: Number of nodes with available pods: 0 Jan 9 11:05:03.338: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 9 11:05:04.334: INFO: Number of nodes with available pods: 0 Jan 9 11:05:04.334: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 9 11:05:05.347: INFO: Number of nodes with available pods: 1 Jan 9 11:05:05.347: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Stop 
a daemon pod, check that the daemon pod is revived.
Jan 9 11:05:05.408: INFO: Number of nodes with available pods: 0
Jan 9 11:05:05.408: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:06.423: INFO: Number of nodes with available pods: 0
Jan 9 11:05:06.423: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:07.425: INFO: Number of nodes with available pods: 0
Jan 9 11:05:07.425: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:08.441: INFO: Number of nodes with available pods: 0
Jan 9 11:05:08.441: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:09.431: INFO: Number of nodes with available pods: 0
Jan 9 11:05:09.431: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:10.431: INFO: Number of nodes with available pods: 0
Jan 9 11:05:10.431: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:11.575: INFO: Number of nodes with available pods: 0
Jan 9 11:05:11.575: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:12.436: INFO: Number of nodes with available pods: 0
Jan 9 11:05:12.436: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:13.424: INFO: Number of nodes with available pods: 0
Jan 9 11:05:13.424: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:14.491: INFO: Number of nodes with available pods: 0
Jan 9 11:05:14.491: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:15.447: INFO: Number of nodes with available pods: 0
Jan 9 11:05:15.447: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:16.720: INFO: Number of nodes with available pods: 0
Jan 9 11:05:16.720: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:17.424: INFO: Number of nodes with available pods: 0
Jan 9 11:05:17.424: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:18.454: INFO: Number of nodes with available pods: 0
Jan 9 11:05:18.454: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:19.431: INFO: Number of nodes with available pods: 0
Jan 9 11:05:19.431: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:20.491: INFO: Number of nodes with available pods: 0
Jan 9 11:05:20.491: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:21.423: INFO: Number of nodes with available pods: 0
Jan 9 11:05:21.423: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:22.532: INFO: Number of nodes with available pods: 0
Jan 9 11:05:22.533: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:23.425: INFO: Number of nodes with available pods: 0
Jan 9 11:05:23.425: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:24.600: INFO: Number of nodes with available pods: 0
Jan 9 11:05:24.601: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:25.425: INFO: Number of nodes with available pods: 0
Jan 9 11:05:25.425: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:26.438: INFO: Number of nodes with available pods: 0
Jan 9 11:05:26.438: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:27.430: INFO: Number of nodes with available pods: 0
Jan 9 11:05:27.430: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:30.066: INFO: Number of nodes with available pods: 0
Jan 9 11:05:30.066: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:30.495: INFO: Number of nodes with available pods: 0
Jan 9 11:05:30.495: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:31.421: INFO: Number of nodes with available pods: 0
Jan 9 11:05:31.422: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:32.430: INFO: Number of nodes with available pods: 0
Jan 9 11:05:32.430: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:05:35.054: INFO: Number of nodes with available pods: 1
Jan 9 11:05:35.054: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-g6zxh, will wait for the garbage collector to delete the pods
Jan 9 11:05:35.486: INFO: Deleting DaemonSet.extensions daemon-set took: 138.696533ms
Jan 9 11:05:35.586: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.490475ms
Jan 9 11:05:42.639: INFO: Number of nodes with available pods: 0
Jan 9 11:05:42.639: INFO: Number of running nodes: 0, number of available pods: 0
Jan 9 11:05:42.654: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-g6zxh/daemonsets","resourceVersion":"17689712"},"items":null}
Jan 9 11:05:42.667: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-g6zxh/pods","resourceVersion":"17689712"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:05:42.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-g6zxh" for this suite.
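For context, the DaemonSet "daemon-set" this test creates and deletes is, in shape, similar to the following manifest. This is a sketch only: the resource name and namespace appear in the log above, but the selector labels and container image here are illustrative assumptions, not taken from the test source.

```yaml
# Sketch of a DaemonSet like the one exercised above.
# Labels and image are placeholders (assumptions), not from the log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: e2e-tests-daemonsets-g6zxh
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set        # assumed label key/value
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1     # placeholder image
```

The polling above ("Number of nodes with available pods") then corresponds to waiting for `status.numberAvailable` to reach the node count, and the teardown waits for the garbage collector to remove the daemon pods after the DaemonSet object is deleted.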
Jan 9 11:05:48.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:05:48.832: INFO: namespace: e2e-tests-daemonsets-g6zxh, resource: bindings, ignored listing per whitelist Jan 9 11:05:48.889: INFO: namespace e2e-tests-daemonsets-g6zxh deletion completed in 6.198208266s • [SLOW TEST:53.926 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:05:48.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-fe0ba419-32cf-11ea-ac2d-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 9 11:05:49.055: INFO: Waiting up to 5m0s for pod "pod-configmaps-fe11b4cb-32cf-11ea-ac2d-0242ac110005" in namespace "e2e-tests-configmap-r49m5" to be "success or failure" Jan 9 11:05:49.080: INFO: Pod "pod-configmaps-fe11b4cb-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.758923ms
Jan 9 11:05:54.054: INFO: Pod "pod-configmaps-fe11b4cb-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.999498214s
Jan 9 11:05:56.072: INFO: Pod "pod-configmaps-fe11b4cb-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.017489365s
Jan 9 11:05:58.090: INFO: Pod "pod-configmaps-fe11b4cb-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.035371469s
Jan 9 11:06:00.420: INFO: Pod "pod-configmaps-fe11b4cb-32cf-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.364898259s
Jan 9 11:06:02.429: INFO: Pod "pod-configmaps-fe11b4cb-32cf-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.37458902s
STEP: Saw pod success
Jan 9 11:06:02.429: INFO: Pod "pod-configmaps-fe11b4cb-32cf-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:06:02.440: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-fe11b4cb-32cf-11ea-ac2d-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 9 11:06:02.693: INFO: Waiting for pod pod-configmaps-fe11b4cb-32cf-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:06:02.705: INFO: Pod pod-configmaps-fe11b4cb-32cf-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:06:02.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-r49m5" for this suite.
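The "mappings and Item mode" being verified here refers to the `items` list of a configMap volume source, which remaps a key to a custom path and sets a per-file mode. A minimal pod sketch of that shape follows; the configMap name prefix and container name come from the log, while the key, path, mode value, and image are illustrative assumptions.

```yaml
# Sketch of a configMap volume with a key-to-path mapping and item mode.
# Key, path, mode, and image are assumptions for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                      # placeholder image
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map   # name prefix seen in the log
      items:
      - key: data-2                     # assumed key
        path: path/to/data-2            # remapped file path
        mode: 0400                      # per-item file mode under test
```

The pod runs to completion ("success or failure" condition above) because it only reads the mapped file once and exits.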
Jan 9 11:06:09.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:06:09.847: INFO: namespace: e2e-tests-configmap-r49m5, resource: bindings, ignored listing per whitelist Jan 9 11:06:09.924: INFO: namespace e2e-tests-configmap-r49m5 deletion completed in 7.208613876s • [SLOW TEST:21.036 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:06:09.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-0afc1c82-32d0-11ea-ac2d-0242ac110005 STEP: Creating secret with name s-test-opt-upd-0afc1d9d-32d0-11ea-ac2d-0242ac110005 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0afc1c82-32d0-11ea-ac2d-0242ac110005 STEP: Updating secret s-test-opt-upd-0afc1d9d-32d0-11ea-ac2d-0242ac110005 STEP: Creating secret with name s-test-opt-create-0afc1de8-32d0-11ea-ac2d-0242ac110005 STEP: waiting to observe update in volume [AfterEach] 
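The "optional updates" scenario above deletes one secret, updates another, and creates a third after the pod is already running, then waits for the volume contents to converge. The mechanism under test is the `optional: true` flag on a secret volume source, sketched below; the secret name prefix matches the log, but the pod name, mount path, and image are illustrative assumptions.

```yaml
# Sketch of an optional secret volume: the pod can start (and keep
# running) even while the referenced secret does not yet exist.
# Pod name, mount path, and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example             # illustrative name
spec:
  containers:
  - name: secret-watcher
    image: busybox                      # placeholder image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: opt-create
      mountPath: /etc/secret-volumes/create
  volumes:
  - name: opt-create
    secret:
      secretName: s-test-opt-create     # name prefix seen in the log
      optional: true                    # missing secret is not an error
```

Once the secret is created, the kubelet's periodic sync projects its keys into the mounted directory, which is what the "waiting to observe update in volume" step polls for.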
[sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:07:51.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-mtt7r" for this suite. Jan 9 11:08:16.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:08:16.284: INFO: namespace: e2e-tests-secrets-mtt7r, resource: bindings, ignored listing per whitelist Jan 9 11:08:16.312: INFO: namespace e2e-tests-secrets-mtt7r deletion completed in 24.309877322s • [SLOW TEST:126.387 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:08:16.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-679pp [It] should 
perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jan 9 11:08:16.631: INFO: Found 0 stateful pods, waiting for 3 Jan 9 11:08:26.648: INFO: Found 1 stateful pods, waiting for 3 Jan 9 11:08:36.892: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 9 11:08:36.892: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 9 11:08:36.892: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 9 11:08:46.648: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 9 11:08:46.648: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 9 11:08:46.648: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 9 11:08:46.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-679pp ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 9 11:08:47.350: INFO: stderr: "I0109 11:08:46.822905 577 log.go:172] (0xc00015c840) (0xc00064f400) Create stream\nI0109 11:08:46.823106 577 log.go:172] (0xc00015c840) (0xc00064f400) Stream added, broadcasting: 1\nI0109 11:08:46.827540 577 log.go:172] (0xc00015c840) Reply frame received for 1\nI0109 11:08:46.827570 577 log.go:172] (0xc00015c840) (0xc00057a000) Create stream\nI0109 11:08:46.827581 577 log.go:172] (0xc00015c840) (0xc00057a000) Stream added, broadcasting: 3\nI0109 11:08:46.828482 577 log.go:172] (0xc00015c840) Reply frame received for 3\nI0109 11:08:46.828503 577 log.go:172] (0xc00015c840) (0xc00064f4a0) Create stream\nI0109 11:08:46.828513 577 log.go:172] (0xc00015c840) (0xc00064f4a0) Stream added, broadcasting: 5\nI0109 11:08:46.829363 577 log.go:172] 
(0xc00015c840) Reply frame received for 5\nI0109 11:08:47.185316 577 log.go:172] (0xc00015c840) Data frame received for 3\nI0109 11:08:47.185508 577 log.go:172] (0xc00057a000) (3) Data frame handling\nI0109 11:08:47.185529 577 log.go:172] (0xc00057a000) (3) Data frame sent\nI0109 11:08:47.340992 577 log.go:172] (0xc00015c840) Data frame received for 1\nI0109 11:08:47.341236 577 log.go:172] (0xc00015c840) (0xc00064f4a0) Stream removed, broadcasting: 5\nI0109 11:08:47.341299 577 log.go:172] (0xc00064f400) (1) Data frame handling\nI0109 11:08:47.341308 577 log.go:172] (0xc00064f400) (1) Data frame sent\nI0109 11:08:47.341404 577 log.go:172] (0xc00015c840) (0xc00057a000) Stream removed, broadcasting: 3\nI0109 11:08:47.341524 577 log.go:172] (0xc00015c840) (0xc00064f400) Stream removed, broadcasting: 1\nI0109 11:08:47.341553 577 log.go:172] (0xc00015c840) Go away received\nI0109 11:08:47.342320 577 log.go:172] (0xc00015c840) (0xc00064f400) Stream removed, broadcasting: 1\nI0109 11:08:47.342374 577 log.go:172] (0xc00015c840) (0xc00057a000) Stream removed, broadcasting: 3\nI0109 11:08:47.342387 577 log.go:172] (0xc00015c840) (0xc00064f4a0) Stream removed, broadcasting: 5\n" Jan 9 11:08:47.350: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 9 11:08:47.350: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 9 11:08:47.421: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 9 11:08:57.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-679pp ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:08:58.010: INFO: stderr: "I0109 11:08:57.737374 598 log.go:172] (0xc0006ea370) 
(0xc0005a32c0) Create stream\nI0109 11:08:57.737579 598 log.go:172] (0xc0006ea370) (0xc0005a32c0) Stream added, broadcasting: 1\nI0109 11:08:57.742775 598 log.go:172] (0xc0006ea370) Reply frame received for 1\nI0109 11:08:57.742820 598 log.go:172] (0xc0006ea370) (0xc0006a8000) Create stream\nI0109 11:08:57.742835 598 log.go:172] (0xc0006ea370) (0xc0006a8000) Stream added, broadcasting: 3\nI0109 11:08:57.744869 598 log.go:172] (0xc0006ea370) Reply frame received for 3\nI0109 11:08:57.744939 598 log.go:172] (0xc0006ea370) (0xc0006a80a0) Create stream\nI0109 11:08:57.744964 598 log.go:172] (0xc0006ea370) (0xc0006a80a0) Stream added, broadcasting: 5\nI0109 11:08:57.748906 598 log.go:172] (0xc0006ea370) Reply frame received for 5\nI0109 11:08:57.885918 598 log.go:172] (0xc0006ea370) Data frame received for 3\nI0109 11:08:57.886396 598 log.go:172] (0xc0006a8000) (3) Data frame handling\nI0109 11:08:57.886459 598 log.go:172] (0xc0006a8000) (3) Data frame sent\nI0109 11:08:58.005224 598 log.go:172] (0xc0006ea370) (0xc0006a8000) Stream removed, broadcasting: 3\nI0109 11:08:58.005310 598 log.go:172] (0xc0006ea370) Data frame received for 1\nI0109 11:08:58.005321 598 log.go:172] (0xc0005a32c0) (1) Data frame handling\nI0109 11:08:58.005332 598 log.go:172] (0xc0005a32c0) (1) Data frame sent\nI0109 11:08:58.005342 598 log.go:172] (0xc0006ea370) (0xc0005a32c0) Stream removed, broadcasting: 1\nI0109 11:08:58.005392 598 log.go:172] (0xc0006ea370) (0xc0006a80a0) Stream removed, broadcasting: 5\nI0109 11:08:58.005464 598 log.go:172] (0xc0006ea370) Go away received\nI0109 11:08:58.005642 598 log.go:172] (0xc0006ea370) (0xc0005a32c0) Stream removed, broadcasting: 1\nI0109 11:08:58.005665 598 log.go:172] (0xc0006ea370) (0xc0006a8000) Stream removed, broadcasting: 3\nI0109 11:08:58.005682 598 log.go:172] (0xc0006ea370) (0xc0006a80a0) Stream removed, broadcasting: 5\n" Jan 9 11:08:58.011: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 9 11:08:58.011: INFO: 
stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 9 11:09:08.058: INFO: Waiting for StatefulSet e2e-tests-statefulset-679pp/ss2 to complete update Jan 9 11:09:08.058: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 9 11:09:08.058: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 9 11:09:08.058: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 9 11:09:18.083: INFO: Waiting for StatefulSet e2e-tests-statefulset-679pp/ss2 to complete update Jan 9 11:09:18.083: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 9 11:09:18.083: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 9 11:09:29.081: INFO: Waiting for StatefulSet e2e-tests-statefulset-679pp/ss2 to complete update Jan 9 11:09:29.081: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 9 11:09:29.081: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 9 11:09:38.073: INFO: Waiting for StatefulSet e2e-tests-statefulset-679pp/ss2 to complete update Jan 9 11:09:38.073: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 9 11:09:48.075: INFO: Waiting for StatefulSet e2e-tests-statefulset-679pp/ss2 to complete update STEP: Rolling back to a previous revision Jan 9 11:09:58.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-679pp ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' 
Jan 9 11:09:58.905: INFO: stderr: "I0109 11:09:58.373481 620 log.go:172] (0xc00014c840) (0xc00063f220) Create stream\nI0109 11:09:58.373657 620 log.go:172] (0xc00014c840) (0xc00063f220) Stream added, broadcasting: 1\nI0109 11:09:58.380621 620 log.go:172] (0xc00014c840) Reply frame received for 1\nI0109 11:09:58.380655 620 log.go:172] (0xc00014c840) (0xc000754000) Create stream\nI0109 11:09:58.380686 620 log.go:172] (0xc00014c840) (0xc000754000) Stream added, broadcasting: 3\nI0109 11:09:58.381781 620 log.go:172] (0xc00014c840) Reply frame received for 3\nI0109 11:09:58.381815 620 log.go:172] (0xc00014c840) (0xc00063f2c0) Create stream\nI0109 11:09:58.381834 620 log.go:172] (0xc00014c840) (0xc00063f2c0) Stream added, broadcasting: 5\nI0109 11:09:58.382937 620 log.go:172] (0xc00014c840) Reply frame received for 5\nI0109 11:09:58.733839 620 log.go:172] (0xc00014c840) Data frame received for 3\nI0109 11:09:58.733893 620 log.go:172] (0xc000754000) (3) Data frame handling\nI0109 11:09:58.733903 620 log.go:172] (0xc000754000) (3) Data frame sent\nI0109 11:09:58.886972 620 log.go:172] (0xc00014c840) (0xc000754000) Stream removed, broadcasting: 3\nI0109 11:09:58.887779 620 log.go:172] (0xc00014c840) Data frame received for 1\nI0109 11:09:58.887840 620 log.go:172] (0xc00014c840) (0xc00063f2c0) Stream removed, broadcasting: 5\nI0109 11:09:58.887911 620 log.go:172] (0xc00063f220) (1) Data frame handling\nI0109 11:09:58.887939 620 log.go:172] (0xc00063f220) (1) Data frame sent\nI0109 11:09:58.887956 620 log.go:172] (0xc00014c840) (0xc00063f220) Stream removed, broadcasting: 1\nI0109 11:09:58.887983 620 log.go:172] (0xc00014c840) Go away received\nI0109 11:09:58.889985 620 log.go:172] (0xc00014c840) (0xc00063f220) Stream removed, broadcasting: 1\nI0109 11:09:58.890437 620 log.go:172] (0xc00014c840) (0xc000754000) Stream removed, broadcasting: 3\nI0109 11:09:58.890481 620 log.go:172] (0xc00014c840) (0xc00063f2c0) Stream removed, broadcasting: 5\n" Jan 9 11:09:58.905: INFO: 
stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 9 11:09:58.905: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 9 11:10:09.033: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 9 11:10:19.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-679pp ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:10:19.870: INFO: stderr: "I0109 11:10:19.403323 642 log.go:172] (0xc000734160) (0xc0005d4640) Create stream\nI0109 11:10:19.403602 642 log.go:172] (0xc000734160) (0xc0005d4640) Stream added, broadcasting: 1\nI0109 11:10:19.408566 642 log.go:172] (0xc000734160) Reply frame received for 1\nI0109 11:10:19.408602 642 log.go:172] (0xc000734160) (0xc0004f0d20) Create stream\nI0109 11:10:19.408612 642 log.go:172] (0xc000734160) (0xc0004f0d20) Stream added, broadcasting: 3\nI0109 11:10:19.409466 642 log.go:172] (0xc000734160) Reply frame received for 3\nI0109 11:10:19.409516 642 log.go:172] (0xc000734160) (0xc000512000) Create stream\nI0109 11:10:19.409528 642 log.go:172] (0xc000734160) (0xc000512000) Stream added, broadcasting: 5\nI0109 11:10:19.411252 642 log.go:172] (0xc000734160) Reply frame received for 5\nI0109 11:10:19.514225 642 log.go:172] (0xc000734160) Data frame received for 3\nI0109 11:10:19.514310 642 log.go:172] (0xc0004f0d20) (3) Data frame handling\nI0109 11:10:19.514335 642 log.go:172] (0xc0004f0d20) (3) Data frame sent\nI0109 11:10:19.851071 642 log.go:172] (0xc000734160) (0xc0004f0d20) Stream removed, broadcasting: 3\nI0109 11:10:19.851448 642 log.go:172] (0xc000734160) Data frame received for 1\nI0109 11:10:19.851511 642 log.go:172] (0xc0005d4640) (1) Data frame handling\nI0109 11:10:19.851550 642 log.go:172] (0xc000734160) (0xc000512000) Stream removed, broadcasting: 5\nI0109 11:10:19.851712 642 log.go:172] 
(0xc0005d4640) (1) Data frame sent\nI0109 11:10:19.851731 642 log.go:172] (0xc000734160) (0xc0005d4640) Stream removed, broadcasting: 1\nI0109 11:10:19.851764 642 log.go:172] (0xc000734160) Go away received\nI0109 11:10:19.852877 642 log.go:172] (0xc000734160) (0xc0005d4640) Stream removed, broadcasting: 1\nI0109 11:10:19.852967 642 log.go:172] (0xc000734160) (0xc0004f0d20) Stream removed, broadcasting: 3\nI0109 11:10:19.853002 642 log.go:172] (0xc000734160) (0xc000512000) Stream removed, broadcasting: 5\n" Jan 9 11:10:19.871: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 9 11:10:19.871: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 9 11:10:19.931: INFO: Waiting for StatefulSet e2e-tests-statefulset-679pp/ss2 to complete update Jan 9 11:10:19.931: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 11:10:19.931: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 11:10:19.931: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 11:10:29.986: INFO: Waiting for StatefulSet e2e-tests-statefulset-679pp/ss2 to complete update Jan 9 11:10:29.986: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 11:10:29.986: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 11:10:29.986: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 11:10:39.972: INFO: Waiting for StatefulSet e2e-tests-statefulset-679pp/ss2 to complete update Jan 9 11:10:39.972: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-0 to have revision 
ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 11:10:39.972: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 11:10:49.966: INFO: Waiting for StatefulSet e2e-tests-statefulset-679pp/ss2 to complete update Jan 9 11:10:49.966: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 11:10:49.966: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 11:10:59.976: INFO: Waiting for StatefulSet e2e-tests-statefulset-679pp/ss2 to complete update Jan 9 11:10:59.976: INFO: Waiting for Pod e2e-tests-statefulset-679pp/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 9 11:11:09.957: INFO: Waiting for StatefulSet e2e-tests-statefulset-679pp/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 9 11:11:19.952: INFO: Deleting all statefulset in ns e2e-tests-statefulset-679pp Jan 9 11:11:19.957: INFO: Scaling statefulset ss2 to 0 Jan 9 11:12:00.004: INFO: Waiting for statefulset status.replicas updated to 0 Jan 9 11:12:00.014: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:12:00.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-679pp" for this suite. 
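The rolling update and rollback above are driven by editing the pod template image and letting the StatefulSet controller replace pods in reverse ordinal order under the RollingUpdate strategy. A sketch of the StatefulSet's shape follows: the name `ss2`, replica count of 3, service name `test`, and the nginx image are taken from the log, while the selector labels are assumptions.

```yaml
# Sketch of the StatefulSet driven by this test. Labels are assumed;
# name, serviceName, replicas, and image appear in the log above.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
  namespace: e2e-tests-statefulset-679pp
spec:
  serviceName: test                     # headless service created in BeforeEach
  replicas: 3
  selector:
    matchLabels:
      app: ss2                          # assumed label
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # updated to 1.15-alpine, then rolled back
  updateStrategy:
    type: RollingUpdate
```

Each template change produces a new controller revision (the `ss2-6c5cd755cd` / `ss2-7c9b54fd4c` hashes in the log); the "Waiting for Pod ... to have revision" lines poll each pod's revision label until it matches the update revision.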
Jan 9 11:12:08.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:12:08.247: INFO: namespace: e2e-tests-statefulset-679pp, resource: bindings, ignored listing per whitelist Jan 9 11:12:08.388: INFO: namespace e2e-tests-statefulset-679pp deletion completed in 8.252194529s • [SLOW TEST:232.076 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:12:08.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-vr85h [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-vr85h STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-vr85h Jan 9 11:12:08.737: INFO: Found 0 stateful pods, waiting for 1 Jan 9 11:12:18.764: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 9 11:12:18.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 9 11:12:19.432: INFO: stderr: "I0109 11:12:19.093512 665 log.go:172] (0xc00035e4d0) (0xc0007e8640) Create stream\nI0109 11:12:19.093692 665 log.go:172] (0xc00035e4d0) (0xc0007e8640) Stream added, broadcasting: 1\nI0109 11:12:19.098974 665 log.go:172] (0xc00035e4d0) Reply frame received for 1\nI0109 11:12:19.099010 665 log.go:172] (0xc00035e4d0) (0xc00062ae60) Create stream\nI0109 11:12:19.099026 665 log.go:172] (0xc00035e4d0) (0xc00062ae60) Stream added, broadcasting: 3\nI0109 11:12:19.101663 665 log.go:172] (0xc00035e4d0) Reply frame received for 3\nI0109 11:12:19.101784 665 log.go:172] (0xc00035e4d0) (0xc000632000) Create stream\nI0109 11:12:19.101808 665 log.go:172] (0xc00035e4d0) (0xc000632000) Stream added, broadcasting: 5\nI0109 11:12:19.103051 665 log.go:172] (0xc00035e4d0) Reply frame received for 5\nI0109 11:12:19.299013 665 log.go:172] (0xc00035e4d0) Data frame received for 3\nI0109 11:12:19.299053 665 log.go:172] (0xc00062ae60) (3) Data frame handling\nI0109 11:12:19.299070 665 log.go:172] (0xc00062ae60) (3) Data frame sent\nI0109 11:12:19.417550 665 log.go:172] (0xc00035e4d0) Data frame received for 1\nI0109 11:12:19.417687 665 log.go:172] (0xc0007e8640) (1) Data frame 
handling\nI0109 11:12:19.417730 665 log.go:172] (0xc0007e8640) (1) Data frame sent\nI0109 11:12:19.417777 665 log.go:172] (0xc00035e4d0) (0xc0007e8640) Stream removed, broadcasting: 1\nI0109 11:12:19.419430 665 log.go:172] (0xc00035e4d0) (0xc00062ae60) Stream removed, broadcasting: 3\nI0109 11:12:19.419555 665 log.go:172] (0xc00035e4d0) (0xc000632000) Stream removed, broadcasting: 5\nI0109 11:12:19.419627 665 log.go:172] (0xc00035e4d0) (0xc0007e8640) Stream removed, broadcasting: 1\nI0109 11:12:19.419638 665 log.go:172] (0xc00035e4d0) (0xc00062ae60) Stream removed, broadcasting: 3\nI0109 11:12:19.419649 665 log.go:172] (0xc00035e4d0) (0xc000632000) Stream removed, broadcasting: 5\nI0109 11:12:19.419760 665 log.go:172] (0xc00035e4d0) Go away received\n" Jan 9 11:12:19.433: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 9 11:12:19.433: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 9 11:12:19.464: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 9 11:12:29.482: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 9 11:12:29.482: INFO: Waiting for statefulset status.replicas updated to 0 Jan 9 11:12:29.535: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999628s Jan 9 11:12:30.555: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.967856586s Jan 9 11:12:31.576: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.947919766s Jan 9 11:12:32.611: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.926106427s Jan 9 11:12:33.628: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.892140086s Jan 9 11:12:34.644: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.875217728s Jan 9 11:12:35.660: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.858738169s Jan 9 
11:12:36.685: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.84223467s Jan 9 11:12:37.699: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.81802051s Jan 9 11:12:38.724: INFO: Verifying statefulset ss doesn't scale past 1 for another 803.586035ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-vr85h Jan 9 11:12:39.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:12:40.318: INFO: stderr: "I0109 11:12:40.007912 687 log.go:172] (0xc0005e4420) (0xc0000212c0) Create stream\nI0109 11:12:40.008235 687 log.go:172] (0xc0005e4420) (0xc0000212c0) Stream added, broadcasting: 1\nI0109 11:12:40.021549 687 log.go:172] (0xc0005e4420) Reply frame received for 1\nI0109 11:12:40.021607 687 log.go:172] (0xc0005e4420) (0xc000760000) Create stream\nI0109 11:12:40.021621 687 log.go:172] (0xc0005e4420) (0xc000760000) Stream added, broadcasting: 3\nI0109 11:12:40.022967 687 log.go:172] (0xc0005e4420) Reply frame received for 3\nI0109 11:12:40.022996 687 log.go:172] (0xc0005e4420) (0xc000021360) Create stream\nI0109 11:12:40.023002 687 log.go:172] (0xc0005e4420) (0xc000021360) Stream added, broadcasting: 5\nI0109 11:12:40.023882 687 log.go:172] (0xc0005e4420) Reply frame received for 5\nI0109 11:12:40.197191 687 log.go:172] (0xc0005e4420) Data frame received for 3\nI0109 11:12:40.197289 687 log.go:172] (0xc000760000) (3) Data frame handling\nI0109 11:12:40.197311 687 log.go:172] (0xc000760000) (3) Data frame sent\nI0109 11:12:40.309856 687 log.go:172] (0xc0005e4420) (0xc000760000) Stream removed, broadcasting: 3\nI0109 11:12:40.310024 687 log.go:172] (0xc0005e4420) Data frame received for 1\nI0109 11:12:40.310058 687 log.go:172] (0xc0000212c0) (1) Data frame handling\nI0109 11:12:40.310091 687 log.go:172] (0xc0000212c0) (1) 
Data frame sent\nI0109 11:12:40.310232 687 log.go:172] (0xc0005e4420) (0xc0000212c0) Stream removed, broadcasting: 1\nI0109 11:12:40.310306 687 log.go:172] (0xc0005e4420) (0xc000021360) Stream removed, broadcasting: 5\nI0109 11:12:40.310421 687 log.go:172] (0xc0005e4420) Go away received\nI0109 11:12:40.310682 687 log.go:172] (0xc0005e4420) (0xc0000212c0) Stream removed, broadcasting: 1\nI0109 11:12:40.310705 687 log.go:172] (0xc0005e4420) (0xc000760000) Stream removed, broadcasting: 3\nI0109 11:12:40.310722 687 log.go:172] (0xc0005e4420) (0xc000021360) Stream removed, broadcasting: 5\n" Jan 9 11:12:40.318: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 9 11:12:40.318: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 9 11:12:40.334: INFO: Found 1 stateful pods, waiting for 3 Jan 9 11:12:50.358: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 9 11:12:50.359: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 9 11:12:50.359: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 9 11:13:00.365: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 9 11:13:00.365: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 9 11:13:00.365: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 9 11:13:00.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 9 11:13:01.230: INFO: stderr: "I0109 11:13:00.681826 709 log.go:172] (0xc00088a210) (0xc0008825a0) Create 
stream\nI0109 11:13:00.682309 709 log.go:172] (0xc00088a210) (0xc0008825a0) Stream added, broadcasting: 1\nI0109 11:13:00.691028 709 log.go:172] (0xc00088a210) Reply frame received for 1\nI0109 11:13:00.691090 709 log.go:172] (0xc00088a210) (0xc0005eec80) Create stream\nI0109 11:13:00.691100 709 log.go:172] (0xc00088a210) (0xc0005eec80) Stream added, broadcasting: 3\nI0109 11:13:00.692656 709 log.go:172] (0xc00088a210) Reply frame received for 3\nI0109 11:13:00.692685 709 log.go:172] (0xc00088a210) (0xc000710000) Create stream\nI0109 11:13:00.692696 709 log.go:172] (0xc00088a210) (0xc000710000) Stream added, broadcasting: 5\nI0109 11:13:00.693958 709 log.go:172] (0xc00088a210) Reply frame received for 5\nI0109 11:13:00.947178 709 log.go:172] (0xc00088a210) Data frame received for 3\nI0109 11:13:00.947363 709 log.go:172] (0xc0005eec80) (3) Data frame handling\nI0109 11:13:00.947412 709 log.go:172] (0xc0005eec80) (3) Data frame sent\nI0109 11:13:01.217914 709 log.go:172] (0xc00088a210) (0xc0005eec80) Stream removed, broadcasting: 3\nI0109 11:13:01.218289 709 log.go:172] (0xc00088a210) Data frame received for 1\nI0109 11:13:01.218389 709 log.go:172] (0xc00088a210) (0xc000710000) Stream removed, broadcasting: 5\nI0109 11:13:01.218475 709 log.go:172] (0xc0008825a0) (1) Data frame handling\nI0109 11:13:01.218532 709 log.go:172] (0xc0008825a0) (1) Data frame sent\nI0109 11:13:01.218611 709 log.go:172] (0xc00088a210) (0xc0008825a0) Stream removed, broadcasting: 1\nI0109 11:13:01.218691 709 log.go:172] (0xc00088a210) Go away received\nI0109 11:13:01.219182 709 log.go:172] (0xc00088a210) (0xc0008825a0) Stream removed, broadcasting: 1\nI0109 11:13:01.219266 709 log.go:172] (0xc00088a210) (0xc0005eec80) Stream removed, broadcasting: 3\nI0109 11:13:01.219307 709 log.go:172] (0xc00088a210) (0xc000710000) Stream removed, broadcasting: 5\n" Jan 9 11:13:01.230: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 9 11:13:01.230: INFO: stdout of mv -v 
/usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 9 11:13:01.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 9 11:13:01.866: INFO: stderr: "I0109 11:13:01.486509 731 log.go:172] (0xc000624370) (0xc000704780) Create stream\nI0109 11:13:01.486785 731 log.go:172] (0xc000624370) (0xc000704780) Stream added, broadcasting: 1\nI0109 11:13:01.493110 731 log.go:172] (0xc000624370) Reply frame received for 1\nI0109 11:13:01.493254 731 log.go:172] (0xc000624370) (0xc0007ab0e0) Create stream\nI0109 11:13:01.493299 731 log.go:172] (0xc000624370) (0xc0007ab0e0) Stream added, broadcasting: 3\nI0109 11:13:01.497297 731 log.go:172] (0xc000624370) Reply frame received for 3\nI0109 11:13:01.497400 731 log.go:172] (0xc000624370) (0xc000492c80) Create stream\nI0109 11:13:01.497412 731 log.go:172] (0xc000624370) (0xc000492c80) Stream added, broadcasting: 5\nI0109 11:13:01.499157 731 log.go:172] (0xc000624370) Reply frame received for 5\nI0109 11:13:01.733630 731 log.go:172] (0xc000624370) Data frame received for 3\nI0109 11:13:01.733707 731 log.go:172] (0xc0007ab0e0) (3) Data frame handling\nI0109 11:13:01.733721 731 log.go:172] (0xc0007ab0e0) (3) Data frame sent\nI0109 11:13:01.854649 731 log.go:172] (0xc000624370) Data frame received for 1\nI0109 11:13:01.854836 731 log.go:172] (0xc000704780) (1) Data frame handling\nI0109 11:13:01.854855 731 log.go:172] (0xc000704780) (1) Data frame sent\nI0109 11:13:01.855105 731 log.go:172] (0xc000624370) (0xc000492c80) Stream removed, broadcasting: 5\nI0109 11:13:01.855195 731 log.go:172] (0xc000624370) (0xc000704780) Stream removed, broadcasting: 1\nI0109 11:13:01.855473 731 log.go:172] (0xc000624370) (0xc0007ab0e0) Stream removed, broadcasting: 3\nI0109 11:13:01.855695 731 log.go:172] (0xc000624370) Go away received\nI0109 
11:13:01.856243 731 log.go:172] (0xc000624370) (0xc000704780) Stream removed, broadcasting: 1\nI0109 11:13:01.856272 731 log.go:172] (0xc000624370) (0xc0007ab0e0) Stream removed, broadcasting: 3\nI0109 11:13:01.856281 731 log.go:172] (0xc000624370) (0xc000492c80) Stream removed, broadcasting: 5\n" Jan 9 11:13:01.866: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 9 11:13:01.866: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 9 11:13:01.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 9 11:13:02.877: INFO: stderr: "I0109 11:13:02.301502 753 log.go:172] (0xc0001388f0) (0xc0005fd4a0) Create stream\nI0109 11:13:02.301780 753 log.go:172] (0xc0001388f0) (0xc0005fd4a0) Stream added, broadcasting: 1\nI0109 11:13:02.307403 753 log.go:172] (0xc0001388f0) Reply frame received for 1\nI0109 11:13:02.307462 753 log.go:172] (0xc0001388f0) (0xc0005fd540) Create stream\nI0109 11:13:02.307484 753 log.go:172] (0xc0001388f0) (0xc0005fd540) Stream added, broadcasting: 3\nI0109 11:13:02.309193 753 log.go:172] (0xc0001388f0) Reply frame received for 3\nI0109 11:13:02.309297 753 log.go:172] (0xc0001388f0) (0xc0006ee000) Create stream\nI0109 11:13:02.309319 753 log.go:172] (0xc0001388f0) (0xc0006ee000) Stream added, broadcasting: 5\nI0109 11:13:02.311227 753 log.go:172] (0xc0001388f0) Reply frame received for 5\nI0109 11:13:02.457249 753 log.go:172] (0xc0001388f0) Data frame received for 3\nI0109 11:13:02.457447 753 log.go:172] (0xc0005fd540) (3) Data frame handling\nI0109 11:13:02.457481 753 log.go:172] (0xc0005fd540) (3) Data frame sent\nI0109 11:13:02.859919 753 log.go:172] (0xc0001388f0) (0xc0006ee000) Stream removed, broadcasting: 5\nI0109 11:13:02.860148 753 log.go:172] (0xc0001388f0) Data frame received for 1\nI0109 
11:13:02.860163 753 log.go:172] (0xc0005fd4a0) (1) Data frame handling\nI0109 11:13:02.860179 753 log.go:172] (0xc0005fd4a0) (1) Data frame sent\nI0109 11:13:02.860223 753 log.go:172] (0xc0001388f0) (0xc0005fd4a0) Stream removed, broadcasting: 1\nI0109 11:13:02.860426 753 log.go:172] (0xc0001388f0) (0xc0005fd540) Stream removed, broadcasting: 3\nI0109 11:13:02.860487 753 log.go:172] (0xc0001388f0) Go away received\nI0109 11:13:02.861106 753 log.go:172] (0xc0001388f0) (0xc0005fd4a0) Stream removed, broadcasting: 1\nI0109 11:13:02.861164 753 log.go:172] (0xc0001388f0) (0xc0005fd540) Stream removed, broadcasting: 3\nI0109 11:13:02.861204 753 log.go:172] (0xc0001388f0) (0xc0006ee000) Stream removed, broadcasting: 5\n" Jan 9 11:13:02.877: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 9 11:13:02.877: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 9 11:13:02.877: INFO: Waiting for statefulset status.replicas updated to 0 Jan 9 11:13:02.993: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 9 11:13:02.993: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 9 11:13:02.993: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 9 11:13:03.023: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999998829s Jan 9 11:13:04.148: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985077235s Jan 9 11:13:05.177: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.860048193s Jan 9 11:13:06.191: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.831365227s Jan 9 11:13:07.206: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.816658912s Jan 9 11:13:08.221: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.802099561s Jan 9 11:13:09.237: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 3.786574616s Jan 9 11:13:10.294: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.770562315s Jan 9 11:13:11.316: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.714029087s Jan 9 11:13:12.329: INFO: Verifying statefulset ss doesn't scale past 3 for another 692.069595ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-vr85h Jan 9 11:13:13.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:13:14.376: INFO: stderr: "I0109 11:13:13.581863 774 log.go:172] (0xc0007242c0) (0xc000687400) Create stream\nI0109 11:13:13.582264 774 log.go:172] (0xc0007242c0) (0xc000687400) Stream added, broadcasting: 1\nI0109 11:13:13.651567 774 log.go:172] (0xc0007242c0) Reply frame received for 1\nI0109 11:13:13.651906 774 log.go:172] (0xc0007242c0) (0xc0004d6000) Create stream\nI0109 11:13:13.651938 774 log.go:172] (0xc0007242c0) (0xc0004d6000) Stream added, broadcasting: 3\nI0109 11:13:13.659200 774 log.go:172] (0xc0007242c0) Reply frame received for 3\nI0109 11:13:13.659265 774 log.go:172] (0xc0007242c0) (0xc0006874a0) Create stream\nI0109 11:13:13.659275 774 log.go:172] (0xc0007242c0) (0xc0006874a0) Stream added, broadcasting: 5\nI0109 11:13:13.663111 774 log.go:172] (0xc0007242c0) Reply frame received for 5\nI0109 11:13:13.960298 774 log.go:172] (0xc0007242c0) Data frame received for 3\nI0109 11:13:13.960422 774 log.go:172] (0xc0004d6000) (3) Data frame handling\nI0109 11:13:13.960448 774 log.go:172] (0xc0004d6000) (3) Data frame sent\nI0109 11:13:14.365546 774 log.go:172] (0xc0007242c0) (0xc0004d6000) Stream removed, broadcasting: 3\nI0109 11:13:14.365722 774 log.go:172] (0xc0007242c0) Data frame received for 1\nI0109 11:13:14.365737 774 log.go:172] (0xc000687400) (1) Data 
frame handling\nI0109 11:13:14.365756 774 log.go:172] (0xc000687400) (1) Data frame sent\nI0109 11:13:14.365833 774 log.go:172] (0xc0007242c0) (0xc000687400) Stream removed, broadcasting: 1\nI0109 11:13:14.366214 774 log.go:172] (0xc0007242c0) (0xc0006874a0) Stream removed, broadcasting: 5\nI0109 11:13:14.366473 774 log.go:172] (0xc0007242c0) Go away received\nI0109 11:13:14.366620 774 log.go:172] (0xc0007242c0) (0xc000687400) Stream removed, broadcasting: 1\nI0109 11:13:14.366682 774 log.go:172] (0xc0007242c0) (0xc0004d6000) Stream removed, broadcasting: 3\nI0109 11:13:14.366698 774 log.go:172] (0xc0007242c0) (0xc0006874a0) Stream removed, broadcasting: 5\n" Jan 9 11:13:14.376: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 9 11:13:14.376: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 9 11:13:14.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:13:14.832: INFO: stderr: "I0109 11:13:14.583721 795 log.go:172] (0xc00015c840) (0xc0005a52c0) Create stream\nI0109 11:13:14.583930 795 log.go:172] (0xc00015c840) (0xc0005a52c0) Stream added, broadcasting: 1\nI0109 11:13:14.590070 795 log.go:172] (0xc00015c840) Reply frame received for 1\nI0109 11:13:14.590107 795 log.go:172] (0xc00015c840) (0xc000710000) Create stream\nI0109 11:13:14.590120 795 log.go:172] (0xc00015c840) (0xc000710000) Stream added, broadcasting: 3\nI0109 11:13:14.591371 795 log.go:172] (0xc00015c840) Reply frame received for 3\nI0109 11:13:14.591390 795 log.go:172] (0xc00015c840) (0xc0007100a0) Create stream\nI0109 11:13:14.591398 795 log.go:172] (0xc00015c840) (0xc0007100a0) Stream added, broadcasting: 5\nI0109 11:13:14.594282 795 log.go:172] (0xc00015c840) Reply frame received for 5\nI0109 11:13:14.707116 795 log.go:172] 
(0xc00015c840) Data frame received for 3\nI0109 11:13:14.707170 795 log.go:172] (0xc000710000) (3) Data frame handling\nI0109 11:13:14.707187 795 log.go:172] (0xc000710000) (3) Data frame sent\nI0109 11:13:14.826960 795 log.go:172] (0xc00015c840) (0xc000710000) Stream removed, broadcasting: 3\nI0109 11:13:14.827078 795 log.go:172] (0xc00015c840) Data frame received for 1\nI0109 11:13:14.827094 795 log.go:172] (0xc0005a52c0) (1) Data frame handling\nI0109 11:13:14.827104 795 log.go:172] (0xc0005a52c0) (1) Data frame sent\nI0109 11:13:14.827113 795 log.go:172] (0xc00015c840) (0xc0005a52c0) Stream removed, broadcasting: 1\nI0109 11:13:14.827327 795 log.go:172] (0xc00015c840) (0xc0007100a0) Stream removed, broadcasting: 5\nI0109 11:13:14.827352 795 log.go:172] (0xc00015c840) (0xc0005a52c0) Stream removed, broadcasting: 1\nI0109 11:13:14.827370 795 log.go:172] (0xc00015c840) (0xc000710000) Stream removed, broadcasting: 3\nI0109 11:13:14.827379 795 log.go:172] (0xc00015c840) (0xc0007100a0) Stream removed, broadcasting: 5\n" Jan 9 11:13:14.832: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 9 11:13:14.832: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 9 11:13:14.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:13:15.334: INFO: rc: 126 Jan 9 11:13:15.334: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown I0109 11:13:15.093305 817 log.go:172] (0xc000138840) (0xc00079c640) Create stream I0109 11:13:15.093426 817 log.go:172] 
(0xc000138840) (0xc00079c640) Stream added, broadcasting: 1 I0109 11:13:15.098069 817 log.go:172] (0xc000138840) Reply frame received for 1 I0109 11:13:15.098125 817 log.go:172] (0xc000138840) (0xc0005b8be0) Create stream I0109 11:13:15.098133 817 log.go:172] (0xc000138840) (0xc0005b8be0) Stream added, broadcasting: 3 I0109 11:13:15.099141 817 log.go:172] (0xc000138840) Reply frame received for 3 I0109 11:13:15.099179 817 log.go:172] (0xc000138840) (0xc0006d0000) Create stream I0109 11:13:15.099201 817 log.go:172] (0xc000138840) (0xc0006d0000) Stream added, broadcasting: 5 I0109 11:13:15.100561 817 log.go:172] (0xc000138840) Reply frame received for 5 I0109 11:13:15.328059 817 log.go:172] (0xc000138840) Data frame received for 3 I0109 11:13:15.328152 817 log.go:172] (0xc0005b8be0) (3) Data frame handling I0109 11:13:15.328167 817 log.go:172] (0xc0005b8be0) (3) Data frame sent I0109 11:13:15.329324 817 log.go:172] (0xc000138840) Data frame received for 1 I0109 11:13:15.329361 817 log.go:172] (0xc00079c640) (1) Data frame handling I0109 11:13:15.329371 817 log.go:172] (0xc00079c640) (1) Data frame sent I0109 11:13:15.329529 817 log.go:172] (0xc000138840) (0xc00079c640) Stream removed, broadcasting: 1 I0109 11:13:15.329957 817 log.go:172] (0xc000138840) (0xc0005b8be0) Stream removed, broadcasting: 3 I0109 11:13:15.329991 817 log.go:172] (0xc000138840) (0xc0006d0000) Stream removed, broadcasting: 5 I0109 11:13:15.330019 817 log.go:172] (0xc000138840) (0xc00079c640) Stream removed, broadcasting: 1 I0109 11:13:15.330027 817 log.go:172] (0xc000138840) (0xc0005b8be0) Stream removed, broadcasting: 3 I0109 11:13:15.330040 817 log.go:172] (0xc000138840) (0xc0006d0000) Stream removed, broadcasting: 5 command terminated with exit code 126 [] 0xc0018b5ec0 exit status 126 true [0xc000e2a168 0xc000e2a180 0xc000e2a198] [0xc000e2a168 0xc000e2a180 0xc000e2a198] [0xc000e2a178 0xc000e2a190] [0x935700 0x935700] 0xc001a1eb40 }: Command stdout: OCI runtime exec failed: exec failed: cannot 
exec a container that has stopped: unknown stderr: I0109 11:13:15.093305 817 log.go:172] (0xc000138840) (0xc00079c640) Create stream I0109 11:13:15.093426 817 log.go:172] (0xc000138840) (0xc00079c640) Stream added, broadcasting: 1 I0109 11:13:15.098069 817 log.go:172] (0xc000138840) Reply frame received for 1 I0109 11:13:15.098125 817 log.go:172] (0xc000138840) (0xc0005b8be0) Create stream I0109 11:13:15.098133 817 log.go:172] (0xc000138840) (0xc0005b8be0) Stream added, broadcasting: 3 I0109 11:13:15.099141 817 log.go:172] (0xc000138840) Reply frame received for 3 I0109 11:13:15.099179 817 log.go:172] (0xc000138840) (0xc0006d0000) Create stream I0109 11:13:15.099201 817 log.go:172] (0xc000138840) (0xc0006d0000) Stream added, broadcasting: 5 I0109 11:13:15.100561 817 log.go:172] (0xc000138840) Reply frame received for 5 I0109 11:13:15.328059 817 log.go:172] (0xc000138840) Data frame received for 3 I0109 11:13:15.328152 817 log.go:172] (0xc0005b8be0) (3) Data frame handling I0109 11:13:15.328167 817 log.go:172] (0xc0005b8be0) (3) Data frame sent I0109 11:13:15.329324 817 log.go:172] (0xc000138840) Data frame received for 1 I0109 11:13:15.329361 817 log.go:172] (0xc00079c640) (1) Data frame handling I0109 11:13:15.329371 817 log.go:172] (0xc00079c640) (1) Data frame sent I0109 11:13:15.329529 817 log.go:172] (0xc000138840) (0xc00079c640) Stream removed, broadcasting: 1 I0109 11:13:15.329957 817 log.go:172] (0xc000138840) (0xc0005b8be0) Stream removed, broadcasting: 3 I0109 11:13:15.329991 817 log.go:172] (0xc000138840) (0xc0006d0000) Stream removed, broadcasting: 5 I0109 11:13:15.330019 817 log.go:172] (0xc000138840) (0xc00079c640) Stream removed, broadcasting: 1 I0109 11:13:15.330027 817 log.go:172] (0xc000138840) (0xc0005b8be0) Stream removed, broadcasting: 3 I0109 11:13:15.330040 817 log.go:172] (0xc000138840) (0xc0006d0000) Stream removed, broadcasting: 5 command terminated with exit code 126 error: exit status 126 Jan 9 11:13:25.335: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:13:26.022: INFO: rc: 1 Jan 9 11:13:26.022: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002044000 exit status 1 true [0xc000e2a1a0 0xc000e2a1b8 0xc000e2a1d0] [0xc000e2a1a0 0xc000e2a1b8 0xc000e2a1d0] [0xc000e2a1b0 0xc000e2a1c8] [0x935700 0x935700] 0xc001a1faa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 9 11:13:36.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:13:36.190: INFO: rc: 1 Jan 9 11:13:36.190: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000c6b2c0 exit status 1 true [0xc0013e8268 0xc0013e82a8 0xc0013e8308] [0xc0013e8268 0xc0013e82a8 0xc0013e8308] [0xc0013e8288 0xc0013e8300] [0x935700 0x935700] 0xc00167ac60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 9 11:13:46.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:13:46.309: INFO: rc: 1 Jan 9 11:13:46.310: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002044150 exit status 1 true [0xc000e2a1d8 0xc000e2a1f0 0xc000e2a208] [0xc000e2a1d8 0xc000e2a1f0 0xc000e2a208] [0xc000e2a1e8 0xc000e2a200] [0x935700 0x935700] 0xc00181f080 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 9 11:13:56.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:13:56.536: INFO: rc: 1 Jan 9 11:13:56.537: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000cd6ba0 exit status 1 true [0xc00032ad40 0xc00032ae30 0xc00032af78] [0xc00032ad40 0xc00032ae30 0xc00032af78] [0xc00032ae20 0xc00032aed0] [0x935700 0x935700] 0xc001918f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 9 11:14:06.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:14:06.730: INFO: rc: 1 Jan 9 11:14:06.731: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0008ecff0 exit status 1 true [0xc000d4a278 0xc000d4a290 0xc000d4a2a8] [0xc000d4a278 0xc000d4a290 0xc000d4a2a8] 
[0xc000d4a288 0xc000d4a2a0] [0x935700 0x935700] 0xc001c21aa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 9 11:14:16.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:14:16.887: INFO: rc: 1 Jan 9 11:14:16.887: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000c6b440 exit status 1 true [0xc0013e8348 0xc0013e8410 0xc0013e8430] [0xc0013e8348 0xc0013e8410 0xc0013e8430] [0xc0013e83e8 0xc0013e8420] [0x935700 0x935700] 0xc00167b1a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 9 11:14:26.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:14:27.006: INFO: rc: 1 Jan 9 11:14:27.007: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000c6b560 exit status 1 true [0xc0013e8438 0xc0013e8498 0xc0013e84b8] [0xc0013e8438 0xc0013e8498 0xc0013e84b8] [0xc0013e8488 0xc0013e84b0] [0x935700 0x935700] 0xc00167bf20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 9 11:14:37.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:14:37.185: INFO: rc: 1 Jan 9 11:14:37.185: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000cd6cf0 exit status 1 true [0xc00032afd0 0xc00032b070 0xc00032b198] [0xc00032afd0 0xc00032b070 0xc00032b198] [0xc00032aff8 0xc00032b170] [0x935700 0x935700] 0xc001919980 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 9 11:14:47.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:14:47.356: INFO: rc: 1 Jan 9 11:14:47.357: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001758150 exit status 1 true [0xc00000e2e8 0xc000e2a008 0xc000e2a020] [0xc00000e2e8 0xc000e2a008 0xc000e2a020] [0xc000e2a000 0xc000e2a018] [0x935700 0x935700] 0xc001a1f2c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 9 11:14:57.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:14:57.518: INFO: rc: 1 Jan 9 11:14:57.519: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000507530 exit status 1 true [0xc00032ac00 0xc00032ac30 0xc00032ac70] [0xc00032ac00 0xc00032ac30 0xc00032ac70] [0xc00032ac28 0xc00032ac58] [0x935700 0x935700] 0xc001c218c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 9 11:15:07.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:15:07.680: INFO: rc: 1 Jan 9 11:15:07.680: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017584e0 exit status 1 true [0xc000e2a028 0xc000e2a050 0xc000e2a068] [0xc000e2a028 0xc000e2a050 0xc000e2a068] [0xc000e2a048 0xc000e2a060] [0x935700 0x935700] 0xc001a1fc80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 9 11:15:17.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:15:17.860: INFO: rc: 1 Jan 9 11:15:17.861: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000507680 exit status 1 true [0xc00032ac80 0xc00032ad40 0xc00032ae30] [0xc00032ac80 0xc00032ad40 0xc00032ae30] [0xc00032aca0 0xc00032ae20] [0x935700 0x935700] 0xc001918a80 }: Command stdout: stderr: Error from server (NotFound): 
pods "ss-2" not found

error:
exit status 1
Jan 9 11:15:27.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:15:27.979: INFO: rc: 1
Jan 9 11:15:27.980: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001758630 exit status 1 true [0xc000e2a070 0xc000e2a088 0xc000e2a0a0] [0xc000e2a070 0xc000e2a088 0xc000e2a0a0] [0xc000e2a080 0xc000e2a098] [0x935700 0x935700] 0xc0018fe420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:15:37.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:15:38.231: INFO: rc: 1
Jan 9 11:15:38.231: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000ee6120 exit status 1 true [0xc000d4a000 0xc000d4a030 0xc000d4a048] [0xc000d4a000 0xc000d4a030 0xc000d4a048] [0xc000d4a028 0xc000d4a040] [0x935700 0x935700] 0xc000c1e1e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:15:48.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:15:48.366: INFO: rc: 1
Jan 9 11:15:48.366: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0005077a0 exit status 1 true [0xc00032ae50 0xc00032afd0 0xc00032b070] [0xc00032ae50 0xc00032afd0 0xc00032b070] [0xc00032af78 0xc00032aff8] [0x935700 0x935700] 0xc001918ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:15:58.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:15:58.627: INFO: rc: 1
Jan 9 11:15:58.627: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0017587e0 exit status 1 true [0xc000e2a0a8 0xc000e2a0c0 0xc000e2a0d8] [0xc000e2a0a8 0xc000e2a0c0 0xc000e2a0d8] [0xc000e2a0b8 0xc000e2a0d0] [0x935700 0x935700] 0xc0018feb40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:16:08.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:16:08.823: INFO: rc: 1
Jan 9 11:16:08.824: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0018b4150 exit status 1 true [0xc0013e8000 0xc0013e8078 0xc0013e8120] [0xc0013e8000 0xc0013e8078 0xc0013e8120] [0xc0013e8040 0xc0013e8118] [0x935700 0x935700] 0xc0014b43c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:16:18.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:16:18.989: INFO: rc: 1
Jan 9 11:16:18.990: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001758930 exit status 1 true [0xc000e2a0e0 0xc000e2a0f8 0xc000e2a110] [0xc000e2a0e0 0xc000e2a0f8 0xc000e2a110] [0xc000e2a0f0 0xc000e2a108] [0x935700 0x935700] 0xc0018ff380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:16:28.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:16:29.132: INFO: rc: 1
Jan 9 11:16:29.132: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000507950 exit status 1 true [0xc00032b158 0xc00032b1d8 0xc00032b2d0] [0xc00032b158 0xc00032b1d8 0xc00032b2d0] [0xc00032b198 0xc00032b2c0] [0x935700 0x935700] 0xc001919740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:16:39.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:16:39.302: INFO: rc: 1
Jan 9 11:16:39.302: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001758a50 exit status 1 true [0xc000e2a118 0xc000e2a130 0xc000e2a148] [0xc000e2a118 0xc000e2a130 0xc000e2a148] [0xc000e2a128 0xc000e2a140] [0x935700 0x935700] 0xc0018ffbc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:16:49.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:16:49.469: INFO: rc: 1
Jan 9 11:16:49.470: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0018b42d0 exit status 1 true [0xc0013e8140 0xc0013e8210 0xc0013e8258] [0xc0013e8140 0xc0013e8210 0xc0013e8258] [0xc0013e81c0 0xc0013e8240] [0x935700 0x935700] 0xc0014b47e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:16:59.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:16:59.631: INFO: rc: 1
Jan 9 11:16:59.632: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001758180 exit status 1 true [0xc00016e000 0xc00032ac00 0xc00032ac30] [0xc00016e000 0xc00032ac00 0xc00032ac30] [0xc00000e2e8 0xc00032ac28] [0x935700 0x935700] 0xc001918ba0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:17:09.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:17:09.829: INFO: rc: 1
Jan 9 11:17:09.830: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001758570 exit status 1 true [0xc00032ac48 0xc00032ac80 0xc00032ad40] [0xc00032ac48 0xc00032ac80 0xc00032ad40] [0xc00032ac70 0xc00032aca0] [0x935700 0x935700] 0xc001918f60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:17:19.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:17:20.032: INFO: rc: 1
Jan 9 11:17:20.032: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000507560 exit status 1 true [0xc000d4a000 0xc000d4a030 0xc000d4a048] [0xc000d4a000 0xc000d4a030 0xc000d4a048] [0xc000d4a028 0xc000d4a040] [0x935700 0x935700] 0xc001c218c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:17:30.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:17:30.166: INFO: rc: 1
Jan 9 11:17:30.167: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0005076e0 exit status 1 true [0xc000d4a050 0xc000d4a090 0xc000d4a0b8] [0xc000d4a050 0xc000d4a090 0xc000d4a0b8] [0xc000d4a088 0xc000d4a0b0] [0x935700 0x935700] 0xc001a1f020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:17:40.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:17:40.298: INFO: rc: 1
Jan 9 11:17:40.298: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000ee60f0 exit status 1 true [0xc000e2a000 0xc000e2a018 0xc000e2a030] [0xc000e2a000 0xc000e2a018 0xc000e2a030] [0xc000e2a010 0xc000e2a028] [0x935700 0x935700] 0xc0018fe420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:17:50.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:17:50.742: INFO: rc: 1
Jan 9 11:17:50.742: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000ee6270 exit status 1 true [0xc000e2a048 0xc000e2a060 0xc000e2a078] [0xc000e2a048 0xc000e2a060 0xc000e2a078] [0xc000e2a058 0xc000e2a070] [0x935700 0x935700] 0xc0018feb40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:18:00.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:18:00.878: INFO: rc: 1
Jan 9 11:18:00.879: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000507860 exit status 1 true [0xc000d4a0c0 0xc000d4a0d8 0xc000d4a0f0] [0xc000d4a0c0 0xc000d4a0d8 0xc000d4a0f0] [0xc000d4a0d0 0xc000d4a0e8] [0x935700 0x935700] 0xc001a1fb60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:18:10.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:18:11.065: INFO: rc: 1
Jan 9 11:18:11.065: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000ee6420 exit status 1 true [0xc000e2a080 0xc000e2a098 0xc000e2a0b0] [0xc000e2a080 0xc000e2a098 0xc000e2a0b0] [0xc000e2a090 0xc000e2a0a8] [0x935700 0x935700] 0xc0018ff380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 9 11:18:21.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vr85h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:18:21.203: INFO: rc: 1
Jan 9 11:18:21.203: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2:
Jan 9 11:18:21.203: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 9 11:18:21.231: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vr85h
Jan 9 11:18:21.235: INFO: Scaling statefulset ss to 0
Jan 9 11:18:21.246: INFO: Waiting for statefulset status.replicas updated to 0
Jan 9 11:18:21.253: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:18:21.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-vr85h" for this suite.
Jan 9 11:18:29.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:18:29.648: INFO: namespace: e2e-tests-statefulset-vr85h, resource: bindings, ignored listing per whitelist
Jan 9 11:18:29.714: INFO: namespace e2e-tests-statefulset-vr85h deletion completed in 8.423027s

• [SLOW TEST:381.326 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:18:29.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-c39a4954-32d1-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 9 11:18:29.986: INFO: Waiting up to 5m0s for pod "pod-configmaps-c39cb350-32d1-11ea-ac2d-0242ac110005" in namespace "e2e-tests-configmap-txfdq" to be "success or failure"
Jan 9 11:18:30.004: INFO: Pod "pod-configmaps-c39cb350-32d1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.03834ms
Jan 9 11:18:32.572: INFO: Pod "pod-configmaps-c39cb350-32d1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.585947743s
Jan 9 11:18:34.589: INFO: Pod "pod-configmaps-c39cb350-32d1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.603093758s
Jan 9 11:18:36.602: INFO: Pod "pod-configmaps-c39cb350-32d1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.616402145s
Jan 9 11:18:38.632: INFO: Pod "pod-configmaps-c39cb350-32d1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.646133291s
Jan 9 11:18:40.651: INFO: Pod "pod-configmaps-c39cb350-32d1-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.665522438s
STEP: Saw pod success
Jan 9 11:18:40.652: INFO: Pod "pod-configmaps-c39cb350-32d1-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:18:40.912: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c39cb350-32d1-11ea-ac2d-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 9 11:18:41.087: INFO: Waiting for pod pod-configmaps-c39cb350-32d1-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:18:41.105: INFO: Pod pod-configmaps-c39cb350-32d1-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:18:41.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-txfdq" for this suite.
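The "success or failure" wait above is the framework polling the pod's phase until it reaches a terminal state. A rough shell equivalent of that polling loop; the function and parameter names are made up for illustration, since the real helper is Go code in test/e2e/framework:

```shell
# Illustrative version of "Waiting up to 5m0s for pod ... to be success or failure".
# $1 is any command that prints the pod's current phase (e.g. a wrapper around
# kubectl get pod -o jsonpath='{.status.phase}'); polls until the phase is terminal.
wait_for_pod_phase() {
  get_phase=$1
  timeout=${2:-300}
  start=$(date +%s)
  while :; do
    phase=$("$get_phase")
    elapsed=$(( $(date +%s) - start ))
    echo "Phase=\"$phase\". Elapsed: ${elapsed}s" >&2   # mirrors the log lines above
    case $phase in
      Succeeded|Failed) echo "$phase"; return 0 ;;      # terminal phase: stop polling
    esac
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1                                          # timed out while Pending/Running
    fi
    sleep 2
  done
}
```

In the run above the pod sits in Pending for about ten seconds before the poll observes Succeeded.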
Jan 9 11:18:47.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:18:47.396: INFO: namespace: e2e-tests-configmap-txfdq, resource: bindings, ignored listing per whitelist
Jan 9 11:18:47.404: INFO: namespace e2e-tests-configmap-txfdq deletion completed in 6.292623047s

• [SLOW TEST:17.690 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:18:47.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 9 11:18:47.786: INFO: Number of nodes with available pods: 0
Jan 9 11:18:47.786: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:18:49.881: INFO: Number of nodes with available pods: 0
Jan 9 11:18:49.881: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:18:50.823: INFO: Number of nodes with available pods: 0
Jan 9 11:18:50.823: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:18:52.506: INFO: Number of nodes with available pods: 0
Jan 9 11:18:52.507: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:18:53.319: INFO: Number of nodes with available pods: 0
Jan 9 11:18:53.319: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:18:53.827: INFO: Number of nodes with available pods: 0
Jan 9 11:18:53.827: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:18:54.872: INFO: Number of nodes with available pods: 0
Jan 9 11:18:54.872: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:18:55.811: INFO: Number of nodes with available pods: 0
Jan 9 11:18:55.811: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:18:56.806: INFO: Number of nodes with available pods: 1
Jan 9 11:18:56.806: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 9 11:18:56.887: INFO: Number of nodes with available pods: 0
Jan 9 11:18:56.887: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:18:58.205: INFO: Number of nodes with available pods: 0
Jan 9 11:18:58.205: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:18:58.920: INFO: Number of nodes with available pods: 0
Jan 9 11:18:58.921: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:18:59.908: INFO: Number of nodes with available pods: 0
Jan 9 11:18:59.908: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:19:01.468: INFO: Number of nodes with available pods: 0
Jan 9 11:19:01.468: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:19:01.921: INFO: Number of nodes with available pods: 0
Jan 9 11:19:01.922: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:19:02.902: INFO: Number of nodes with available pods: 0
Jan 9 11:19:02.902: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:19:03.921: INFO: Number of nodes with available pods: 0
Jan 9 11:19:03.921: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:19:05.394: INFO: Number of nodes with available pods: 0
Jan 9 11:19:05.394: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:19:05.909: INFO: Number of nodes with available pods: 0
Jan 9 11:19:05.909: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:19:06.907: INFO: Number of nodes with available pods: 0
Jan 9 11:19:06.907: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 9 11:19:07.905: INFO: Number of nodes with available pods: 1
Jan 9 11:19:07.905: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4nc7n, will wait for the garbage collector to delete the pods
Jan 9 11:19:07.985: INFO: Deleting DaemonSet.extensions daemon-set took: 15.676111ms
Jan 9 11:19:08.086: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.630827ms
Jan 9 11:19:22.806: INFO: Number of nodes with available pods: 0
Jan 9 11:19:22.806: INFO: Number of running nodes: 0, number of available pods: 0
Jan 9 11:19:22.814: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4nc7n/daemonsets","resourceVersion":"17691361"},"items":null}
Jan 9 11:19:22.827: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4nc7n/pods","resourceVersion":"17691362"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:19:22.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-4nc7n" for this suite.
Jan 9 11:19:30.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:19:31.096: INFO: namespace: e2e-tests-daemonsets-4nc7n, resource: bindings, ignored listing per whitelist
Jan 9 11:19:31.138: INFO: namespace e2e-tests-daemonsets-4nc7n deletion completed in 8.292695302s

• [SLOW TEST:43.734 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:19:31.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-e82e1e7f-32d1-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 9 11:19:31.513: INFO: Waiting up to 5m0s for pod "pod-secrets-e849e4bc-32d1-11ea-ac2d-0242ac110005" in namespace "e2e-tests-secrets-cpqzs" to be "success or failure"
Jan 9 11:19:31.557: INFO: Pod "pod-secrets-e849e4bc-32d1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 44.231985ms
Jan 9 11:19:33.934: INFO: Pod "pod-secrets-e849e4bc-32d1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.420940495s
Jan 9 11:19:35.946: INFO: Pod "pod-secrets-e849e4bc-32d1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433326566s
Jan 9 11:19:37.960: INFO: Pod "pod-secrets-e849e4bc-32d1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.447377301s
Jan 9 11:19:39.974: INFO: Pod "pod-secrets-e849e4bc-32d1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.460514195s
Jan 9 11:19:41.984: INFO: Pod "pod-secrets-e849e4bc-32d1-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.47082153s
STEP: Saw pod success
Jan 9 11:19:41.984: INFO: Pod "pod-secrets-e849e4bc-32d1-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:19:41.988: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e849e4bc-32d1-11ea-ac2d-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 9 11:19:42.268: INFO: Waiting for pod pod-secrets-e849e4bc-32d1-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:19:42.869: INFO: Pod pod-secrets-e849e4bc-32d1-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:19:42.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-cpqzs" for this suite.
Jan 9 11:19:49.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:19:49.636: INFO: namespace: e2e-tests-secrets-cpqzs, resource: bindings, ignored listing per whitelist
Jan 9 11:19:49.653: INFO: namespace e2e-tests-secrets-cpqzs deletion completed in 6.749878251s
STEP: Destroying namespace "e2e-tests-secret-namespace-qntk9" for this suite.
Jan 9 11:19:55.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:19:55.957: INFO: namespace: e2e-tests-secret-namespace-qntk9, resource: bindings, ignored listing per whitelist
Jan 9 11:19:55.994: INFO: namespace e2e-tests-secret-namespace-qntk9 deletion completed in 6.341214684s

• [SLOW TEST:24.856 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:19:55.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 9 11:19:56.351: INFO: Waiting up to 5m0s for pod "pod-f7181af4-32d1-11ea-ac2d-0242ac110005" in namespace "e2e-tests-emptydir-f4wtq" to be "success or failure"
Jan 9 11:19:56.383: INFO: Pod "pod-f7181af4-32d1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.665853ms
Jan 9 11:19:58.395: INFO: Pod "pod-f7181af4-32d1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043760299s
Jan 9 11:20:00.409: INFO: Pod "pod-f7181af4-32d1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05852486s
Jan 9 11:20:02.427: INFO: Pod "pod-f7181af4-32d1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076277896s
Jan 9 11:20:04.448: INFO: Pod "pod-f7181af4-32d1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096647498s
Jan 9 11:20:06.502: INFO: Pod "pod-f7181af4-32d1-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.151348915s
STEP: Saw pod success
Jan 9 11:20:06.502: INFO: Pod "pod-f7181af4-32d1-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:20:06.518: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f7181af4-32d1-11ea-ac2d-0242ac110005 container test-container:
STEP: delete the pod
Jan 9 11:20:06.695: INFO: Waiting for pod pod-f7181af4-32d1-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:20:06.711: INFO: Pod pod-f7181af4-32d1-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:20:06.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-f4wtq" for this suite.
Jan 9 11:20:12.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:20:13.024: INFO: namespace: e2e-tests-emptydir-f4wtq, resource: bindings, ignored listing per whitelist
Jan 9 11:20:13.024: INFO: namespace e2e-tests-emptydir-f4wtq deletion completed in 6.293987208s

• [SLOW TEST:17.030 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:20:13.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 9 11:20:13.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-95n2j'
Jan 9 11:20:15.228: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 9 11:20:15.228: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan 9 11:20:17.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-95n2j'
Jan 9 11:20:18.396: INFO: stderr: ""
Jan 9 11:20:18.396: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:20:18.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-95n2j" for this suite.
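The deprecation warning above points at the two supported replacements: `kubectl create` for a Deployment, or the `run-pod/v1` generator for a bare pod. Equivalent commands, sketched with the image and namespace taken from this log (illustrative invocations, not part of the test run):

```shell
# Replacement for the deprecated `kubectl run --generator=deployment/apps.v1`:
# create the Deployment explicitly instead of via kubectl run.
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-95n2j

# Or, to create a single pod rather than a Deployment:
kubectl run e2e-test-nginx --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-95n2j
```

Both forms create the same nginx workload the test exercises; the generator-based `kubectl run` variants were removed in later kubectl releases.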
Jan 9 11:20:24.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:20:24.557: INFO: namespace: e2e-tests-kubectl-95n2j, resource: bindings, ignored listing per whitelist
Jan 9 11:20:24.647: INFO: namespace e2e-tests-kubectl-95n2j deletion completed in 6.24362751s
• [SLOW TEST:11.622 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:20:24.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 9 11:20:24.838: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-px8nd,SelfLink:/api/v1/namespaces/e2e-tests-watch-px8nd/configmaps/e2e-watch-test-watch-closed,UID:0813cb90-32d2-11ea-a994-fa163e34d433,ResourceVersion:17691547,Generation:0,CreationTimestamp:2020-01-09 11:20:24 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 9 11:20:24.839: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-px8nd,SelfLink:/api/v1/namespaces/e2e-tests-watch-px8nd/configmaps/e2e-watch-test-watch-closed,UID:0813cb90-32d2-11ea-a994-fa163e34d433,ResourceVersion:17691548,Generation:0,CreationTimestamp:2020-01-09 11:20:24 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 9 11:20:24.872: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-px8nd,SelfLink:/api/v1/namespaces/e2e-tests-watch-px8nd/configmaps/e2e-watch-test-watch-closed,UID:0813cb90-32d2-11ea-a994-fa163e34d433,ResourceVersion:17691549,Generation:0,CreationTimestamp:2020-01-09 11:20:24 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 9 11:20:24.872: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-px8nd,SelfLink:/api/v1/namespaces/e2e-tests-watch-px8nd/configmaps/e2e-watch-test-watch-closed,UID:0813cb90-32d2-11ea-a994-fa163e34d433,ResourceVersion:17691550,Generation:0,CreationTimestamp:2020-01-09 11:20:24 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:20:24.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-px8nd" for this suite.
Jan 9 11:20:30.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:20:31.105: INFO: namespace: e2e-tests-watch-px8nd, resource: bindings, ignored listing per whitelist
Jan 9 11:20:31.148: INFO: namespace e2e-tests-watch-px8nd deletion completed in 6.269200601s
• [SLOW TEST:6.501 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:20:31.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-6l8qp
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-6l8qp
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-6l8qp
Jan 9 11:20:31.409: INFO: Found 0 stateful pods, waiting for 1
Jan 9 11:20:41.432: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 9 11:20:41.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 9 11:20:42.214: INFO: stderr: "I0109 11:20:41.728545    1482 log.go:172] (0xc000714370) (0xc0005b3360) Create stream\nI0109 11:20:41.729223    1482 log.go:172] (0xc000714370) (0xc0005b3360) Stream added, broadcasting: 1\nI0109 11:20:41.738892    1482 log.go:172] (0xc000714370) Reply frame received for 1\nI0109 11:20:41.738984    1482 log.go:172] (0xc000714370) (0xc0000da000) Create stream\nI0109 11:20:41.738999    1482 log.go:172] (0xc000714370) (0xc0000da000) Stream added, broadcasting: 3\nI0109 11:20:41.740522    1482 log.go:172] (0xc000714370) Reply frame received for 3\nI0109 11:20:41.740552    1482 log.go:172] (0xc000714370) (0xc0000da0a0) Create stream\nI0109 11:20:41.740562    1482 log.go:172] (0xc000714370) (0xc0000da0a0) Stream added, broadcasting: 5\nI0109 11:20:41.742863    1482 log.go:172] (0xc000714370) Reply frame received for 5\nI0109 11:20:41.985238    1482 log.go:172] (0xc000714370) Data frame received for 3\nI0109 11:20:41.985331    1482 log.go:172] (0xc0000da000) (3) Data frame handling\nI0109 11:20:41.985354    1482 log.go:172] (0xc0000da000) (3) Data frame sent\nI0109 11:20:42.192338    1482 log.go:172] (0xc000714370) Data frame received for 1\nI0109 11:20:42.192819    1482 log.go:172] (0xc0005b3360) (1) Data frame handling\nI0109 11:20:42.192882    1482 log.go:172] (0xc0005b3360) (1) Data frame sent\nI0109 11:20:42.193075    1482 log.go:172] (0xc000714370) (0xc0005b3360) Stream removed, broadcasting: 1\nI0109 11:20:42.193462    1482 log.go:172] (0xc000714370) (0xc0000da000) Stream removed, broadcasting: 3\nI0109 11:20:42.193825    1482 log.go:172] (0xc000714370) (0xc0000da0a0) Stream removed, broadcasting: 5\nI0109 11:20:42.194313    1482 log.go:172] (0xc000714370) (0xc0005b3360) Stream removed, broadcasting: 1\nI0109 11:20:42.194620    1482 log.go:172] (0xc000714370) (0xc0000da000) Stream removed, broadcasting: 3\nI0109 11:20:42.194932    1482 log.go:172] (0xc000714370) (0xc0000da0a0) Stream removed, broadcasting: 5\n"
Jan 9 11:20:42.215: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 9 11:20:42.215: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 9 11:20:42.336: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 9 11:20:42.336: INFO: Waiting for statefulset status.replicas updated to 0
Jan 9 11:20:42.352: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 9 11:20:52.412: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 9 11:20:52.412: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC }]
Jan 9 11:20:52.412: INFO:
Jan 9 11:20:52.412: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 9 11:20:53.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976797645s
Jan 9 11:20:54.741: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.955263818s
Jan 9 11:20:55.785: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.64794627s
Jan 9 11:20:56.802: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.603228427s
Jan 9 11:20:57.828: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.586842068s
Jan 9 11:20:59.355: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.560416901s
Jan 9 11:21:00.372: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.034167035s
Jan 9 11:21:01.402: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.016723186s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-6l8qp
Jan 9 11:21:02.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:21:02.870: INFO: stderr: "I0109 11:21:02.667593    1504 log.go:172] (0xc0006ee2c0) (0xc00071c640) Create stream\nI0109 11:21:02.667735    1504 log.go:172] (0xc0006ee2c0) (0xc00071c640) Stream added, broadcasting: 1\nI0109 11:21:02.671990    1504 log.go:172] (0xc0006ee2c0) Reply frame received for 1\nI0109 11:21:02.672011    1504 log.go:172] (0xc0006ee2c0) (0xc000692dc0) Create stream\nI0109 11:21:02.672017    1504 log.go:172] (0xc0006ee2c0) (0xc000692dc0) Stream added, broadcasting: 3\nI0109 11:21:02.673167    1504 log.go:172] (0xc0006ee2c0) Reply frame received for 3\nI0109 11:21:02.673204    1504 log.go:172] (0xc0006ee2c0) (0xc0004d8000) Create stream\nI0109 11:21:02.673244    1504 log.go:172] (0xc0006ee2c0) (0xc0004d8000) Stream added, broadcasting: 5\nI0109 11:21:02.673997    1504 log.go:172] (0xc0006ee2c0) Reply frame received for 5\nI0109 11:21:02.752583    1504 log.go:172] (0xc0006ee2c0) Data frame received for 3\nI0109 11:21:02.752663    1504 log.go:172] (0xc000692dc0) (3) Data frame handling\nI0109 11:21:02.752679    1504 log.go:172] (0xc000692dc0) (3) Data frame sent\nI0109 11:21:02.859382    1504 log.go:172] (0xc0006ee2c0) (0xc000692dc0) Stream removed, broadcasting: 3\nI0109 11:21:02.859582    1504 log.go:172] (0xc0006ee2c0) Data frame received for 1\nI0109 11:21:02.859608    1504 log.go:172] (0xc00071c640) (1) Data frame handling\nI0109 11:21:02.859814    1504 log.go:172] (0xc00071c640) (1) Data frame sent\nI0109 11:21:02.859831    1504 log.go:172] (0xc0006ee2c0) (0xc0004d8000) Stream removed, broadcasting: 5\nI0109 11:21:02.859864    1504 log.go:172] (0xc0006ee2c0) (0xc00071c640) Stream removed, broadcasting: 1\nI0109 11:21:02.859886    1504 log.go:172] (0xc0006ee2c0) Go away received\nI0109 11:21:02.860284    1504 log.go:172] (0xc0006ee2c0) (0xc00071c640) Stream removed, broadcasting: 1\nI0109 11:21:02.860331    1504 log.go:172] (0xc0006ee2c0) (0xc000692dc0) Stream removed, broadcasting: 3\nI0109 11:21:02.860362    1504 log.go:172] (0xc0006ee2c0) (0xc0004d8000) Stream removed, broadcasting: 5\n"
Jan 9 11:21:02.870: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 9 11:21:02.870: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 9 11:21:02.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:21:03.415: INFO: stderr: "I0109 11:21:03.078506    1526 log.go:172] (0xc0007b6790) (0xc0001674a0) Create stream\nI0109 11:21:03.078872    1526 log.go:172] (0xc0007b6790) (0xc0001674a0) Stream added, broadcasting: 1\nI0109 11:21:03.092007    1526 log.go:172] (0xc0007b6790) Reply frame received for 1\nI0109 11:21:03.092168    1526 log.go:172] (0xc0007b6790) (0xc000881f40) Create stream\nI0109 11:21:03.092199    1526 log.go:172] (0xc0007b6790) (0xc000881f40) Stream added, broadcasting: 3\nI0109 11:21:03.093736    1526 log.go:172] (0xc0007b6790) Reply frame received for 3\nI0109 11:21:03.093786    1526 log.go:172] (0xc0007b6790) (0xc0007e1720) Create stream\nI0109 11:21:03.093793    1526 log.go:172] (0xc0007b6790) (0xc0007e1720) Stream added, broadcasting: 5\nI0109 11:21:03.094397    1526 log.go:172] (0xc0007b6790) Reply frame received for 5\nI0109 11:21:03.300363    1526 log.go:172] (0xc0007b6790) Data frame received for 3\nI0109 11:21:03.300485    1526 log.go:172] (0xc000881f40) (3) Data frame handling\nI0109 11:21:03.300531    1526 log.go:172] (0xc000881f40) (3) Data frame sent\nI0109 11:21:03.301453    1526 log.go:172] (0xc0007b6790) Data frame received for 5\nI0109 11:21:03.301481    1526 log.go:172] (0xc0007e1720) (5) Data frame handling\nI0109 11:21:03.301497    1526 log.go:172] (0xc0007e1720) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0109 11:21:03.405152    1526 log.go:172] (0xc0007b6790) Data frame received for 1\nI0109 11:21:03.405333    1526 log.go:172] (0xc0007b6790) (0xc000881f40) Stream removed, broadcasting: 3\nI0109 11:21:03.405498    1526 log.go:172] (0xc0001674a0) (1) Data frame handling\nI0109 11:21:03.405528    1526 log.go:172] (0xc0001674a0) (1) Data frame sent\nI0109 11:21:03.405543    1526 log.go:172] (0xc0007b6790) (0xc0001674a0) Stream removed, broadcasting: 1\nI0109 11:21:03.405825    1526 log.go:172] (0xc0007b6790) (0xc0007e1720) Stream removed, broadcasting: 5\nI0109 11:21:03.405872    1526 log.go:172] (0xc0007b6790) Go away received\nI0109 11:21:03.406134    1526 log.go:172] (0xc0007b6790) (0xc0001674a0) Stream removed, broadcasting: 1\nI0109 11:21:03.406156    1526 log.go:172] (0xc0007b6790) (0xc000881f40) Stream removed, broadcasting: 3\nI0109 11:21:03.406178    1526 log.go:172] (0xc0007b6790) (0xc0007e1720) Stream removed, broadcasting: 5\n"
Jan 9 11:21:03.415: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 9 11:21:03.415: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 9 11:21:03.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:21:04.125: INFO: stderr: "I0109 11:21:03.768556    1548 log.go:172] (0xc000716370) (0xc000736640) Create stream\nI0109 11:21:03.768740    1548 log.go:172] (0xc000716370) (0xc000736640) Stream added, broadcasting: 1\nI0109 11:21:03.775788    1548 log.go:172] (0xc000716370) Reply frame received for 1\nI0109 11:21:03.775856    1548 log.go:172] (0xc000716370) (0xc0005a2c80) Create stream\nI0109 11:21:03.775882    1548 log.go:172] (0xc000716370) (0xc0005a2c80) Stream added, broadcasting: 3\nI0109 11:21:03.777115    1548 log.go:172] (0xc000716370) Reply frame received for 3\nI0109 11:21:03.777139    1548 log.go:172] (0xc000716370) (0xc0006c4000) Create stream\nI0109 11:21:03.777150    1548 log.go:172] (0xc000716370) (0xc0006c4000) Stream added, broadcasting: 5\nI0109 11:21:03.780588    1548 log.go:172] (0xc000716370) Reply frame received for 5\nI0109 11:21:03.974527    1548 log.go:172] (0xc000716370) Data frame received for 3\nI0109 11:21:03.974702    1548 log.go:172] (0xc0005a2c80) (3) Data frame handling\nI0109 11:21:03.974713    1548 log.go:172] (0xc0005a2c80) (3) Data frame sent\nI0109 11:21:03.974759    1548 log.go:172] (0xc000716370) Data frame received for 5\nI0109 11:21:03.974766    1548 log.go:172] (0xc0006c4000) (5) Data frame handling\nI0109 11:21:03.974782    1548 log.go:172] (0xc0006c4000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0109 11:21:04.114021    1548 log.go:172] (0xc000716370) Data frame received for 1\nI0109 11:21:04.114186    1548 log.go:172] (0xc000716370) (0xc0006c4000) Stream removed, broadcasting: 5\nI0109 11:21:04.114235    1548 log.go:172] (0xc000736640) (1) Data frame handling\nI0109 11:21:04.114243    1548 log.go:172] (0xc000736640) (1) Data frame sent\nI0109 11:21:04.114281    1548 log.go:172] (0xc000716370) (0xc0005a2c80) Stream removed, broadcasting: 3\nI0109 11:21:04.114324    1548 log.go:172] (0xc000716370) (0xc000736640) Stream removed, broadcasting: 1\nI0109 11:21:04.114340    1548 log.go:172] (0xc000716370) Go away received\nI0109 11:21:04.115197    1548 log.go:172] (0xc000716370) (0xc000736640) Stream removed, broadcasting: 1\nI0109 11:21:04.115210    1548 log.go:172] (0xc000716370) (0xc0005a2c80) Stream removed, broadcasting: 3\nI0109 11:21:04.115214    1548 log.go:172] (0xc000716370) (0xc0006c4000) Stream removed, broadcasting: 5\n"
Jan 9 11:21:04.125: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 9 11:21:04.126: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 9 11:21:04.161: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 11:21:04.161: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false
Jan 9 11:21:14.188: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 11:21:14.188: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 11:21:14.188: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 9 11:21:14.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 9 11:21:14.734: INFO: stderr: "I0109 11:21:14.411979    1569 log.go:172] (0xc000748370) (0xc000766640) Create stream\nI0109 11:21:14.412265    1569 log.go:172] (0xc000748370) (0xc000766640) Stream added, broadcasting: 1\nI0109 11:21:14.418174    1569 log.go:172] (0xc000748370) Reply frame received for 1\nI0109 11:21:14.418218    1569 log.go:172] (0xc000748370) (0xc0000f0c80) Create stream\nI0109 11:21:14.418227    1569 log.go:172] (0xc000748370) (0xc0000f0c80) Stream added, broadcasting: 3\nI0109 11:21:14.419372    1569 log.go:172] (0xc000748370) Reply frame received for 3\nI0109 11:21:14.419397    1569 log.go:172] (0xc000748370) (0xc0000f0dc0) Create stream\nI0109 11:21:14.419406    1569 log.go:172] (0xc000748370) (0xc0000f0dc0) Stream added, broadcasting: 5\nI0109 11:21:14.420126    1569 log.go:172] (0xc000748370) Reply frame received for 5\nI0109 11:21:14.617810    1569 log.go:172] (0xc000748370) Data frame received for 3\nI0109 11:21:14.617883    1569 log.go:172] (0xc0000f0c80) (3) Data frame handling\nI0109 11:21:14.617894    1569 log.go:172] (0xc0000f0c80) (3) Data frame sent\nI0109 11:21:14.726815    1569 log.go:172] (0xc000748370) Data frame received for 1\nI0109 11:21:14.726924    1569 log.go:172] (0xc000748370) (0xc0000f0c80) Stream removed, broadcasting: 3\nI0109 11:21:14.726947    1569 log.go:172] (0xc000766640) (1) Data frame handling\nI0109 11:21:14.726962    1569 log.go:172] (0xc000766640) (1) Data frame sent\nI0109 11:21:14.727025    1569 log.go:172] (0xc000748370) (0xc0000f0dc0) Stream removed, broadcasting: 5\nI0109 11:21:14.727116    1569 log.go:172] (0xc000748370) (0xc000766640) Stream removed, broadcasting: 1\nI0109 11:21:14.727133    1569 log.go:172] (0xc000748370) Go away received\nI0109 11:21:14.727570    1569 log.go:172] (0xc000748370) (0xc000766640) Stream removed, broadcasting: 1\nI0109 11:21:14.727585    1569 log.go:172] (0xc000748370) (0xc0000f0c80) Stream removed, broadcasting: 3\nI0109 11:21:14.727591    1569 log.go:172] (0xc000748370) (0xc0000f0dc0) Stream removed, broadcasting: 5\n"
Jan 9 11:21:14.734: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 9 11:21:14.734: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 9 11:21:14.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 9 11:21:15.281: INFO: stderr: "I0109 11:21:14.971256    1591 log.go:172] (0xc0001380b0) (0xc0005e2000) Create stream\nI0109 11:21:14.971386    1591 log.go:172] (0xc0001380b0) (0xc0005e2000) Stream added, broadcasting: 1\nI0109 11:21:14.976001    1591 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0109 11:21:14.976028    1591 log.go:172] (0xc0001380b0) (0xc00001cc80) Create stream\nI0109 11:21:14.976034    1591 log.go:172] (0xc0001380b0) (0xc00001cc80) Stream added, broadcasting: 3\nI0109 11:21:14.976917    1591 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0109 11:21:14.976934    1591 log.go:172] (0xc0001380b0) (0xc0005e20a0) Create stream\nI0109 11:21:14.976942    1591 log.go:172] (0xc0001380b0) (0xc0005e20a0) Stream added, broadcasting: 5\nI0109 11:21:14.977622    1591 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0109 11:21:15.109044    1591 log.go:172] (0xc0001380b0) Data frame received for 3\nI0109 11:21:15.109113    1591 log.go:172] (0xc00001cc80) (3) Data frame handling\nI0109 11:21:15.109132    1591 log.go:172] (0xc00001cc80) (3) Data frame sent\nI0109 11:21:15.271525    1591 log.go:172] (0xc0001380b0) (0xc00001cc80) Stream removed, broadcasting: 3\nI0109 11:21:15.271701    1591 log.go:172] (0xc0001380b0) Data frame received for 1\nI0109 11:21:15.271723    1591 log.go:172] (0xc0001380b0) (0xc0005e20a0) Stream removed, broadcasting: 5\nI0109 11:21:15.271748    1591 log.go:172] (0xc0005e2000) (1) Data frame handling\nI0109 11:21:15.271759    1591 log.go:172] (0xc0005e2000) (1) Data frame sent\nI0109 11:21:15.271775    1591 log.go:172] (0xc0001380b0) (0xc0005e2000) Stream removed, broadcasting: 1\nI0109 11:21:15.271905    1591 log.go:172] (0xc0001380b0) Go away received\nI0109 11:21:15.272230    1591 log.go:172] (0xc0001380b0) (0xc0005e2000) Stream removed, broadcasting: 1\nI0109 11:21:15.272241    1591 log.go:172] (0xc0001380b0) (0xc00001cc80) Stream removed, broadcasting: 3\nI0109 11:21:15.272248    1591 log.go:172] (0xc0001380b0) (0xc0005e20a0) Stream removed, broadcasting: 5\n"
Jan 9 11:21:15.281: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 9 11:21:15.281: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 9 11:21:15.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 9 11:21:15.765: INFO: stderr: "I0109 11:21:15.451415    1614 log.go:172] (0xc0007e0160) (0xc0006f8640) Create stream\nI0109 11:21:15.451902    1614 log.go:172] (0xc0007e0160) (0xc0006f8640) Stream added, broadcasting: 1\nI0109 11:21:15.460741    1614 log.go:172] (0xc0007e0160) Reply frame received for 1\nI0109 11:21:15.460863    1614 log.go:172] (0xc0007e0160) (0xc000650dc0) Create stream\nI0109 11:21:15.460904    1614 log.go:172] (0xc0007e0160) (0xc000650dc0) Stream added, broadcasting: 3\nI0109 11:21:15.464176    1614 log.go:172] (0xc0007e0160) Reply frame received for 3\nI0109 11:21:15.464199    1614 log.go:172] (0xc0007e0160) (0xc0006f86e0) Create stream\nI0109 11:21:15.464215    1614 log.go:172] (0xc0007e0160) (0xc0006f86e0) Stream added, broadcasting: 5\nI0109 11:21:15.465626    1614 log.go:172] (0xc0007e0160) Reply frame received for 5\nI0109 11:21:15.665297    1614 log.go:172] (0xc0007e0160) Data frame received for 3\nI0109 11:21:15.665344    1614 log.go:172] (0xc000650dc0) (3) Data frame handling\nI0109 11:21:15.665358    1614 log.go:172] (0xc000650dc0) (3) Data frame sent\nI0109 11:21:15.758488    1614 log.go:172] (0xc0007e0160) (0xc000650dc0) Stream removed, broadcasting: 3\nI0109 11:21:15.758914    1614 log.go:172] (0xc0007e0160) (0xc0006f86e0) Stream removed, broadcasting: 5\nI0109 11:21:15.759039    1614 log.go:172] (0xc0007e0160) Data frame received for 1\nI0109 11:21:15.759077    1614 log.go:172] (0xc0006f8640) (1) Data frame handling\nI0109 11:21:15.759096    1614 log.go:172] (0xc0006f8640) (1) Data frame sent\nI0109 11:21:15.759114    1614 log.go:172] (0xc0007e0160) (0xc0006f8640) Stream removed, broadcasting: 1\nI0109 11:21:15.759133    1614 log.go:172] (0xc0007e0160) Go away received\nI0109 11:21:15.759735    1614 log.go:172] (0xc0007e0160) (0xc0006f8640) Stream removed, broadcasting: 1\nI0109 11:21:15.759746    1614 log.go:172] (0xc0007e0160) (0xc000650dc0) Stream removed, broadcasting: 3\nI0109 11:21:15.759749    1614 log.go:172] (0xc0007e0160) (0xc0006f86e0) Stream removed, broadcasting: 5\n"
Jan 9 11:21:15.765: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 9 11:21:15.765: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 9 11:21:15.765: INFO: Waiting for statefulset status.replicas updated to 0
Jan 9 11:21:15.775: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 9 11:21:25.805: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 9 11:21:25.805: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 9 11:21:25.805: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 9 11:21:25.965: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 9 11:21:25.965: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC }]
Jan 9 11:21:25.965: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }]
Jan 9 11:21:25.965: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }]
Jan 9 11:21:25.965: INFO:
Jan 9 11:21:25.965: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 9 11:21:26.978: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 9 11:21:26.978: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC }]
Jan 9 11:21:26.978: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }]
Jan 9 11:21:26.978: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }]
Jan 9 11:21:26.978: INFO:
Jan 9 11:21:26.978: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 9 11:21:28.071: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 9 11:21:28.071: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC }]
Jan 9 11:21:28.071: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }]
Jan 9 11:21:28.071: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }]
Jan 9 11:21:28.071: INFO:
Jan 9 11:21:28.071: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 9 11:21:29.137: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 9 11:21:29.137: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC }]
Jan 9 11:21:29.138: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }]
Jan 9 11:21:29.138: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }]
Jan 9 11:21:29.138: INFO:
Jan 9 11:21:29.138: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 9 11:21:30.157: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 9 11:21:30.157: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC }]
Jan 9 11:21:30.157: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }]
Jan 9 11:21:30.157: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }]
Jan 9 11:21:30.157: INFO:
Jan 9 11:21:30.157: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 9 11:21:31.177: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 9 11:21:31.177: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC }]
Jan 9 11:21:31.177: INFO: ss-1
hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }] Jan 9 11:21:31.178: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }] Jan 9 11:21:31.178: INFO: Jan 9 11:21:31.178: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 9 11:21:32.467: INFO: POD NODE PHASE GRACE CONDITIONS Jan 9 11:21:32.467: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC }] Jan 9 11:21:32.467: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 
11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }] Jan 9 11:21:32.467: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }] Jan 9 11:21:32.467: INFO: Jan 9 11:21:32.467: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 9 11:21:33.480: INFO: POD NODE PHASE GRACE CONDITIONS Jan 9 11:21:33.480: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC }] Jan 9 11:21:33.480: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }] Jan 9 11:21:33.480: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } 
{Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }] Jan 9 11:21:33.480: INFO: Jan 9 11:21:33.480: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 9 11:21:34.521: INFO: POD NODE PHASE GRACE CONDITIONS Jan 9 11:21:34.521: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC }] Jan 9 11:21:34.522: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }] Jan 9 11:21:34.522: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2020-01-09 11:20:52 +0000 UTC }] Jan 9 11:21:34.522: INFO: Jan 9 11:21:34.522: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 9 11:21:35.538: INFO: POD NODE PHASE GRACE CONDITIONS Jan 9 11:21:35.538: INFO: ss-0 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:31 +0000 UTC }] Jan 9 11:21:35.539: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }] Jan 9 11:21:35.539: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:21:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:20:52 +0000 UTC }] Jan 9 11:21:35.539: INFO: Jan 9 11:21:35.539: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-6l8qp Jan 9 11:21:36.571: INFO: Running
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:21:36.759: INFO: rc: 1 Jan 9 11:21:36.759: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000e55b60 exit status 1 true [0xc002100380 0xc002100398 0xc0021003b0] [0xc002100380 0xc002100398 0xc0021003b0] [0xc002100390 0xc0021003a8] [0x935700 0x935700] 0xc001a58c00 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 9 11:21:46.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:21:46.915: INFO: rc: 1 Jan 9 11:21:46.915: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000cd6750 exit status 1 true [0xc001674318 0xc001674330 0xc001674348] [0xc001674318 0xc001674330 0xc001674348] [0xc001674328 0xc001674340] [0x935700 0x935700] 0xc001c8f740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:21:56.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:21:57.086: INFO: rc: 1 Jan 9 11:21:57.086: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000e55cb0 exit status 1 true [0xc0021003b8 0xc0021003d0 0xc0021003e8] [0xc0021003b8 0xc0021003d0 0xc0021003e8] [0xc0021003c8 0xc0021003e0] [0x935700 0x935700] 0xc001a59140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:22:07.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:22:07.263: INFO: rc: 1 Jan 9 11:22:07.263: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000cd6930 exit status 1 true [0xc001674350 0xc001674368 0xc001674380] [0xc001674350 0xc001674368 0xc001674380] [0xc001674360 0xc001674378] [0x935700 0x935700] 0xc001c8faa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:22:17.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:22:17.399: INFO: rc: 1 Jan 9 11:22:17.399: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000e55e00 exit status 1 true [0xc0021003f0 0xc002100408 0xc002100420] 
[0xc0021003f0 0xc002100408 0xc002100420] [0xc002100400 0xc002100418] [0x935700 0x935700] 0xc001a59740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:22:27.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:22:27.567: INFO: rc: 1 Jan 9 11:22:27.567: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001758150 exit status 1 true [0xc00000e2e8 0xc00032ac18 0xc00032ac48] [0xc00000e2e8 0xc00032ac18 0xc00032ac48] [0xc00032ac00 0xc00032ac30] [0x935700 0x935700] 0xc001c681e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:22:37.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:22:37.706: INFO: rc: 1 Jan 9 11:22:37.706: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018b40f0 exit status 1 true [0xc0013e8000 0xc0013e8078 0xc0013e8120] [0xc0013e8000 0xc0013e8078 0xc0013e8120] [0xc0013e8040 0xc0013e8118] [0x935700 0x935700] 0xc001d801e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:22:47.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:22:47.896: INFO: rc: 1 Jan 9 11:22:47.897: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001758510 exit status 1 true [0xc00032ac58 0xc00032ac88 0xc00032ae08] [0xc00032ac58 0xc00032ac88 0xc00032ae08] [0xc00032ac80 0xc00032ad40] [0x935700 0x935700] 0xc001c68480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:22:57.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:22:58.039: INFO: rc: 1 Jan 9 11:22:58.039: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018b4270 exit status 1 true [0xc0013e8130 0xc0013e81c0 0xc0013e8240] [0xc0013e8130 0xc0013e81c0 0xc0013e8240] [0xc0013e81a0 0xc0013e8238] [0x935700 0x935700] 0xc001d80480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:23:08.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:23:08.218: INFO: rc: 1 Jan 9 11:23:08.219: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a14390 exit status 1 true [0xc00094c000 0xc00094c018 0xc00094c030] [0xc00094c000 0xc00094c018 0xc00094c030] [0xc00094c010 0xc00094c028] [0x935700 0x935700] 0xc00174ee40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:23:18.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:23:18.378: INFO: rc: 1 Jan 9 11:23:18.378: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000507530 exit status 1 true [0xc001674000 0xc001674018 0xc001674030] [0xc001674000 0xc001674018 0xc001674030] [0xc001674010 0xc001674028] [0x935700 0x935700] 0xc00181f680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:23:28.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:23:28.618: INFO: rc: 1 Jan 9 11:23:28.619: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000507680 exit status 1 true [0xc001674038 0xc001674050 0xc001674068] [0xc001674038 0xc001674050 0xc001674068] [0xc001674048 0xc001674060] [0x935700 
0x935700] 0xc001bd4480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:23:38.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:23:38.779: INFO: rc: 1 Jan 9 11:23:38.779: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018b43c0 exit status 1 true [0xc0013e8258 0xc0013e8288 0xc0013e8300] [0xc0013e8258 0xc0013e8288 0xc0013e8300] [0xc0013e8280 0xc0013e82e0] [0x935700 0x935700] 0xc001d80840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:23:48.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:23:49.002: INFO: rc: 1 Jan 9 11:23:49.002: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018b44e0 exit status 1 true [0xc0013e8308 0xc0013e83e8 0xc0013e8420] [0xc0013e8308 0xc0013e83e8 0xc0013e8420] [0xc0013e8380 0xc0013e8418] [0x935700 0x935700] 0xc001d80ba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:23:59.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true' Jan 9 11:23:59.158: INFO: rc: 1 Jan 9 11:23:59.159: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a14510 exit status 1 true [0xc00094c038 0xc00094c050 0xc00094c068] [0xc00094c038 0xc00094c050 0xc00094c068] [0xc00094c048 0xc00094c060] [0x935700 0x935700] 0xc00174f200 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:24:09.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:24:09.325: INFO: rc: 1 Jan 9 11:24:09.325: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001758690 exit status 1 true [0xc00032ae20 0xc00032aed0 0xc00032afd8] [0xc00032ae20 0xc00032aed0 0xc00032afd8] [0xc00032ae50 0xc00032afd0] [0x935700 0x935700] 0xc001c68720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:24:19.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:24:19.479: INFO: rc: 1 Jan 9 11:24:19.479: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-0" not found [] 0xc0005077d0 exit status 1 true [0xc001674070 0xc001674088 0xc0016740a0] [0xc001674070 0xc001674088 0xc0016740a0] [0xc001674080 0xc001674098] [0x935700 0x935700] 0xc001bd5260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:24:29.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:24:29.669: INFO: rc: 1 Jan 9 11:24:29.669: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018b4600 exit status 1 true [0xc0013e8438 0xc0013e8498 0xc0013e84b8] [0xc0013e8438 0xc0013e8498 0xc0013e84b8] [0xc0013e8488 0xc0013e84b0] [0x935700 0x935700] 0xc001d80e40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:24:39.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:24:39.831: INFO: rc: 1 Jan 9 11:24:39.832: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018b4150 exit status 1 true [0xc00016e000 0xc00094c000 0xc00094c018] [0xc00016e000 0xc00094c000 0xc00094c018] [0xc00000e2e8 0xc00094c010] [0x935700 0x935700] 0xc0017bae40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 
11:24:49.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:24:49.974: INFO: rc: 1 Jan 9 11:24:49.974: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a143c0 exit status 1 true [0xc0013e8000 0xc0013e8078 0xc0013e8120] [0xc0013e8000 0xc0013e8078 0xc0013e8120] [0xc0013e8040 0xc0013e8118] [0x935700 0x935700] 0xc00174eba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:24:59.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:25:00.143: INFO: rc: 1 Jan 9 11:25:00.144: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018b42a0 exit status 1 true [0xc00094c020 0xc00094c038 0xc00094c050] [0xc00094c020 0xc00094c038 0xc00094c050] [0xc00094c030 0xc00094c048] [0x935700 0x935700] 0xc001d801e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 9 11:25:10.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 9 11:25:10.312: INFO: rc: 1 Jan 9 11:25:10.313: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018b43f0 exit status 1 true [0xc00094c058 0xc00094c070 0xc00094c088] [0xc00094c058 0xc00094c070 0xc00094c088] [0xc00094c068 0xc00094c080] [0x935700 0x935700] 0xc001d80480 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Jan 9 11:25:20.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:25:20.450: INFO: rc: 1
Jan 9 11:25:20.451: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0018b4660 exit status 1 true [0xc00094c090 0xc00094c0a8 0xc00094c0c0] [0xc00094c090 0xc00094c0a8 0xc00094c0c0] [0xc00094c0a0 0xc00094c0b8] [0x935700 0x935700] 0xc001d80840 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
[Seven further identical RunHostCmd retries at 10s intervals (11:25:30 through 11:26:31), each failing with the same NotFound error; the duplicated command-struct dumps are elided.]
Jan 9 11:26:41.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6l8qp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 9 11:26:41.830: INFO: rc: 1
Jan 9 11:26:41.830: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
Jan 9 11:26:41.830: INFO: Scaling statefulset ss to 0
Jan 9 11:26:41.855: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 9 11:26:41.858: INFO: Deleting all statefulset in ns
e2e-tests-statefulset-6l8qp Jan 9 11:26:41.862: INFO: Scaling statefulset ss to 0 Jan 9 11:26:41.871: INFO: Waiting for statefulset status.replicas updated to 0 Jan 9 11:26:41.873: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:26:41.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-6l8qp" for this suite. Jan 9 11:26:48.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:26:48.181: INFO: namespace: e2e-tests-statefulset-6l8qp, resource: bindings, ignored listing per whitelist Jan 9 11:26:48.192: INFO: namespace e2e-tests-statefulset-6l8qp deletion completed in 6.244194421s • [SLOW TEST:377.044 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:26:48.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] 
StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-jtvqk [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-jtvqk STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-jtvqk STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-jtvqk STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-jtvqk Jan 9 11:27:00.786: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-jtvqk, name: ss-0, uid: efb72b8b-32d2-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete. Jan 9 11:27:02.480: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-jtvqk, name: ss-0, uid: efb72b8b-32d2-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Jan 9 11:27:02.661: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-jtvqk, name: ss-0, uid: efb72b8b-32d2-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. 
Jan 9 11:27:02.699: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-jtvqk STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-jtvqk STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-jtvqk and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 9 11:27:15.571: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jtvqk Jan 9 11:27:15.584: INFO: Scaling statefulset ss to 0 Jan 9 11:27:35.661: INFO: Waiting for statefulset status.replicas updated to 0 Jan 9 11:27:35.679: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:27:35.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-jtvqk" for this suite. 
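The burst-scaling test above retried its `kubectl exec` RunHostCmd every 10s until the window ran out, then moved on to scaling the set down. A minimal sketch of that retry-until-success pattern (illustrative Python, not the e2e framework's actual Go implementation; `cmd_fn` is a hypothetical stand-in for the kubectl invocation):

```python
import time

def run_host_cmd_with_retry(cmd_fn, interval_s=10, max_attempts=8):
    """Retry cmd_fn until it reports rc 0, sleeping between attempts.

    cmd_fn is any callable returning (rc, stdout); here it stands in
    for the 'kubectl exec ... mv -v ...' command retried in the log.
    """
    last_rc = None
    for attempt in range(max_attempts):
        last_rc, stdout = cmd_fn()
        if last_rc == 0:
            return stdout
        time.sleep(interval_s)
    raise RuntimeError(f"rc: {last_rc} after {max_attempts} attempts")
```

As in the log, a fixed attempt budget bounds the wait; the caller decides whether a persistent failure aborts the test or is tolerated.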
Jan 9 11:27:41.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:27:41.931: INFO: namespace: e2e-tests-statefulset-jtvqk, resource: bindings, ignored listing per whitelist Jan 9 11:27:41.960: INFO: namespace e2e-tests-statefulset-jtvqk deletion completed in 6.23098596s • [SLOW TEST:53.768 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:27:41.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-4kk2z STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 9 11:27:42.160: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 9 11:28:20.485: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-4kk2z PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 9 11:28:20.485: INFO: >>> kubeConfig: /root/.kube/config I0109 11:28:20.644835 9 log.go:172] (0xc000176a50) (0xc0016980a0) Create stream I0109 11:28:20.644958 9 log.go:172] (0xc000176a50) (0xc0016980a0) Stream added, broadcasting: 1 I0109 11:28:20.651160 9 log.go:172] (0xc000176a50) Reply frame received for 1 I0109 11:28:20.651192 9 log.go:172] (0xc000176a50) (0xc001a13900) Create stream I0109 11:28:20.651201 9 log.go:172] (0xc000176a50) (0xc001a13900) Stream added, broadcasting: 3 I0109 11:28:20.652148 9 log.go:172] (0xc000176a50) Reply frame received for 3 I0109 11:28:20.652171 9 log.go:172] (0xc000176a50) (0xc001a139a0) Create stream I0109 11:28:20.652179 9 log.go:172] (0xc000176a50) (0xc001a139a0) Stream added, broadcasting: 5 I0109 11:28:20.653442 9 log.go:172] (0xc000176a50) Reply frame received for 5 I0109 11:28:20.878536 9 log.go:172] (0xc000176a50) Data frame received for 3 I0109 11:28:20.878840 9 log.go:172] (0xc001a13900) (3) Data frame handling I0109 11:28:20.878880 9 log.go:172] (0xc001a13900) (3) Data frame sent I0109 11:28:21.083314 9 log.go:172] (0xc000176a50) Data frame received for 1 I0109 11:28:21.083468 9 log.go:172] (0xc000176a50) (0xc001a13900) Stream removed, broadcasting: 3 I0109 11:28:21.083532 9 log.go:172] (0xc0016980a0) (1) Data frame handling I0109 11:28:21.083554 9 log.go:172] (0xc0016980a0) (1) Data frame sent I0109 11:28:21.083571 9 log.go:172] (0xc000176a50) (0xc001a139a0) Stream removed, broadcasting: 5 I0109 11:28:21.083613 9 log.go:172] (0xc000176a50) (0xc0016980a0) Stream removed, broadcasting: 1 I0109 11:28:21.083642 9 log.go:172] (0xc000176a50) Go away received I0109 11:28:21.083811 9 log.go:172] (0xc000176a50) (0xc0016980a0) Stream removed, broadcasting: 1 I0109 
11:28:21.083843 9 log.go:172] (0xc000176a50) (0xc001a13900) Stream removed, broadcasting: 3 I0109 11:28:21.083861 9 log.go:172] (0xc000176a50) (0xc001a139a0) Stream removed, broadcasting: 5 Jan 9 11:28:21.083: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:28:21.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-4kk2z" for this suite. Jan 9 11:28:45.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:28:45.192: INFO: namespace: e2e-tests-pod-network-test-4kk2z, resource: bindings, ignored listing per whitelist Jan 9 11:28:45.269: INFO: namespace e2e-tests-pod-network-test-4kk2z deletion completed in 24.168408333s • [SLOW TEST:63.309 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:28:45.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be 
provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 9 11:31:50.185: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 9 11:31:50.238: INFO: Pod pod-with-poststart-exec-hook still exists
[The same check repeats at 2s intervals, reporting "still exists" through 11:32:24; seventeen identical intermediate entries elided.]
Jan 9 11:32:26.239: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 9 11:32:26.248: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:32:26.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-vxx4g" for this suite.
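The 2-second "Waiting for pod ... to disappear" loop above is a wait-until-gone poll: keep fetching the object until the API server reports NotFound. A loose Python sketch of that shape (the real framework is Go; `get_fn` is a hypothetical getter, with `LookupError` standing in for the NotFound response):

```python
import time

def wait_for_deletion(get_fn, poll_s=2, timeout_s=120):
    """Poll get_fn() until it raises LookupError (object gone).

    get_fn stands in for a pod GET; LookupError models the API
    server's NotFound response seen in the log above.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            get_fn()  # pod still exists
        except LookupError:
            return  # pod no longer exists
        time.sleep(poll_s)
    raise TimeoutError(f"object still exists after {timeout_s}s")
```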
Jan 9 11:32:50.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:32:50.479: INFO: namespace: e2e-tests-container-lifecycle-hook-vxx4g, resource: bindings, ignored listing per whitelist Jan 9 11:32:50.565: INFO: namespace e2e-tests-container-lifecycle-hook-vxx4g deletion completed in 24.240966847s • [SLOW TEST:245.296 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:32:50.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 9 11:32:50.897: INFO: Waiting 
up to 5m0s for pod "downwardapi-volume-c4a937c9-32d3-11ea-ac2d-0242ac110005" in namespace "e2e-tests-downward-api-f76w5" to be "success or failure" Jan 9 11:32:50.941: INFO: Pod "downwardapi-volume-c4a937c9-32d3-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 44.371014ms Jan 9 11:32:52.960: INFO: Pod "downwardapi-volume-c4a937c9-32d3-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063064745s Jan 9 11:32:54.993: INFO: Pod "downwardapi-volume-c4a937c9-32d3-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096446246s Jan 9 11:32:57.818: INFO: Pod "downwardapi-volume-c4a937c9-32d3-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.920804182s Jan 9 11:32:59.860: INFO: Pod "downwardapi-volume-c4a937c9-32d3-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.963616175s Jan 9 11:33:01.891: INFO: Pod "downwardapi-volume-c4a937c9-32d3-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.994500524s STEP: Saw pod success Jan 9 11:33:01.891: INFO: Pod "downwardapi-volume-c4a937c9-32d3-11ea-ac2d-0242ac110005" satisfied condition "success or failure" Jan 9 11:33:01.899: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c4a937c9-32d3-11ea-ac2d-0242ac110005 container client-container: STEP: delete the pod Jan 9 11:33:02.038: INFO: Waiting for pod downwardapi-volume-c4a937c9-32d3-11ea-ac2d-0242ac110005 to disappear Jan 9 11:33:02.054: INFO: Pod downwardapi-volume-c4a937c9-32d3-11ea-ac2d-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:33:02.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-f76w5" for this suite. 
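The downward API test above waits up to 5m for the pod to reach the "success or failure" condition, logging the phase and elapsed time at each poll. A simplified Python sketch of that terminal-phase wait (hypothetical `get_phase` callable standing in for reading `pod.status.phase`; the real Go check is richer):

```python
import time

def wait_for_terminal_phase(get_phase, timeout_s=300, poll_s=2):
    """Poll get_phase() until the pod is Succeeded or Failed.

    Mirrors the log above, which reports Phase="Pending" with a growing
    Elapsed value until the pod turns Succeeded.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        phase = get_phase()
        print(f'Phase="{phase}". Elapsed: {time.monotonic() - start:.0f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(poll_s)
    raise TimeoutError("pod never reached a terminal phase")
```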
Jan 9 11:33:08.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:33:08.243: INFO: namespace: e2e-tests-downward-api-f76w5, resource: bindings, ignored listing per whitelist Jan 9 11:33:08.298: INFO: namespace e2e-tests-downward-api-f76w5 deletion completed in 6.224044801s • [SLOW TEST:17.733 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:33:08.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search 
kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-j59ws.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-j59ws.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-j59ws.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-j59ws.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-j59ws.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-j59ws.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 9 11:33:22.726: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005) Jan 9 11:33:22.742: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005) Jan 9 11:33:22.749: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005) Jan 9 11:33:22.765: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005) Jan 9 11:33:22.775: INFO: 
Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005) Jan 9 11:33:22.782: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005) Jan 9 11:33:22.787: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-j59ws.svc.cluster.local from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005) Jan 9 11:33:22.795: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005) Jan 9 11:33:22.802: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005) Jan 9 11:33:22.806: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005) Jan 9 11:33:22.811: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005) Jan 9 11:33:22.815: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the 
requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005)
Jan 9 11:33:22.819: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005)
Jan 9 11:33:22.834: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005)
Jan 9 11:33:22.845: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005)
Jan 9 11:33:22.855: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005)
Jan 9 11:33:22.875: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-j59ws.svc.cluster.local from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005)
Jan 9 11:33:22.887: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005)
Jan 9 11:33:22.895: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005)
Jan 9 11:33:22.900: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005: the server could not find the requested resource (get pods dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005)
Jan 9 11:33:22.900: INFO: Lookups using e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-j59ws.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-j59ws.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
Jan 9 11:33:28.142: INFO: DNS probes using e2e-tests-dns-j59ws/dns-test-cf4152b9-32d3-11ea-ac2d-0242ac110005 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:33:28.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-j59ws" for this suite.
Jan 9 11:33:34.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:33:34.904: INFO: namespace: e2e-tests-dns-j59ws, resource: bindings, ignored listing per whitelist
Jan 9 11:33:34.924: INFO: namespace e2e-tests-dns-j59ws deletion completed in 6.487022925s

• [SLOW TEST:26.626 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:33:34.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 9 11:33:35.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-6rlft'
Jan 9 11:33:37.097: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 9 11:33:37.097: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan 9 11:33:41.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-6rlft'
Jan 9 11:33:41.900: INFO: stderr: ""
Jan 9 11:33:41.900: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:33:41.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6rlft" for this suite.
Jan 9 11:34:04.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:34:04.142: INFO: namespace: e2e-tests-kubectl-6rlft, resource: bindings, ignored listing per whitelist
Jan 9 11:34:04.178: INFO: namespace e2e-tests-kubectl-6rlft deletion completed in 22.262548191s

• [SLOW TEST:29.254 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:34:04.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 9 11:34:04.415: INFO: Creating deployment "nginx-deployment"
Jan 9 11:34:04.422: INFO: Waiting for observed generation 1
Jan 9 11:34:06.998: INFO: Waiting for all required pods to come up
Jan 9 11:34:07.640: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 9 11:34:47.766: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 9 11:34:47.779: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 9 11:34:47.802: INFO: Updating deployment nginx-deployment
Jan 9 11:34:47.802: INFO: Waiting for observed generation 2
Jan 9 11:34:50.394: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 9 11:34:51.879: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 9 11:34:51.893: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 9 11:34:52.173: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 9 11:34:52.173: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 9 11:34:52.184: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 9 11:34:52.504: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 9 11:34:52.504: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 9 11:34:52.568: INFO: Updating deployment nginx-deployment
Jan 9 11:34:52.568: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 9 11:34:52.867: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 9 11:34:55.944: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 9 11:34:57.580: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-76qdz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-76qdz/deployments/nginx-deployment,UID:f09757d1-32d3-11ea-a994-fa163e34d433,ResourceVersion:17693236,Generation:3,CreationTimestamp:2020-01-09 11:34:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-09 11:34:49 +0000 UTC 2020-01-09 11:34:04 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-01-09 11:34:53 +0000 UTC 2020-01-09 11:34:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jan 9 11:34:59.369: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-76qdz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-76qdz/replicasets/nginx-deployment-5c98f8fb5,UID:0a74603d-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693295,Generation:3,CreationTimestamp:2020-01-09 11:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f09757d1-32d3-11ea-a994-fa163e34d433 0xc001360d27 0xc001360d28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 9 11:34:59.369: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 9 11:34:59.370: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-76qdz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-76qdz/replicasets/nginx-deployment-85ddf47c5d,UID:f099d13c-32d3-11ea-a994-fa163e34d433,ResourceVersion:17693277,Generation:3,CreationTimestamp:2020-01-09 11:34:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f09757d1-32d3-11ea-a994-fa163e34d433 0xc001360e67 0xc001360e68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 9 11:35:00.280: INFO: Pod "nginx-deployment-5c98f8fb5-4l5hv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4l5hv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-5c98f8fb5-4l5hv,UID:0f4262f3-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693274,Generation:0,CreationTimestamp:2020-01-09 11:34:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0a74603d-32d4-11ea-a994-fa163e34d433 0xc0014a0fb7 0xc0014a0fb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014a1020} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0014a1040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.281: INFO: Pod "nginx-deployment-5c98f8fb5-58hp2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-58hp2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-5c98f8fb5-58hp2,UID:0e8f1bbc-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693261,Generation:0,CreationTimestamp:2020-01-09 11:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0a74603d-32d4-11ea-a994-fa163e34d433 0xc0014a10b7 0xc0014a10b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014a1150} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014a1170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.281: INFO: Pod "nginx-deployment-5c98f8fb5-ggrfr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ggrfr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-5c98f8fb5-ggrfr,UID:0f457df2-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693279,Generation:0,CreationTimestamp:2020-01-09 11:34:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0a74603d-32d4-11ea-a994-fa163e34d433 0xc0014a11e7 0xc0014a11e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014a1250} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014a1270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.281: INFO: Pod "nginx-deployment-5c98f8fb5-h9b8t" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-h9b8t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-5c98f8fb5-h9b8t,UID:0feeb89e-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693289,Generation:0,CreationTimestamp:2020-01-09 11:34:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0a74603d-32d4-11ea-a994-fa163e34d433 0xc0014a12e7 0xc0014a12e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014a1410} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0014a1430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.282: INFO: Pod "nginx-deployment-5c98f8fb5-jxzff" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jxzff,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-5c98f8fb5-jxzff,UID:0ad2c8de-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693226,Generation:0,CreationTimestamp:2020-01-09 11:34:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0a74603d-32d4-11ea-a994-fa163e34d433 0xc0014a14a7 0xc0014a14a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014a1510} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014a1530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:48 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-09 11:34:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.282: INFO: Pod "nginx-deployment-5c98f8fb5-kbxvx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kbxvx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-5c98f8fb5-kbxvx,UID:0a79111b-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693220,Generation:0,CreationTimestamp:2020-01-09 11:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0a74603d-32d4-11ea-a994-fa163e34d433 0xc0014a15f7 0xc0014a15f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014a1660} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0014a1680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-09 11:34:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.282: INFO: Pod "nginx-deployment-5c98f8fb5-ndlms" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ndlms,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-5c98f8fb5-ndlms,UID:0f442d77-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693284,Generation:0,CreationTimestamp:2020-01-09 11:34:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0a74603d-32d4-11ea-a994-fa163e34d433 0xc0014a1747 0xc0014a1748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014a17c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014a17e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.283: INFO: Pod "nginx-deployment-5c98f8fb5-p62xk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p62xk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-5c98f8fb5-p62xk,UID:0a8bf9b6-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693225,Generation:0,CreationTimestamp:2020-01-09 
11:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0a74603d-32d4-11ea-a994-fa163e34d433 0xc0014a1857 0xc0014a1858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014a18c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014a18e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-01-09 11:34:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-09 11:34:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.283: INFO: Pod "nginx-deployment-5c98f8fb5-q257h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-q257h,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-5c98f8fb5-q257h,UID:0f44e988-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693281,Generation:0,CreationTimestamp:2020-01-09 11:34:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0a74603d-32d4-11ea-a994-fa163e34d433 0xc0014a19d7 0xc0014a19d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014a1a50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014a1a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.283: INFO: Pod "nginx-deployment-5c98f8fb5-qksgz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qksgz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-5c98f8fb5-qksgz,UID:0e2bb600-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693250,Generation:0,CreationTimestamp:2020-01-09 11:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-5c98f8fb5 0a74603d-32d4-11ea-a994-fa163e34d433 0xc0014a1ae7 0xc0014a1ae8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014a1b50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014a1b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.283: INFO: Pod "nginx-deployment-5c98f8fb5-stbzp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-stbzp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-5c98f8fb5-stbzp,UID:0e8f3309-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693266,Generation:0,CreationTimestamp:2020-01-09 11:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0a74603d-32d4-11ea-a994-fa163e34d433 0xc0014a1c27 0xc0014a1c28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014a1c90} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0014a1cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.284: INFO: Pod "nginx-deployment-5c98f8fb5-tqjjs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tqjjs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-5c98f8fb5-tqjjs,UID:0af28b4c-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693228,Generation:0,CreationTimestamp:2020-01-09 11:34:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0a74603d-32d4-11ea-a994-fa163e34d433 0xc0014a1d27 0xc0014a1d28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014a1da0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014a1dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:48 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-09 11:34:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.284: INFO: Pod "nginx-deployment-5c98f8fb5-x6gwn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-x6gwn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-5c98f8fb5-x6gwn,UID:0a8c472f-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693205,Generation:0,CreationTimestamp:2020-01-09 11:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 0a74603d-32d4-11ea-a994-fa163e34d433 0xc0014a1e87 0xc0014a1e88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014a1ef0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0015fa080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-09 11:34:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.284: INFO: Pod "nginx-deployment-85ddf47c5d-49mj9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-49mj9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-49mj9,UID:f0c80e45-32d3-11ea-a994-fa163e34d433,ResourceVersion:17693168,Generation:0,CreationTimestamp:2020-01-09 11:34:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc0015fa1d7 0xc0015fa1d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015fa240} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015fa260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-09 11:34:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-09 11:34:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker://90f6a963c7256063db41c7fefc4ac6646a924ee83535108acbdae47717cd13c4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.284: INFO: Pod "nginx-deployment-85ddf47c5d-5pnkm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5pnkm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-5pnkm,UID:f0bd9567-32d3-11ea-a994-fa163e34d433,ResourceVersion:17693140,Generation:0,CreationTimestamp:2020-01-09 11:34:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc0015fa327 0xc0015fa328}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015fa410} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015fa430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-09 11:34:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-09 11:34:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1ec159fe1b1653fac3cc1208dbd23a75ed9df56c38f412c3e9bd37df726c6eef}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.285: INFO: Pod "nginx-deployment-85ddf47c5d-5q8lm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5q8lm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-5q8lm,UID:0e8f2313-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693264,Generation:0,CreationTimestamp:2020-01-09 11:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc0015fa4f7 0xc0015fa4f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0015fa870} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015fa890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.285: INFO: Pod "nginx-deployment-85ddf47c5d-62q95" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-62q95,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-62q95,UID:0e2fa201-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693253,Generation:0,CreationTimestamp:2020-01-09 11:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc0015fa907 0xc0015fa908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015fa970} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015fa990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.286: INFO: Pod "nginx-deployment-85ddf47c5d-6b2kv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6b2kv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-6b2kv,UID:f0de0ffe-32d3-11ea-a994-fa163e34d433,ResourceVersion:17693146,Generation:0,CreationTimestamp:2020-01-09 11:34:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc0015facc7 0xc0015facc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015fad30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015fad50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:05 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-01-09 11:34:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-09 11:34:39 +0000 
UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5ee34d75ca9ea6670adbd14355e24f6d1fd8ce5a89ddfec892b6c868723ba9dd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.287: INFO: Pod "nginx-deployment-85ddf47c5d-6nztl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6nztl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-6nztl,UID:0e8f26b1-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693272,Generation:0,CreationTimestamp:2020-01-09 11:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc0015fae17 0xc0015fae18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015fb130} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015fb150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.287: INFO: Pod "nginx-deployment-85ddf47c5d-7rhqw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7rhqw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-7rhqw,UID:f0de0d2a-32d3-11ea-a994-fa163e34d433,ResourceVersion:17693150,Generation:0,CreationTimestamp:2020-01-09 11:34:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc0015fb1c7 0xc0015fb1c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015fb230} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015fb250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-09 11:34:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-09 11:34:39 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://54afcc257f7290cbaaeeb8ba9292af3bf67462b366a4c58e25a5bc7d6e4e80a0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.288: INFO: Pod "nginx-deployment-85ddf47c5d-89hts" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-89hts,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-89hts,UID:0e2f86fc-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693252,Generation:0,CreationTimestamp:2020-01-09 11:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc0015fb3a7 0xc0015fb3a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015fb410} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015fb430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:54 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.288: INFO: Pod "nginx-deployment-85ddf47c5d-8g5nr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8g5nr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-8g5nr,UID:f0c922e8-32d3-11ea-a994-fa163e34d433,ResourceVersion:17693144,Generation:0,CreationTimestamp:2020-01-09 11:34:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc0015fb4a7 0xc0015fb4a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015fb520} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015fb600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-01-09 11:34:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-09 11:34:39 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://281eff2b88ce4ee2b76598bea2c89db6ff27d7181d5ec8dd81fb7ea7d99267fc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.288: INFO: Pod "nginx-deployment-85ddf47c5d-czbh4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-czbh4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-czbh4,UID:0f417e8b-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693276,Generation:0,CreationTimestamp:2020-01-09 11:34:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc0015fbdc7 0xc0015fbdc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015fbe30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015fbe50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.289: INFO: Pod "nginx-deployment-85ddf47c5d-dswc2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dswc2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-dswc2,UID:f0c8279f-32d3-11ea-a994-fa163e34d433,ResourceVersion:17693153,Generation:0,CreationTimestamp:2020-01-09 11:34:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc0015fbec7 0xc0015fbec8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b08000} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b08020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-09 11:34:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-09 11:34:39 +0000 
UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0e8598e5f61f48f9ec13179f3807f4e96228ebf15ba00376cdc663992485fbf6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.289: INFO: Pod "nginx-deployment-85ddf47c5d-kgbfc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kgbfc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-kgbfc,UID:0e2a4e87-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693301,Generation:0,CreationTimestamp:2020-01-09 11:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc000b080f7 0xc000b080f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b08160} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b08180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:54 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-09 11:34:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.289: INFO: Pod "nginx-deployment-85ddf47c5d-lhdp2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lhdp2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-lhdp2,UID:0f45202d-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693283,Generation:0,CreationTimestamp:2020-01-09 11:34:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc000b08237 0xc000b08238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc000b082a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b082c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.290: INFO: Pod "nginx-deployment-85ddf47c5d-mfvzx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mfvzx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-mfvzx,UID:0f455e3b-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693275,Generation:0,CreationTimestamp:2020-01-09 11:34:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc000b08337 0xc000b08338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b083a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b083c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.290: INFO: Pod "nginx-deployment-85ddf47c5d-ml778" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ml778,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-ml778,UID:0f44ff2a-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693282,Generation:0,CreationTimestamp:2020-01-09 11:34:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc000b08437 0xc000b08438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b084a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b084c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.290: INFO: Pod "nginx-deployment-85ddf47c5d-pb67k" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pb67k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-pb67k,UID:0f461c34-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693280,Generation:0,CreationTimestamp:2020-01-09 11:34:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc000b08537 0xc000b08538}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc000b085a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b085c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.291: INFO: Pod "nginx-deployment-85ddf47c5d-pgbdl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pgbdl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-pgbdl,UID:0e8eaed5-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693263,Generation:0,CreationTimestamp:2020-01-09 11:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc000b08637 0xc000b08638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b086a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b086c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.291: INFO: Pod "nginx-deployment-85ddf47c5d-twcsh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-twcsh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-twcsh,UID:0e8efcbc-32d4-11ea-a994-fa163e34d433,ResourceVersion:17693271,Generation:0,CreationTimestamp:2020-01-09 11:34:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc000b08737 0xc000b08738}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b087a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b087c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.292: INFO: Pod "nginx-deployment-85ddf47c5d-vm5vl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vm5vl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-vm5vl,UID:f0ddc6de-32d3-11ea-a994-fa163e34d433,ResourceVersion:17693132,Generation:0,CreationTimestamp:2020-01-09 11:34:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc000b08837 0xc000b08838}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc000b088a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b088c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-09 11:34:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-09 11:34:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b04f87d7e2eca6c2aec189e1d1609a2fa06c3567c8fefefcb5e416998a2a7181}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 9 11:35:00.292: INFO: Pod "nginx-deployment-85ddf47c5d-zr2kd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zr2kd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-76qdz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-76qdz/pods/nginx-deployment-85ddf47c5d-zr2kd,UID:f0c853f3-32d3-11ea-a994-fa163e34d433,ResourceVersion:17693158,Generation:0,CreationTimestamp:2020-01-09 11:34:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f099d13c-32d3-11ea-a994-fa163e34d433 0xc000b08987 0xc000b08988}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zz7kv {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-zz7kv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zz7kv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b089f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b08a10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 11:34:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-09 11:34:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-09 11:34:39 +0000 
UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f804fcf1a0eef94a8d8ff50d996a4eb8bcb4237b84a5dfaa3ac008f62ee2d923}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:35:00.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-76qdz" for this suite.
Jan 9 11:36:06.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:36:07.124: INFO: namespace: e2e-tests-deployment-76qdz, resource: bindings, ignored listing per whitelist
Jan 9 11:36:07.336: INFO: namespace e2e-tests-deployment-76qdz deletion completed in 1m7.025607858s
• [SLOW TEST:123.158 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:36:07.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 9 11:36:51.009: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 9 11:36:51.030: INFO: Pod pod-with-poststart-http-hook still exists
Jan 9 11:36:53.030: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 9 11:36:53.042: INFO: Pod pod-with-poststart-http-hook still exists
Jan 9 11:36:55.030: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 9 11:36:55.064: INFO: Pod pod-with-poststart-http-hook still exists
Jan 9 11:36:57.030: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 9 11:36:57.042: INFO: Pod pod-with-poststart-http-hook still exists
Jan 9 11:36:59.030: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 9 11:36:59.048: INFO: Pod pod-with-poststart-http-hook still exists
Jan 9 11:37:01.030: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 9 11:37:01.049: INFO: Pod pod-with-poststart-http-hook still exists
Jan 9 11:37:03.030: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 9 11:37:03.053: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:37:03.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-8t5n2" for this suite.
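The paired "Waiting for pod … to disappear" / "still exists" entries above come from a fixed-interval poll with an overall deadline. A minimal self-contained sketch of that pattern in Python (hypothetical helper name; not the framework's actual Go code):

```python
import time

def wait_for_disappear(get_pod, name, interval=2.0, timeout=300.0,
                       sleep=time.sleep, now=time.monotonic):
    """Poll until get_pod(name) returns None, logging like the e2e framework.

    Returns True if the pod disappeared before the deadline, False on timeout.
    """
    deadline = now() + timeout
    while now() < deadline:
        print(f"INFO: Waiting for pod {name} to disappear")
        if get_pod(name) is None:
            print(f"INFO: Pod {name} no longer exists")
            return True
        print(f"INFO: Pod {name} still exists")
        sleep(interval)  # fixed 2s interval, matching the timestamps above
    return False
```

Injecting `sleep` and `now` keeps the helper testable without real delays; the production loop in the e2e framework additionally distinguishes transient API errors from a missing pod.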
Jan 9 11:37:27.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:37:27.172: INFO: namespace: e2e-tests-container-lifecycle-hook-8t5n2, resource: bindings, ignored listing per whitelist
Jan 9 11:37:27.311: INFO: namespace e2e-tests-container-lifecycle-hook-8t5n2 deletion completed in 24.241172812s
• [SLOW TEST:79.974 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:37:27.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 9 11:37:27.588: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69ae43c3-32d4-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-vrtv7" to be "success or failure"
Jan 9 11:37:27.751: INFO: Pod "downwardapi-volume-69ae43c3-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 163.053445ms
Jan 9 11:37:29.838: INFO: Pod "downwardapi-volume-69ae43c3-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249281787s
Jan 9 11:37:31.855: INFO: Pod "downwardapi-volume-69ae43c3-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267084537s
Jan 9 11:37:35.112: INFO: Pod "downwardapi-volume-69ae43c3-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.523906858s
Jan 9 11:37:37.130: INFO: Pod "downwardapi-volume-69ae43c3-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.541363813s
Jan 9 11:37:39.146: INFO: Pod "downwardapi-volume-69ae43c3-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.557864195s
Jan 9 11:37:41.155: INFO: Pod "downwardapi-volume-69ae43c3-32d4-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.566975884s
STEP: Saw pod success
Jan 9 11:37:41.155: INFO: Pod "downwardapi-volume-69ae43c3-32d4-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:37:41.159: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-69ae43c3-32d4-11ea-ac2d-0242ac110005 container client-container: 
STEP: delete the pod
Jan 9 11:37:42.925: INFO: Waiting for pod downwardapi-volume-69ae43c3-32d4-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:37:42.977: INFO: Pod downwardapi-volume-69ae43c3-32d4-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:37:42.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vrtv7" for this suite.
Jan 9 11:37:49.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:37:49.267: INFO: namespace: e2e-tests-projected-vrtv7, resource: bindings, ignored listing per whitelist
Jan 9 11:37:49.475: INFO: namespace e2e-tests-projected-vrtv7 deletion completed in 6.41387093s
• [SLOW TEST:22.164 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:37:49.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 9 11:37:49.690: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76d9d946-32d4-11ea-ac2d-0242ac110005" in namespace "e2e-tests-downward-api-szt7l" to be "success or failure"
Jan 9 11:37:49.800: INFO: Pod "downwardapi-volume-76d9d946-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 109.183682ms
Jan 9 11:37:51.816: INFO: Pod "downwardapi-volume-76d9d946-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125348692s
Jan 9 11:37:53.878: INFO: Pod "downwardapi-volume-76d9d946-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187738388s
Jan 9 11:37:55.887: INFO: Pod "downwardapi-volume-76d9d946-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196036173s
Jan 9 11:37:57.900: INFO: Pod "downwardapi-volume-76d9d946-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.209961705s
Jan 9 11:37:59.917: INFO: Pod "downwardapi-volume-76d9d946-32d4-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.226635724s
STEP: Saw pod success
Jan 9 11:37:59.917: INFO: Pod "downwardapi-volume-76d9d946-32d4-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:37:59.923: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-76d9d946-32d4-11ea-ac2d-0242ac110005 container client-container: 
STEP: delete the pod
Jan 9 11:38:00.104: INFO: Waiting for pod downwardapi-volume-76d9d946-32d4-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:38:00.124: INFO: Pod downwardapi-volume-76d9d946-32d4-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:38:00.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-szt7l" for this suite.
Jan 9 11:38:06.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:38:06.300: INFO: namespace: e2e-tests-downward-api-szt7l, resource: bindings, ignored listing per whitelist
Jan 9 11:38:06.404: INFO: namespace e2e-tests-downward-api-szt7l deletion completed in 6.266463234s
• [SLOW TEST:16.928 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:38:06.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 9 11:38:06.778: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:38:21.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-d87pl" for this suite.
Jan 9 11:38:28.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:38:28.113: INFO: namespace: e2e-tests-init-container-d87pl, resource: bindings, ignored listing per whitelist
Jan 9 11:38:28.225: INFO: namespace e2e-tests-init-container-d87pl deletion completed in 6.340395563s
• [SLOW TEST:21.821 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:38:28.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan 9 11:38:28.443: INFO: Waiting up to 5m0s for pod "client-containers-8df468dc-32d4-11ea-ac2d-0242ac110005" in namespace "e2e-tests-containers-8dd2h" to be "success or failure"
Jan 9 11:38:28.464: INFO: Pod "client-containers-8df468dc-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.903245ms
Jan 9 11:38:30.485: INFO: Pod "client-containers-8df468dc-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041558391s
Jan 9 11:38:32.511: INFO: Pod "client-containers-8df468dc-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067487025s
Jan 9 11:38:34.562: INFO: Pod "client-containers-8df468dc-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118566224s
Jan 9 11:38:36.588: INFO: Pod "client-containers-8df468dc-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.144974392s
Jan 9 11:38:38.655: INFO: Pod "client-containers-8df468dc-32d4-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.21187457s
STEP: Saw pod success
Jan 9 11:38:38.655: INFO: Pod "client-containers-8df468dc-32d4-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:38:38.673: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-8df468dc-32d4-11ea-ac2d-0242ac110005 container test-container: 
STEP: delete the pod
Jan 9 11:38:38.792: INFO: Waiting for pod client-containers-8df468dc-32d4-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:38:38.800: INFO: Pod client-containers-8df468dc-32d4-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:38:38.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-8dd2h" for this suite.
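Entries like `Elapsed: 20.903245ms` and `Elapsed: 10.21187457s` above use Go's duration formatting. When timing test runs from a log like this, a small self-contained helper (hypothetical, for log analysis only; the regex covers just the `ms` and `s` forms that appear here) can normalize them to seconds:

```python
import re

# Matches the Go-style durations this log prints, e.g. "20.903245ms" or "10.21187457s".
_ELAPSED = re.compile(r"Elapsed:\s*([0-9.]+)(ms|s)\b")

def elapsed_seconds(lines):
    """Return the Elapsed value of each matching log line, converted to seconds."""
    out = []
    for line in lines:
        m = _ELAPSED.search(line)
        if m:
            value, unit = float(m.group(1)), m.group(2)
            out.append(value / 1000.0 if unit == "ms" else value)
    return out
```

Full Go duration strings can also carry `us`, `m`, and `h` components (e.g. `1m7.025607858s` in the namespace-deletion entries), which this sketch deliberately does not handle.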
Jan 9 11:38:44.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:38:44.956: INFO: namespace: e2e-tests-containers-8dd2h, resource: bindings, ignored listing per whitelist
Jan 9 11:38:44.987: INFO: namespace e2e-tests-containers-8dd2h deletion completed in 6.182078248s
• [SLOW TEST:16.762 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:38:44.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:38:51.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-lhrxw" for this suite.
Jan 9 11:38:57.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:38:57.989: INFO: namespace: e2e-tests-namespaces-lhrxw, resource: bindings, ignored listing per whitelist
Jan 9 11:38:58.009: INFO: namespace e2e-tests-namespaces-lhrxw deletion completed in 6.245119951s
STEP: Destroying namespace "e2e-tests-nsdeletetest-5mmj9" for this suite.
Jan 9 11:38:58.012: INFO: Namespace e2e-tests-nsdeletetest-5mmj9 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-h8wpw" for this suite.
Jan 9 11:39:04.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:39:04.088: INFO: namespace: e2e-tests-nsdeletetest-h8wpw, resource: bindings, ignored listing per whitelist
Jan 9 11:39:04.139: INFO: namespace e2e-tests-nsdeletetest-h8wpw deletion completed in 6.12699298s
• [SLOW TEST:19.152 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:39:04.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-a3569113-32d4-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 9 11:39:04.331: INFO: Waiting up to 5m0s for pod "pod-configmaps-a3581191-32d4-11ea-ac2d-0242ac110005" in namespace "e2e-tests-configmap-ll9h2" to be "success or failure"
Jan 9 11:39:04.340: INFO: Pod "pod-configmaps-a3581191-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.387455ms
Jan 9 11:39:06.658: INFO: Pod "pod-configmaps-a3581191-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326733021s
Jan 9 11:39:08.688: INFO: Pod "pod-configmaps-a3581191-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356480983s
Jan 9 11:39:10.826: INFO: Pod "pod-configmaps-a3581191-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.494862892s
Jan 9 11:39:12.865: INFO: Pod "pod-configmaps-a3581191-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.534194878s
Jan 9 11:39:14.903: INFO: Pod "pod-configmaps-a3581191-32d4-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.572068487s
STEP: Saw pod success
Jan 9 11:39:14.903: INFO: Pod "pod-configmaps-a3581191-32d4-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:39:14.915: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a3581191-32d4-11ea-ac2d-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 9 11:39:15.046: INFO: Waiting for pod pod-configmaps-a3581191-32d4-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:39:15.082: INFO: Pod pod-configmaps-a3581191-32d4-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:39:15.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ll9h2" for this suite.
Jan 9 11:39:21.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:39:21.406: INFO: namespace: e2e-tests-configmap-ll9h2, resource: bindings, ignored listing per whitelist
Jan 9 11:39:21.534: INFO: namespace e2e-tests-configmap-ll9h2 deletion completed in 6.442259646s

• [SLOW TEST:17.395 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:39:21.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 9 11:39:21.932: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"add24dd0-32d4-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001595f42), BlockOwnerDeletion:(*bool)(0xc001595f43)}}
Jan 9 11:39:21.972: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"adbe37f2-32d4-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001d0e522), BlockOwnerDeletion:(*bool)(0xc001d0e523)}}
Jan 9 11:39:22.111: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"adc3d852-32d4-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00237cf42), BlockOwnerDeletion:(*bool)(0xc00237cf43)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:39:27.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-kccp9" for this suite.
Jan 9 11:39:33.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:39:33.399: INFO: namespace: e2e-tests-gc-kccp9, resource: bindings, ignored listing per whitelist
Jan 9 11:39:33.432: INFO: namespace e2e-tests-gc-kccp9 deletion completed in 6.240328219s

• [SLOW TEST:11.897 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:39:33.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:40:33.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-bsqs5" for this suite.
Jan 9 11:40:57.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:40:57.901: INFO: namespace: e2e-tests-container-probe-bsqs5, resource: bindings, ignored listing per whitelist
Jan 9 11:40:58.046: INFO: namespace e2e-tests-container-probe-bsqs5 deletion completed in 24.343870159s

• [SLOW TEST:84.613 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:40:58.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-e747ee04-32d4-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 9 11:40:58.317: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e749aa36-32d4-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-48cdc" to be "success or failure"
Jan 9 11:40:58.324: INFO: Pod "pod-projected-secrets-e749aa36-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.742792ms
Jan 9 11:41:00.340: INFO: Pod "pod-projected-secrets-e749aa36-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023741181s
Jan 9 11:41:02.370: INFO: Pod "pod-projected-secrets-e749aa36-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053291751s
Jan 9 11:41:04.523: INFO: Pod "pod-projected-secrets-e749aa36-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20635854s
Jan 9 11:41:06.545: INFO: Pod "pod-projected-secrets-e749aa36-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.228434133s
Jan 9 11:41:08.578: INFO: Pod "pod-projected-secrets-e749aa36-32d4-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.260928131s
STEP: Saw pod success
Jan 9 11:41:08.578: INFO: Pod "pod-projected-secrets-e749aa36-32d4-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:41:08.593: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-e749aa36-32d4-11ea-ac2d-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 9 11:41:09.604: INFO: Waiting for pod pod-projected-secrets-e749aa36-32d4-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:41:10.147: INFO: Pod pod-projected-secrets-e749aa36-32d4-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:41:10.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-48cdc" for this suite.
Jan 9 11:41:16.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:41:16.393: INFO: namespace: e2e-tests-projected-48cdc, resource: bindings, ignored listing per whitelist
Jan 9 11:41:16.545: INFO: namespace e2e-tests-projected-48cdc deletion completed in 6.372271106s

• [SLOW TEST:18.499 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:41:16.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 9 11:41:16.877: INFO: Waiting up to 5m0s for pod "downward-api-f250c147-32d4-11ea-ac2d-0242ac110005" in namespace "e2e-tests-downward-api-ln8vf" to be "success or failure"
Jan 9 11:41:16.893: INFO: Pod "downward-api-f250c147-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.192852ms
Jan 9 11:41:18.905: INFO: Pod "downward-api-f250c147-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028047249s
Jan 9 11:41:20.929: INFO: Pod "downward-api-f250c147-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052113646s
Jan 9 11:41:22.982: INFO: Pod "downward-api-f250c147-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104690085s
Jan 9 11:41:24.995: INFO: Pod "downward-api-f250c147-32d4-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117843516s
Jan 9 11:41:27.015: INFO: Pod "downward-api-f250c147-32d4-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.137412242s
STEP: Saw pod success
Jan 9 11:41:27.015: INFO: Pod "downward-api-f250c147-32d4-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:41:27.021: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-f250c147-32d4-11ea-ac2d-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 9 11:41:27.751: INFO: Waiting for pod downward-api-f250c147-32d4-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:41:28.079: INFO: Pod downward-api-f250c147-32d4-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:41:28.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ln8vf" for this suite.
Jan 9 11:41:34.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:41:34.270: INFO: namespace: e2e-tests-downward-api-ln8vf, resource: bindings, ignored listing per whitelist
Jan 9 11:41:34.302: INFO: namespace e2e-tests-downward-api-ln8vf deletion completed in 6.210026527s

• [SLOW TEST:17.757 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:41:34.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-fcd5ebe4-32d4-11ea-ac2d-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-fcd5ec8e-32d4-11ea-ac2d-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-fcd5ebe4-32d4-11ea-ac2d-0242ac110005
STEP: Updating configmap cm-test-opt-upd-fcd5ec8e-32d4-11ea-ac2d-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-fcd5ecc2-32d4-11ea-ac2d-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:42:54.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cbmdz" for this suite.
Jan 9 11:43:20.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:43:20.895: INFO: namespace: e2e-tests-projected-cbmdz, resource: bindings, ignored listing per whitelist
Jan 9 11:43:20.967: INFO: namespace e2e-tests-projected-cbmdz deletion completed in 26.18867713s

• [SLOW TEST:106.665 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:43:20.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-3c6ec848-32d5-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 9 11:43:21.183: INFO: Waiting up to 5m0s for pod "pod-configmaps-3c708b4d-32d5-11ea-ac2d-0242ac110005" in namespace "e2e-tests-configmap-vrng8" to be "success or failure"
Jan 9 11:43:21.205: INFO: Pod "pod-configmaps-3c708b4d-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.642675ms
Jan 9 11:43:23.591: INFO: Pod "pod-configmaps-3c708b4d-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.407607213s
Jan 9 11:43:25.605: INFO: Pod "pod-configmaps-3c708b4d-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.421283945s
Jan 9 11:43:27.677: INFO: Pod "pod-configmaps-3c708b4d-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.493481945s
Jan 9 11:43:30.122: INFO: Pod "pod-configmaps-3c708b4d-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.938797458s
Jan 9 11:43:32.147: INFO: Pod "pod-configmaps-3c708b4d-32d5-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.963364974s
STEP: Saw pod success
Jan 9 11:43:32.147: INFO: Pod "pod-configmaps-3c708b4d-32d5-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:43:32.163: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3c708b4d-32d5-11ea-ac2d-0242ac110005 container configmap-volume-test:
STEP: delete the pod
Jan 9 11:43:32.270: INFO: Waiting for pod pod-configmaps-3c708b4d-32d5-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:43:32.283: INFO: Pod pod-configmaps-3c708b4d-32d5-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:43:32.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vrng8" for this suite.
Jan 9 11:43:38.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:43:38.723: INFO: namespace: e2e-tests-configmap-vrng8, resource: bindings, ignored listing per whitelist
Jan 9 11:43:38.799: INFO: namespace e2e-tests-configmap-vrng8 deletion completed in 6.337296092s

• [SLOW TEST:17.832 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:43:38.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan 9 11:43:39.300: INFO: Waiting up to 5m0s for pod "var-expansion-473d3a1e-32d5-11ea-ac2d-0242ac110005" in namespace "e2e-tests-var-expansion-fdldw" to be "success or failure"
Jan 9 11:43:39.313: INFO: Pod "var-expansion-473d3a1e-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.855319ms
Jan 9 11:43:41.325: INFO: Pod "var-expansion-473d3a1e-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025491733s
Jan 9 11:43:43.349: INFO: Pod "var-expansion-473d3a1e-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04899311s
Jan 9 11:43:45.423: INFO: Pod "var-expansion-473d3a1e-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123183064s
Jan 9 11:43:47.850: INFO: Pod "var-expansion-473d3a1e-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550335284s
Jan 9 11:43:50.524: INFO: Pod "var-expansion-473d3a1e-32d5-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.224373372s
STEP: Saw pod success
Jan 9 11:43:50.524: INFO: Pod "var-expansion-473d3a1e-32d5-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:43:50.537: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-473d3a1e-32d5-11ea-ac2d-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 9 11:43:50.903: INFO: Waiting for pod var-expansion-473d3a1e-32d5-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:43:50.925: INFO: Pod var-expansion-473d3a1e-32d5-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:43:50.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-fdldw" for this suite.
Jan 9 11:43:57.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:43:57.073: INFO: namespace: e2e-tests-var-expansion-fdldw, resource: bindings, ignored listing per whitelist
Jan 9 11:43:57.224: INFO: namespace e2e-tests-var-expansion-fdldw deletion completed in 6.280088201s

• [SLOW TEST:18.424 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:43:57.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 9 11:43:57.986: INFO: Waiting up to 5m0s for pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-l4mz4" in namespace "e2e-tests-svcaccounts-glzvt" to be "success or failure"
Jan 9 11:43:57.996: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-l4mz4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.751971ms
Jan 9 11:44:00.015: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-l4mz4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028956832s
Jan 9 11:44:02.067: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-l4mz4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080987458s
Jan 9 11:44:04.263: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-l4mz4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276850486s
Jan 9 11:44:06.734: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-l4mz4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.748378774s
Jan 9 11:44:09.291: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-l4mz4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.30552908s
Jan 9 11:44:11.307: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-l4mz4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.320986284s
Jan 9 11:44:13.319: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-l4mz4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.33280835s
STEP: Saw pod success
Jan 9 11:44:13.319: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-l4mz4" satisfied condition "success or failure"
Jan 9 11:44:13.323: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-l4mz4 container token-test:
STEP: delete the pod
Jan 9 11:44:13.970: INFO: Waiting for pod pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-l4mz4 to disappear
Jan 9 11:44:14.010: INFO: Pod pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-l4mz4 no longer exists
STEP: Creating a pod to test consume service account root CA
Jan 9 11:44:14.166: INFO: Waiting up to 5m0s for pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-n5b2s" in namespace "e2e-tests-svcaccounts-glzvt" to be "success or failure"
Jan 9 11:44:14.186: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-n5b2s": Phase="Pending", Reason="", readiness=false. Elapsed: 20.72269ms
Jan 9 11:44:17.095: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-n5b2s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.929630505s
Jan 9 11:44:19.111: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-n5b2s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.944870511s
Jan 9 11:44:21.360: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-n5b2s": Phase="Pending", Reason="", readiness=false. Elapsed: 7.19442671s
Jan 9 11:44:23.645: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-n5b2s": Phase="Pending", Reason="", readiness=false. Elapsed: 9.479236631s
Jan 9 11:44:25.656: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-n5b2s": Phase="Pending", Reason="", readiness=false. Elapsed: 11.490560638s
Jan 9 11:44:27.778: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-n5b2s": Phase="Pending", Reason="", readiness=false. Elapsed: 13.612675086s
Jan 9 11:44:29.794: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-n5b2s": Phase="Pending", Reason="", readiness=false. Elapsed: 15.628570574s
Jan 9 11:44:31.810: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-n5b2s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.644589387s
STEP: Saw pod success
Jan 9 11:44:31.810: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-n5b2s" satisfied condition "success or failure"
Jan 9 11:44:31.828: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-n5b2s container root-ca-test:
STEP: delete the pod
Jan 9 11:44:32.006: INFO: Waiting for pod pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-n5b2s to disappear
Jan 9 11:44:32.028: INFO: Pod pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-n5b2s no longer exists
STEP: Creating a pod to test consume service account namespace
Jan 9 11:44:32.099: INFO: Waiting up to 5m0s for pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-4gmbc" in namespace "e2e-tests-svcaccounts-glzvt" to be "success or failure"
Jan 9 11:44:32.189: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-4gmbc": Phase="Pending", Reason="", readiness=false. Elapsed: 89.93466ms
Jan 9 11:44:34.202: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-4gmbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103504896s
Jan 9 11:44:36.215: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-4gmbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116508609s
Jan 9 11:44:38.228: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-4gmbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129837098s
Jan 9 11:44:40.703: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-4gmbc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.604814213s
Jan 9 11:44:42.723: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-4gmbc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.624837418s
Jan 9 11:44:44.745: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-4gmbc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.646121804s
Jan 9 11:44:46.767: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-4gmbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.668753068s
STEP: Saw pod success
Jan 9 11:44:46.768: INFO: Pod "pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-4gmbc" satisfied condition "success or failure"
Jan 9 11:44:46.775: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-4gmbc container namespace-test:
STEP: delete the pod
Jan 9 11:44:46.898: INFO: Waiting for pod pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-4gmbc to disappear
Jan 9 11:44:46.907: INFO: Pod pod-service-account-525ee384-32d5-11ea-ac2d-0242ac110005-4gmbc no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:44:46.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-glzvt" for this suite.
Jan 9 11:44:55.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:44:55.088: INFO: namespace: e2e-tests-svcaccounts-glzvt, resource: bindings, ignored listing per whitelist
Jan 9 11:44:55.126: INFO: namespace e2e-tests-svcaccounts-glzvt deletion completed in 8.21025802s

• [SLOW TEST:57.902 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:44:55.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan 9 11:44:55.326: INFO: Waiting up to 5m0s for pod "var-expansion-748e7bd7-32d5-11ea-ac2d-0242ac110005" in namespace "e2e-tests-var-expansion-q58hw" to be "success or failure"
Jan 9 11:44:55.350: INFO: Pod "var-expansion-748e7bd7-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.850835ms
Jan 9 11:44:57.359: INFO: Pod "var-expansion-748e7bd7-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033438677s
Jan 9 11:44:59.372: INFO: Pod "var-expansion-748e7bd7-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045702019s
Jan 9 11:45:01.390: INFO: Pod "var-expansion-748e7bd7-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064433155s
Jan 9 11:45:03.419: INFO: Pod "var-expansion-748e7bd7-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092757121s
Jan 9 11:45:05.437: INFO: Pod "var-expansion-748e7bd7-32d5-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110996168s
STEP: Saw pod success
Jan 9 11:45:05.437: INFO: Pod "var-expansion-748e7bd7-32d5-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:45:05.441: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-748e7bd7-32d5-11ea-ac2d-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 9 11:45:05.552: INFO: Waiting for pod var-expansion-748e7bd7-32d5-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:45:05.630: INFO: Pod var-expansion-748e7bd7-32d5-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:45:05.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-q58hw" for this suite.
Jan 9 11:45:11.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:45:11.983: INFO: namespace: e2e-tests-var-expansion-q58hw, resource: bindings, ignored listing per whitelist Jan 9 11:45:12.012: INFO: namespace e2e-tests-var-expansion-q58hw deletion completed in 6.294445508s • [SLOW TEST:16.885 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:45:12.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-7e9fcf2e-32d5-11ea-ac2d-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 9 11:45:12.222: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ea1074f-32d5-11ea-ac2d-0242ac110005" in namespace "e2e-tests-configmap-wgchf" to be "success or failure" Jan 9 11:45:12.234: INFO: Pod "pod-configmaps-7ea1074f-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.944004ms Jan 9 11:45:14.263: INFO: Pod "pod-configmaps-7ea1074f-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040586207s Jan 9 11:45:16.284: INFO: Pod "pod-configmaps-7ea1074f-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061127302s Jan 9 11:45:18.296: INFO: Pod "pod-configmaps-7ea1074f-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073557957s Jan 9 11:45:20.315: INFO: Pod "pod-configmaps-7ea1074f-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092171611s Jan 9 11:45:22.327: INFO: Pod "pod-configmaps-7ea1074f-32d5-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104427745s STEP: Saw pod success Jan 9 11:45:22.327: INFO: Pod "pod-configmaps-7ea1074f-32d5-11ea-ac2d-0242ac110005" satisfied condition "success or failure" Jan 9 11:45:22.331: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7ea1074f-32d5-11ea-ac2d-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 9 11:45:22.608: INFO: Waiting for pod pod-configmaps-7ea1074f-32d5-11ea-ac2d-0242ac110005 to disappear Jan 9 11:45:22.631: INFO: Pod pod-configmaps-7ea1074f-32d5-11ea-ac2d-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:45:22.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-wgchf" for this suite. 
Jan 9 11:45:29.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:45:29.653: INFO: namespace: e2e-tests-configmap-wgchf, resource: bindings, ignored listing per whitelist Jan 9 11:45:29.811: INFO: namespace e2e-tests-configmap-wgchf deletion completed in 7.163666206s • [SLOW TEST:17.799 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:45:29.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Jan 9 11:45:30.071: INFO: Waiting up to 5m0s for pod "var-expansion-89425ebe-32d5-11ea-ac2d-0242ac110005" in namespace "e2e-tests-var-expansion-gjrrz" to be "success or failure" Jan 9 11:45:30.096: INFO: Pod "var-expansion-89425ebe-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.387782ms Jan 9 11:45:32.123: INFO: Pod "var-expansion-89425ebe-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052122043s Jan 9 11:45:34.151: INFO: Pod "var-expansion-89425ebe-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08029227s Jan 9 11:45:36.166: INFO: Pod "var-expansion-89425ebe-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095571251s Jan 9 11:45:38.506: INFO: Pod "var-expansion-89425ebe-32d5-11ea-ac2d-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.435204934s Jan 9 11:45:40.532: INFO: Pod "var-expansion-89425ebe-32d5-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.461083805s STEP: Saw pod success Jan 9 11:45:40.532: INFO: Pod "var-expansion-89425ebe-32d5-11ea-ac2d-0242ac110005" satisfied condition "success or failure" Jan 9 11:45:40.543: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-89425ebe-32d5-11ea-ac2d-0242ac110005 container dapi-container: STEP: delete the pod Jan 9 11:45:40.933: INFO: Waiting for pod var-expansion-89425ebe-32d5-11ea-ac2d-0242ac110005 to disappear Jan 9 11:45:40.948: INFO: Pod var-expansion-89425ebe-32d5-11ea-ac2d-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:45:40.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-gjrrz" for this suite. 
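The two Variable Expansion specs in this run ("substituting values in a container's args" and "composing env vars into new env vars") both exercise the $(VAR) reference syntax that Kubernetes expands in container commands, args, and env values. As a rough illustrative sketch of that substitution rule (my own simplification for readers of this log, not the kubelet's actual implementation): $(NAME) resolves against previously defined env vars, $$ escapes a literal dollar sign, and undefined references are left untouched.

```python
def expand(s, env):
    """Sketch of Kubernetes-style $(VAR) expansion (illustrative, not kubelet code).

    - $(NAME) is replaced when NAME is defined in env.
    - $$ escapes a dollar sign, so $$(NAME) yields a literal $(NAME).
    - References to undefined names are left as-is.
    """
    out = []
    i = 0
    while i < len(s):
        if s.startswith("$$", i):
            out.append("$")          # escaped dollar: emit one literal $
            i += 2
        elif s.startswith("$(", i):
            end = s.find(")", i)
            if end == -1:            # unterminated reference: copy the rest verbatim
                out.append(s[i:])
                break
            name = s[i + 2:end]
            if name in env:
                out.append(env[name])        # defined: substitute the value
            else:
                out.append(s[i:end + 1])     # undefined: keep $(NAME) literally
            i = end + 1
        else:
            out.append(s[i])
            i += 1
    return "".join(out)
```

Under this rule an env var defined as FOO=$(BAR)-suffix composes BAR's value into FOO, which is the behavior the "composing env vars" spec verifies inside its test pod.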
Jan 9 11:45:46.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:45:47.204: INFO: namespace: e2e-tests-var-expansion-gjrrz, resource: bindings, ignored listing per whitelist Jan 9 11:45:47.209: INFO: namespace e2e-tests-var-expansion-gjrrz deletion completed in 6.249161285s • [SLOW TEST:17.397 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:45:47.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jan 9 11:45:47.558: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 9 11:45:47.569: INFO: Waiting for terminating namespaces to be deleted... 
Jan 9 11:45:47.574: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 9 11:45:47.605: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 9 11:45:47.605: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 9 11:45:47.605: INFO: Container coredns ready: true, restart count 0
Jan 9 11:45:47.605: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 9 11:45:47.605: INFO: Container kube-proxy ready: true, restart count 0
Jan 9 11:45:47.605: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 9 11:45:47.605: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 9 11:45:47.605: INFO: Container weave ready: true, restart count 0
Jan 9 11:45:47.605: INFO: Container weave-npc ready: true, restart count 0
Jan 9 11:45:47.605: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 9 11:45:47.605: INFO: Container coredns ready: true, restart count 0
Jan 9 11:45:47.605: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Jan 9 11:45:47.605: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan 9 11:45:47.876: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 9 11:45:47.876: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 9 11:45:47.876: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 9 11:45:47.876: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan 9 11:45:47.876: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan 9 11:45:47.876: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 9 11:45:47.876: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 9 11:45:47.876: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-93e2e840-32d5-11ea-ac2d-0242ac110005.15e8355e32ae5a40], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-b2xwh/filler-pod-93e2e840-32d5-11ea-ac2d-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: Type = [Normal], Name = [filler-pod-93e2e840-32d5-11ea-ac2d-0242ac110005.15e8355f788f7099], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-93e2e840-32d5-11ea-ac2d-0242ac110005.15e83560042560c2], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-93e2e840-32d5-11ea-ac2d-0242ac110005.15e835603b5201bf], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15e8356089d2bf77], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
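The FailedScheduling event above is the expected outcome of this spec: the filler pod is sized to consume most of the node's remaining allocatable CPU, so the additional pod's request can no longer fit. The predicate being validated reduces to a sum-of-requests check, sketched here as an illustration in millicores (not the scheduler's actual code):

```python
def node_fits_pod(allocatable_cpu_m, existing_requests_m, new_request_m):
    """Return True if the new pod's CPU request fits on the node.

    Illustrative sketch of the CPU resource-fit predicate: a node can
    admit a pod only when the sum of the CPU requests of the pods already
    bound to it, plus the new pod's request, stays within the node's
    allocatable CPU. All quantities are in millicores.
    """
    return sum(existing_requests_m) + new_request_m <= allocatable_cpu_m
```

From the log above, the system pods request 100+100+0+250+200+0+100+20 = 770m in total; the test then adds a filler pod large enough that one more pod pushes the sum past allocatable, producing "0/1 nodes are available: 1 Insufficient cpu." (The allocatable figure itself is not printed in this log.)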
STEP: removing the label node off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:45:59.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-b2xwh" for this suite. Jan 9 11:46:09.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:46:10.400: INFO: namespace: e2e-tests-sched-pred-b2xwh, resource: bindings, ignored listing per whitelist Jan 9 11:46:10.518: INFO: namespace e2e-tests-sched-pred-b2xwh deletion completed in 11.125823763s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:23.310 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:46:10.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jan 9 11:46:10.689: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 9 11:46:15.707: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:46:15.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-hmgmj" for this suite. Jan 9 11:46:24.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:46:24.859: INFO: namespace: e2e-tests-replication-controller-hmgmj, resource: bindings, ignored listing per whitelist Jan 9 11:46:24.893: INFO: namespace e2e-tests-replication-controller-hmgmj deletion completed in 8.979536914s • [SLOW TEST:14.374 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:46:24.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 9 11:46:26.206: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aaabe2c8-32d5-11ea-ac2d-0242ac110005" in namespace "e2e-tests-downward-api-wlqq4" to be "success or failure" Jan 9 11:46:26.257: INFO: Pod "downwardapi-volume-aaabe2c8-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 50.557254ms Jan 9 11:46:28.270: INFO: Pod "downwardapi-volume-aaabe2c8-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064154589s Jan 9 11:46:30.285: INFO: Pod "downwardapi-volume-aaabe2c8-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079079483s Jan 9 11:46:32.529: INFO: Pod "downwardapi-volume-aaabe2c8-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323049354s Jan 9 11:46:34.549: INFO: Pod "downwardapi-volume-aaabe2c8-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.342863904s Jan 9 11:46:36.564: INFO: Pod "downwardapi-volume-aaabe2c8-32d5-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.358225377s STEP: Saw pod success Jan 9 11:46:36.564: INFO: Pod "downwardapi-volume-aaabe2c8-32d5-11ea-ac2d-0242ac110005" satisfied condition "success or failure" Jan 9 11:46:36.571: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-aaabe2c8-32d5-11ea-ac2d-0242ac110005 container client-container: STEP: delete the pod Jan 9 11:46:36.618: INFO: Waiting for pod downwardapi-volume-aaabe2c8-32d5-11ea-ac2d-0242ac110005 to disappear Jan 9 11:46:36.626: INFO: Pod downwardapi-volume-aaabe2c8-32d5-11ea-ac2d-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:46:36.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wlqq4" for this suite. Jan 9 11:46:42.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:46:42.696: INFO: namespace: e2e-tests-downward-api-wlqq4, resource: bindings, ignored listing per whitelist Jan 9 11:46:42.840: INFO: namespace e2e-tests-downward-api-wlqq4 deletion completed in 6.203561488s • [SLOW TEST:17.946 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:46:42.840: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 9 11:46:43.108: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b4cb763a-32d5-11ea-ac2d-0242ac110005" in namespace "e2e-tests-downward-api-r5d7m" to be "success or failure" Jan 9 11:46:43.184: INFO: Pod "downwardapi-volume-b4cb763a-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 76.309374ms Jan 9 11:46:45.198: INFO: Pod "downwardapi-volume-b4cb763a-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090205115s Jan 9 11:46:47.210: INFO: Pod "downwardapi-volume-b4cb763a-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101670268s Jan 9 11:46:49.228: INFO: Pod "downwardapi-volume-b4cb763a-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12008284s Jan 9 11:46:51.252: INFO: Pod "downwardapi-volume-b4cb763a-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.143672152s Jan 9 11:46:53.321: INFO: Pod "downwardapi-volume-b4cb763a-32d5-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.21290607s STEP: Saw pod success Jan 9 11:46:53.321: INFO: Pod "downwardapi-volume-b4cb763a-32d5-11ea-ac2d-0242ac110005" satisfied condition "success or failure" Jan 9 11:46:53.330: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b4cb763a-32d5-11ea-ac2d-0242ac110005 container client-container: STEP: delete the pod Jan 9 11:46:53.528: INFO: Waiting for pod downwardapi-volume-b4cb763a-32d5-11ea-ac2d-0242ac110005 to disappear Jan 9 11:46:53.535: INFO: Pod downwardapi-volume-b4cb763a-32d5-11ea-ac2d-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:46:53.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-r5d7m" for this suite. Jan 9 11:46:59.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:46:59.852: INFO: namespace: e2e-tests-downward-api-r5d7m, resource: bindings, ignored listing per whitelist Jan 9 11:46:59.882: INFO: namespace e2e-tests-downward-api-r5d7m deletion completed in 6.337939095s • [SLOW TEST:17.043 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 
11:46:59.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-bee77bbc-32d5-11ea-ac2d-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 9 11:47:00.065: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bee86bcb-32d5-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-5d778" to be "success or failure" Jan 9 11:47:00.087: INFO: Pod "pod-projected-configmaps-bee86bcb-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.230286ms Jan 9 11:47:02.332: INFO: Pod "pod-projected-configmaps-bee86bcb-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.266964672s Jan 9 11:47:04.355: INFO: Pod "pod-projected-configmaps-bee86bcb-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290555564s Jan 9 11:47:06.372: INFO: Pod "pod-projected-configmaps-bee86bcb-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.307589163s Jan 9 11:47:08.398: INFO: Pod "pod-projected-configmaps-bee86bcb-32d5-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.333318911s Jan 9 11:47:10.429: INFO: Pod "pod-projected-configmaps-bee86bcb-32d5-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.364453916s STEP: Saw pod success Jan 9 11:47:10.429: INFO: Pod "pod-projected-configmaps-bee86bcb-32d5-11ea-ac2d-0242ac110005" satisfied condition "success or failure" Jan 9 11:47:10.436: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-bee86bcb-32d5-11ea-ac2d-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Jan 9 11:47:10.561: INFO: Waiting for pod pod-projected-configmaps-bee86bcb-32d5-11ea-ac2d-0242ac110005 to disappear Jan 9 11:47:10.577: INFO: Pod pod-projected-configmaps-bee86bcb-32d5-11ea-ac2d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:47:10.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5d778" for this suite. Jan 9 11:47:16.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:47:16.947: INFO: namespace: e2e-tests-projected-5d778, resource: bindings, ignored listing per whitelist Jan 9 11:47:16.969: INFO: namespace e2e-tests-projected-5d778 deletion completed in 6.379201209s • [SLOW TEST:17.086 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:47:16.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:47:27.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-cvmhq" for this suite. Jan 9 11:48:13.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:48:13.565: INFO: namespace: e2e-tests-kubelet-test-cvmhq, resource: bindings, ignored listing per whitelist Jan 9 11:48:13.725: INFO: namespace e2e-tests-kubelet-test-cvmhq deletion completed in 46.346407433s • [SLOW TEST:56.755 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:48:13.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-vprz8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-vprz8 to expose endpoints map[]
Jan 9 11:48:14.073: INFO: Get endpoints failed (23.930797ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 9 11:48:15.087: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-vprz8 exposes endpoints map[] (1.038466638s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-vprz8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-vprz8 to expose endpoints map[pod1:[100]]
Jan 9 11:48:19.737: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.607452033s elapsed, will retry)
Jan 9 11:48:25.828: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-vprz8 exposes endpoints map[pod1:[100]] (10.698041484s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-vprz8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-vprz8 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 9 11:48:30.905: INFO: Unexpected endpoints: found map[eba4d333-32d5-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.038985178s elapsed, will retry)
Jan 9 11:48:35.382: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-vprz8 exposes endpoints map[pod1:[100] pod2:[101]] (9.515878281s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-vprz8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-vprz8 to expose endpoints map[pod2:[101]]
Jan 9 11:48:36.609: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-vprz8 exposes endpoints map[pod2:[101]] (1.172457988s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-vprz8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-vprz8 to expose endpoints map[]
Jan 9 11:48:37.671: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-vprz8 exposes endpoints map[] (1.040668816s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:48:37.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-vprz8" for this suite.
Jan 9 11:49:02.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:49:02.280: INFO: namespace: e2e-tests-services-vprz8, resource: bindings, ignored listing per whitelist
Jan 9 11:49:02.291: INFO: namespace e2e-tests-services-vprz8 deletion completed in 24.31528988s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:48.566 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:49:02.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 9 11:49:02.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-59gxk'
Jan 9 11:49:04.505: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 9 11:49:04.505: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 9 11:49:04.573: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 9 11:49:04.661: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 9 11:49:04.742: INFO: scanned /root for discovery docs:
Jan 9 11:49:04.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-59gxk'
Jan 9 11:49:31.891: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 9 11:49:31.891: INFO: stdout: "Created e2e-test-nginx-rc-98a103b114b75f603c786fce08505e7e\nScaling up e2e-test-nginx-rc-98a103b114b75f603c786fce08505e7e from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-98a103b114b75f603c786fce08505e7e up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-98a103b114b75f603c786fce08505e7e to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 9 11:49:31.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-59gxk'
Jan 9 11:49:32.167: INFO: stderr: ""
Jan 9 11:49:32.168: INFO: stdout: "e2e-test-nginx-rc-98a103b114b75f603c786fce08505e7e-2rr6h e2e-test-nginx-rc-pxtr4 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 9 11:49:37.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-59gxk'
Jan 9 11:49:37.417: INFO: stderr: ""
Jan 9 11:49:37.417: INFO: stdout: "e2e-test-nginx-rc-98a103b114b75f603c786fce08505e7e-2rr6h "
Jan 9 11:49:37.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-98a103b114b75f603c786fce08505e7e-2rr6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-59gxk'
Jan 9 11:49:37.564: INFO: stderr: ""
Jan 9 11:49:37.564: INFO: stdout: "true"
Jan 9 11:49:37.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-98a103b114b75f603c786fce08505e7e-2rr6h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-59gxk'
Jan 9 11:49:37.783: INFO: stderr: ""
Jan 9 11:49:37.783: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 9 11:49:37.783: INFO: e2e-test-nginx-rc-98a103b114b75f603c786fce08505e7e-2rr6h is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan 9 11:49:37.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-59gxk'
Jan 9 11:49:37.934: INFO: stderr: ""
Jan 9 11:49:37.934: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:49:37.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-59gxk" for this suite.
Jan 9 11:50:00.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:50:00.264: INFO: namespace: e2e-tests-kubectl-59gxk, resource: bindings, ignored listing per whitelist
Jan 9 11:50:00.478: INFO: namespace e2e-tests-kubectl-59gxk deletion completed in 22.536546354s

• [SLOW TEST:58.187 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:50:00.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 9 11:50:00.794: INFO: Creating ReplicaSet my-hostname-basic-2aa32756-32d6-11ea-ac2d-0242ac110005
Jan 9 11:50:01.078: INFO: Pod name my-hostname-basic-2aa32756-32d6-11ea-ac2d-0242ac110005: Found 0 pods out of 1
Jan 9 11:50:06.175: INFO: Pod name my-hostname-basic-2aa32756-32d6-11ea-ac2d-0242ac110005: Found 1 pods out of 1
Jan 9 11:50:06.175: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-2aa32756-32d6-11ea-ac2d-0242ac110005" is running
Jan 9 11:50:12.206: INFO: Pod "my-hostname-basic-2aa32756-32d6-11ea-ac2d-0242ac110005-n7t8f" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 11:50:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 11:50:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2aa32756-32d6-11ea-ac2d-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 11:50:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2aa32756-32d6-11ea-ac2d-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 11:50:01 +0000 UTC Reason: Message:}])
Jan 9 11:50:12.207: INFO: Trying to dial the pod
Jan 9 11:50:17.257: INFO: Controller my-hostname-basic-2aa32756-32d6-11ea-ac2d-0242ac110005: Got expected result from replica 1 [my-hostname-basic-2aa32756-32d6-11ea-ac2d-0242ac110005-n7t8f]: "my-hostname-basic-2aa32756-32d6-11ea-ac2d-0242ac110005-n7t8f", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:50:17.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-cx9vr" for this suite.
Jan 9 11:50:25.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:50:26.344: INFO: namespace: e2e-tests-replicaset-cx9vr, resource: bindings, ignored listing per whitelist
Jan 9 11:50:26.447: INFO: namespace e2e-tests-replicaset-cx9vr deletion completed in 9.182062304s

• [SLOW TEST:25.968 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:50:26.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 9 11:50:27.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:50:37.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ktp9l" for this suite.
Jan 9 11:51:19.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:51:19.949: INFO: namespace: e2e-tests-pods-ktp9l, resource: bindings, ignored listing per whitelist
Jan 9 11:51:19.965: INFO: namespace e2e-tests-pods-ktp9l deletion completed in 42.246743031s

• [SLOW TEST:53.518 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:51:19.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-59f22b82-32d6-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 9 11:51:20.191: INFO: Waiting up to 5m0s for pod "pod-secrets-59f3a0a6-32d6-11ea-ac2d-0242ac110005" in namespace "e2e-tests-secrets-x27tz" to be "success or failure"
Jan 9 11:51:20.216: INFO: Pod "pod-secrets-59f3a0a6-32d6-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.902257ms
Jan 9 11:51:22.232: INFO: Pod "pod-secrets-59f3a0a6-32d6-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04112294s
Jan 9 11:51:24.254: INFO: Pod "pod-secrets-59f3a0a6-32d6-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062950929s
Jan 9 11:51:26.279: INFO: Pod "pod-secrets-59f3a0a6-32d6-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087981027s
Jan 9 11:51:28.787: INFO: Pod "pod-secrets-59f3a0a6-32d6-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.595643801s
Jan 9 11:51:30.812: INFO: Pod "pod-secrets-59f3a0a6-32d6-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.620946409s
STEP: Saw pod success
Jan 9 11:51:30.812: INFO: Pod "pod-secrets-59f3a0a6-32d6-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:51:30.819: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-59f3a0a6-32d6-11ea-ac2d-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 9 11:51:31.262: INFO: Waiting for pod pod-secrets-59f3a0a6-32d6-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:51:31.277: INFO: Pod pod-secrets-59f3a0a6-32d6-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:51:31.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-x27tz" for this suite.
Jan 9 11:51:37.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:51:37.475: INFO: namespace: e2e-tests-secrets-x27tz, resource: bindings, ignored listing per whitelist
Jan 9 11:51:37.495: INFO: namespace e2e-tests-secrets-x27tz deletion completed in 6.201831767s

• [SLOW TEST:17.529 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:51:37.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan 9 11:51:37.849: INFO: Waiting up to 5m0s for pod "client-containers-64718b78-32d6-11ea-ac2d-0242ac110005" in namespace "e2e-tests-containers-9gb7h" to be "success or failure"
Jan 9 11:51:37.869: INFO: Pod "client-containers-64718b78-32d6-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.858592ms
Jan 9 11:51:39.884: INFO: Pod "client-containers-64718b78-32d6-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035011003s
Jan 9 11:51:41.896: INFO: Pod "client-containers-64718b78-32d6-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047694125s
Jan 9 11:51:44.021: INFO: Pod "client-containers-64718b78-32d6-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172486141s
Jan 9 11:51:46.241: INFO: Pod "client-containers-64718b78-32d6-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.392231225s
Jan 9 11:51:48.574: INFO: Pod "client-containers-64718b78-32d6-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.725347053s
STEP: Saw pod success
Jan 9 11:51:48.574: INFO: Pod "client-containers-64718b78-32d6-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan 9 11:51:48.604: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-64718b78-32d6-11ea-ac2d-0242ac110005 container test-container:
STEP: delete the pod
Jan 9 11:51:48.733: INFO: Waiting for pod client-containers-64718b78-32d6-11ea-ac2d-0242ac110005 to disappear
Jan 9 11:51:48.746: INFO: Pod client-containers-64718b78-32d6-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:51:48.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-9gb7h" for this suite.
Jan 9 11:51:56.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:51:56.853: INFO: namespace: e2e-tests-containers-9gb7h, resource: bindings, ignored listing per whitelist
Jan 9 11:51:56.955: INFO: namespace e2e-tests-containers-9gb7h deletion completed in 8.20197696s

• [SLOW TEST:19.460 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:51:56.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-dqbg2
Jan 9 11:52:07.203: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-dqbg2
STEP: checking the pod's current state and verifying that restartCount is present
Jan 9 11:52:07.220: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:56:08.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-dqbg2" for this suite.
Jan 9 11:56:16.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:56:16.447: INFO: namespace: e2e-tests-container-probe-dqbg2, resource: bindings, ignored listing per whitelist
Jan 9 11:56:16.673: INFO: namespace e2e-tests-container-probe-dqbg2 deletion completed in 8.389298798s

• [SLOW TEST:259.718 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:56:16.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 9 11:56:16.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fqt72'
Jan 9 11:56:17.320: INFO: stderr: ""
Jan 9 11:56:17.320: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 9 11:56:18.335: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 11:56:18.335: INFO: Found 0 / 1
Jan 9 11:56:19.342: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 11:56:19.342: INFO: Found 0 / 1
Jan 9 11:56:20.349: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 11:56:20.349: INFO: Found 0 / 1
Jan 9 11:56:21.354: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 11:56:21.354: INFO: Found 0 / 1
Jan 9 11:56:22.757: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 11:56:22.758: INFO: Found 0 / 1
Jan 9 11:56:23.592: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 11:56:23.593: INFO: Found 0 / 1
Jan 9 11:56:24.342: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 11:56:24.342: INFO: Found 0 / 1
Jan 9 11:56:25.338: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 11:56:25.338: INFO: Found 0 / 1
Jan 9 11:56:26.341: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 11:56:26.341: INFO: Found 1 / 1
Jan 9 11:56:26.341: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jan 9 11:56:26.356: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 11:56:26.356: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 9 11:56:26.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-hmhxs --namespace=e2e-tests-kubectl-fqt72 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 9 11:56:26.654: INFO: stderr: ""
Jan 9 11:56:26.654: INFO: stdout: "pod/redis-master-hmhxs patched\n"
STEP: checking annotations
Jan 9 11:56:26.793: INFO: Selector matched 1 pods for map[app:redis]
Jan 9 11:56:26.793: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:56:26.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fqt72" for this suite.
Jan 9 11:56:50.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:56:50.891: INFO: namespace: e2e-tests-kubectl-fqt72, resource: bindings, ignored listing per whitelist
Jan 9 11:56:50.981: INFO: namespace e2e-tests-kubectl-fqt72 deletion completed in 24.181314694s

• [SLOW TEST:34.307 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:56:50.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-rf8pc
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 9 11:56:51.233: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 9 11:57:27.426: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-rf8pc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 9 11:57:27.426: INFO: >>> kubeConfig: /root/.kube/config
I0109 11:57:27.518308 9 log.go:172] (0xc000d08580) (0xc001359ea0) Create stream
I0109 11:57:27.518359 9 log.go:172] (0xc000d08580) (0xc001359ea0) Stream added, broadcasting: 1
I0109 11:57:27.525040 9 log.go:172] (0xc000d08580) Reply frame received for 1
I0109 11:57:27.525087 9 log.go:172] (0xc000d08580) (0xc001969e00) Create stream
I0109 11:57:27.525103 9 log.go:172] (0xc000d08580) (0xc001969e00) Stream added, broadcasting: 3
I0109 11:57:27.526474 9 log.go:172] (0xc000d08580) Reply frame received for 3
I0109 11:57:27.526505 9 log.go:172] (0xc000d08580) (0xc001b6d900) Create stream
I0109 11:57:27.526518 9 log.go:172] (0xc000d08580) (0xc001b6d900) Stream added, broadcasting: 5
I0109 11:57:27.527871 9 log.go:172] (0xc000d08580) Reply frame received for 5
I0109 11:57:28.746705 9 log.go:172] (0xc000d08580) Data frame received for 3
I0109 11:57:28.746834 9 log.go:172] (0xc001969e00) (3) Data frame handling
I0109 11:57:28.746863 9 log.go:172] (0xc001969e00) (3) Data frame sent
I0109 11:57:28.918792 9 log.go:172] (0xc000d08580) (0xc001969e00) Stream removed, broadcasting: 3
I0109 11:57:28.919032 9 log.go:172] (0xc000d08580) Data frame received for 1
I0109 11:57:28.919070 9 log.go:172] (0xc001359ea0) (1) Data frame handling
I0109 11:57:28.919101 9 log.go:172] (0xc001359ea0) (1) Data frame sent
I0109 11:57:28.919128 9 log.go:172] (0xc000d08580) (0xc001359ea0) Stream removed, broadcasting: 1
I0109 11:57:28.919196 9 log.go:172] (0xc000d08580) (0xc001b6d900) Stream removed, broadcasting: 5
I0109 11:57:28.919260 9 log.go:172] (0xc000d08580) Go away received
I0109 11:57:28.919496 9 log.go:172] (0xc000d08580) (0xc001359ea0) Stream removed, broadcasting: 1
I0109 11:57:28.919541 9 log.go:172] (0xc000d08580) (0xc001969e00) Stream removed, broadcasting: 3
I0109 11:57:28.919560 9 log.go:172] (0xc000d08580) (0xc001b6d900) Stream removed, broadcasting: 5
Jan 9 11:57:28.919: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:57:28.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-rf8pc" for this suite.
Jan 9 11:57:52.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:57:53.015: INFO: namespace: e2e-tests-pod-network-test-rf8pc, resource: bindings, ignored listing per whitelist
Jan 9 11:57:53.142: INFO: namespace e2e-tests-pod-network-test-rf8pc deletion completed in 24.199969142s

• [SLOW TEST:62.161 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:57:53.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-443f6422-32d7-11ea-ac2d-0242ac110005
Jan 9 11:57:53.368: INFO: Pod name my-hostname-basic-443f6422-32d7-11ea-ac2d-0242ac110005: Found 0 pods out of 1
Jan 9 11:57:59.215: INFO: Pod name my-hostname-basic-443f6422-32d7-11ea-ac2d-0242ac110005: Found 1 pods out of 1
Jan 9 11:57:59.215: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-443f6422-32d7-11ea-ac2d-0242ac110005" are running
Jan 9 11:58:03.240: INFO: Pod "my-hostname-basic-443f6422-32d7-11ea-ac2d-0242ac110005-qn42p" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 11:57:53 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 11:57:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-443f6422-32d7-11ea-ac2d-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 11:57:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-443f6422-32d7-11ea-ac2d-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-09 11:57:53 +0000 UTC Reason: Message:}])
Jan 9 11:58:03.240: INFO: Trying to dial the pod
Jan 9 11:58:08.272: INFO: Controller my-hostname-basic-443f6422-32d7-11ea-ac2d-0242ac110005: Got expected result from replica 1 [my-hostname-basic-443f6422-32d7-11ea-ac2d-0242ac110005-qn42p]: "my-hostname-basic-443f6422-32d7-11ea-ac2d-0242ac110005-qn42p", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 9 11:58:08.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-zhg8j" for this suite.
Jan 9 11:58:16.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 9 11:58:16.494: INFO: namespace: e2e-tests-replication-controller-zhg8j, resource: bindings, ignored listing per whitelist
Jan 9 11:58:16.588: INFO: namespace e2e-tests-replication-controller-zhg8j deletion completed in 8.30773409s

• [SLOW TEST:23.446 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 9 11:58:16.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:58:28.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-58jt2" for this suite. Jan 9 11:59:10.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:59:10.410: INFO: namespace: e2e-tests-kubelet-test-58jt2, resource: bindings, ignored listing per whitelist Jan 9 11:59:10.421: INFO: namespace e2e-tests-kubelet-test-58jt2 deletion completed in 42.269489722s • [SLOW TEST:53.833 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 11:59:10.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 9 11:59:10.753: INFO: Waiting up to 5m0s for pod "downwardapi-volume-726ecb4d-32d7-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-zh2c8" to be "success or failure" Jan 9 11:59:10.767: INFO: Pod "downwardapi-volume-726ecb4d-32d7-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.790638ms Jan 9 11:59:12.790: INFO: Pod "downwardapi-volume-726ecb4d-32d7-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036080825s Jan 9 11:59:14.804: INFO: Pod "downwardapi-volume-726ecb4d-32d7-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050234435s Jan 9 11:59:16.845: INFO: Pod "downwardapi-volume-726ecb4d-32d7-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091189885s Jan 9 11:59:18.898: INFO: Pod "downwardapi-volume-726ecb4d-32d7-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.144159478s Jan 9 11:59:20.911: INFO: Pod "downwardapi-volume-726ecb4d-32d7-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.157584816s STEP: Saw pod success Jan 9 11:59:20.911: INFO: Pod "downwardapi-volume-726ecb4d-32d7-11ea-ac2d-0242ac110005" satisfied condition "success or failure" Jan 9 11:59:20.916: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-726ecb4d-32d7-11ea-ac2d-0242ac110005 container client-container: STEP: delete the pod Jan 9 11:59:21.003: INFO: Waiting for pod downwardapi-volume-726ecb4d-32d7-11ea-ac2d-0242ac110005 to disappear Jan 9 11:59:21.017: INFO: Pod downwardapi-volume-726ecb4d-32d7-11ea-ac2d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:59:21.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zh2c8" for this suite. Jan 9 11:59:28.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:59:28.311: INFO: namespace: e2e-tests-projected-zh2c8, resource: bindings, ignored listing per whitelist Jan 9 11:59:28.425: INFO: namespace e2e-tests-projected-zh2c8 deletion completed in 7.387029185s • [SLOW TEST:18.004 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client Jan 9 11:59:28.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 11:59:28.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-jlf98" for this suite. Jan 9 11:59:34.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 11:59:34.728: INFO: namespace: e2e-tests-services-jlf98, resource: bindings, ignored listing per whitelist Jan 9 11:59:34.760: INFO: namespace e2e-tests-services-jlf98 deletion completed in 6.135668095s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.334 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Jan 9 11:59:34.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 9 11:59:55.222: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 11:59:55.240: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 11:59:57.240: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 11:59:57.266: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 11:59:59.240: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 11:59:59.255: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 12:00:01.240: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 12:00:01.255: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 12:00:03.240: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 12:00:03.262: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 12:00:05.240: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 12:00:05.278: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 12:00:07.240: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 12:00:07.254: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 12:00:09.241: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 12:00:09.257: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 12:00:11.241: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear 
Jan 9 12:00:11.261: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 12:00:13.240: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 12:00:13.282: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 12:00:15.240: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 12:00:15.256: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 12:00:17.240: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 12:00:17.260: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 12:00:19.240: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 12:00:19.259: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 12:00:21.240: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 12:00:21.265: INFO: Pod pod-with-prestop-exec-hook still exists Jan 9 12:00:23.240: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 9 12:00:23.262: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 12:00:23.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-zb9k2" for this suite. 
Jan 9 12:00:47.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 12:00:47.569: INFO: namespace: e2e-tests-container-lifecycle-hook-zb9k2, resource: bindings, ignored listing per whitelist Jan 9 12:00:47.576: INFO: namespace e2e-tests-container-lifecycle-hook-zb9k2 deletion completed in 24.204402484s • [SLOW TEST:72.816 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 12:00:47.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0109 12:00:49.451287 9 metrics_grabber.go:81] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 9 12:00:49.451: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 12:00:49.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-r5tpx" for this suite. 
Jan 9 12:00:56.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 12:00:56.379: INFO: namespace: e2e-tests-gc-r5tpx, resource: bindings, ignored listing per whitelist Jan 9 12:00:56.401: INFO: namespace e2e-tests-gc-r5tpx deletion completed in 6.945933563s • [SLOW TEST:8.825 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 12:00:56.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-b193bf63-32d7-11ea-ac2d-0242ac110005 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-b193bf63-32d7-11ea-ac2d-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 12:01:08.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-projected-drzf7" for this suite. Jan 9 12:01:32.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 12:01:33.064: INFO: namespace: e2e-tests-projected-drzf7, resource: bindings, ignored listing per whitelist Jan 9 12:01:33.155: INFO: namespace e2e-tests-projected-drzf7 deletion completed in 24.302575549s • [SLOW TEST:36.754 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 12:01:33.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-j2np6 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-j2np6 STEP: Deleting pre-stop pod Jan 9 12:01:58.635: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 12:01:58.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-j2np6" for this suite. Jan 9 12:02:38.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 12:02:38.981: INFO: namespace: e2e-tests-prestop-j2np6, resource: bindings, ignored listing per whitelist Jan 9 12:02:39.060: INFO: namespace e2e-tests-prestop-j2np6 deletion completed in 40.225240288s • [SLOW TEST:65.904 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 12:02:39.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and 
Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-eec6acf7-32d7-11ea-ac2d-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 9 12:02:39.399: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eeca0ce8-32d7-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-wvpnl" to be "success or failure" Jan 9 12:02:39.416: INFO: Pod "pod-projected-configmaps-eeca0ce8-32d7-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.737926ms Jan 9 12:02:41.450: INFO: Pod "pod-projected-configmaps-eeca0ce8-32d7-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050418924s Jan 9 12:02:43.472: INFO: Pod "pod-projected-configmaps-eeca0ce8-32d7-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073214663s Jan 9 12:02:46.391: INFO: Pod "pod-projected-configmaps-eeca0ce8-32d7-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.991475463s Jan 9 12:02:48.417: INFO: Pod "pod-projected-configmaps-eeca0ce8-32d7-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.018080739s Jan 9 12:02:50.435: INFO: Pod "pod-projected-configmaps-eeca0ce8-32d7-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.035299872s STEP: Saw pod success Jan 9 12:02:50.435: INFO: Pod "pod-projected-configmaps-eeca0ce8-32d7-11ea-ac2d-0242ac110005" satisfied condition "success or failure" Jan 9 12:02:50.445: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-eeca0ce8-32d7-11ea-ac2d-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Jan 9 12:02:50.699: INFO: Waiting for pod pod-projected-configmaps-eeca0ce8-32d7-11ea-ac2d-0242ac110005 to disappear Jan 9 12:02:50.766: INFO: Pod pod-projected-configmaps-eeca0ce8-32d7-11ea-ac2d-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 12:02:50.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wvpnl" for this suite. Jan 9 12:02:56.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 12:02:56.975: INFO: namespace: e2e-tests-projected-wvpnl, resource: bindings, ignored listing per whitelist Jan 9 12:02:56.982: INFO: namespace e2e-tests-projected-wvpnl deletion completed in 6.197681189s • [SLOW TEST:17.922 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 12:02:56.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-f963a83b-32d7-11ea-ac2d-0242ac110005 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-f963a83b-32d7-11ea-ac2d-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 9 12:04:31.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-llgxw" for this suite. 
Jan 9 12:04:57.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 9 12:04:57.529: INFO: namespace: e2e-tests-configmap-llgxw, resource: bindings, ignored listing per whitelist Jan 9 12:04:57.615: INFO: namespace e2e-tests-configmap-llgxw deletion completed in 26.440755244s • [SLOW TEST:120.633 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 9 12:04:57.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-415127d8-32d8-11ea-ac2d-0242ac110005 STEP: Creating a pod to test consume secrets Jan 9 12:04:57.876: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-415244d3-32d8-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-4rscr" to be "success or failure" Jan 9 12:04:57.938: INFO: Pod "pod-projected-secrets-415244d3-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 61.628463ms
Jan  9 12:04:59.971: INFO: Pod "pod-projected-secrets-415244d3-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095161161s
Jan  9 12:05:02.172: INFO: Pod "pod-projected-secrets-415244d3-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295645015s
Jan  9 12:05:04.199: INFO: Pod "pod-projected-secrets-415244d3-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323333448s
Jan  9 12:05:06.212: INFO: Pod "pod-projected-secrets-415244d3-32d8-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.336258835s
STEP: Saw pod success
Jan  9 12:05:06.212: INFO: Pod "pod-projected-secrets-415244d3-32d8-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:05:06.215: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-415244d3-32d8-11ea-ac2d-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  9 12:05:06.387: INFO: Waiting for pod pod-projected-secrets-415244d3-32d8-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:05:06.396: INFO: Pod pod-projected-secrets-415244d3-32d8-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:05:06.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4rscr" for this suite.
Jan  9 12:05:12.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:05:12.844: INFO: namespace: e2e-tests-projected-4rscr, resource: bindings, ignored listing per whitelist
Jan  9 12:05:12.866: INFO: namespace e2e-tests-projected-4rscr deletion completed in 6.454294452s

• [SLOW TEST:15.250 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:05:12.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  9 12:05:13.098: INFO: Waiting up to 5m0s for pod "pod-4a6795f2-32d8-11ea-ac2d-0242ac110005" in namespace "e2e-tests-emptydir-tjw69" to be "success or failure"
Jan  9 12:05:13.103: INFO: Pod "pod-4a6795f2-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.782896ms
Jan  9 12:05:15.316: INFO: Pod "pod-4a6795f2-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217973646s
Jan  9 12:05:17.343: INFO: Pod "pod-4a6795f2-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245041749s
Jan  9 12:05:19.921: INFO: Pod "pod-4a6795f2-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.822822346s
Jan  9 12:05:22.036: INFO: Pod "pod-4a6795f2-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.937994292s
Jan  9 12:05:24.050: INFO: Pod "pod-4a6795f2-32d8-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.95229974s
STEP: Saw pod success
Jan  9 12:05:24.051: INFO: Pod "pod-4a6795f2-32d8-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:05:24.055: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4a6795f2-32d8-11ea-ac2d-0242ac110005 container test-container: 
STEP: delete the pod
Jan  9 12:05:24.673: INFO: Waiting for pod pod-4a6795f2-32d8-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:05:24.699: INFO: Pod pod-4a6795f2-32d8-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:05:24.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tjw69" for this suite.
Jan  9 12:05:32.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:05:32.916: INFO: namespace: e2e-tests-emptydir-tjw69, resource: bindings, ignored listing per whitelist
Jan  9 12:05:33.023: INFO: namespace e2e-tests-emptydir-tjw69 deletion completed in 8.312073475s

• [SLOW TEST:20.157 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:05:33.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  9 12:05:33.217: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56663d09-32d8-11ea-ac2d-0242ac110005" in namespace "e2e-tests-downward-api-dksxd" to be "success or failure"
Jan  9 12:05:33.228: INFO: Pod "downwardapi-volume-56663d09-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.282515ms
Jan  9 12:05:35.932: INFO: Pod "downwardapi-volume-56663d09-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.714733544s
Jan  9 12:05:37.951: INFO: Pod "downwardapi-volume-56663d09-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.733122422s
Jan  9 12:05:40.252: INFO: Pod "downwardapi-volume-56663d09-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.034248021s
Jan  9 12:05:42.337: INFO: Pod "downwardapi-volume-56663d09-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.119137928s
Jan  9 12:05:44.490: INFO: Pod "downwardapi-volume-56663d09-32d8-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.272933886s
STEP: Saw pod success
Jan  9 12:05:44.491: INFO: Pod "downwardapi-volume-56663d09-32d8-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:05:44.513: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-56663d09-32d8-11ea-ac2d-0242ac110005 container client-container: 
STEP: delete the pod
Jan  9 12:05:44.765: INFO: Waiting for pod downwardapi-volume-56663d09-32d8-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:05:44.777: INFO: Pod downwardapi-volume-56663d09-32d8-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:05:44.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dksxd" for this suite.
Jan  9 12:05:50.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:05:50.944: INFO: namespace: e2e-tests-downward-api-dksxd, resource: bindings, ignored listing per whitelist
Jan  9 12:05:50.974: INFO: namespace e2e-tests-downward-api-dksxd deletion completed in 6.189497395s

• [SLOW TEST:17.950 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Proxy 
  version v1 should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:05:50.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-8zhzt in namespace e2e-tests-proxy-tscqp
I0109 12:05:51.344038 9 runners.go:184] Created replication controller with name: proxy-service-8zhzt, namespace: e2e-tests-proxy-tscqp, replica count: 1
I0109 12:05:52.395569 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0109 12:05:53.395941 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0109 12:05:54.396343 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0109 12:05:55.397352 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0109 12:05:56.398097 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0109 12:05:57.398741 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0109 12:05:58.399219 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0109 12:05:59.399579 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0109 12:06:00.400048 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0109 12:06:01.400496 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0109 12:06:02.400937 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0109 12:06:03.401322 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0109 12:06:04.401827 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0109 12:06:05.402263 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0109 12:06:06.402756 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0109 12:06:07.403210 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0109 12:06:08.404159 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0109 12:06:09.404922 9 runners.go:184] proxy-service-8zhzt Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan  9 12:06:09.417: INFO: setup took 18.274049208s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan  9 12:06:09.466: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-tscqp/pods/proxy-service-8zhzt-cf4fj/proxy/: 
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  9 12:06:29.388: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 27.037676ms)
Jan  9 12:06:29.394: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.231197ms)
Jan  9 12:06:29.399: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.076387ms)
Jan  9 12:06:29.403: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.802048ms)
Jan  9 12:06:29.407: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.874456ms)
Jan  9 12:06:29.411: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.93472ms)
Jan  9 12:06:29.474: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 63.066894ms)
Jan  9 12:06:29.483: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.907006ms)
Jan  9 12:06:29.489: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.568357ms)
Jan  9 12:06:29.495: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.247095ms)
Jan  9 12:06:29.500: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.077167ms)
Jan  9 12:06:29.506: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.724732ms)
Jan  9 12:06:29.514: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.622295ms)
Jan  9 12:06:29.522: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.292886ms)
Jan  9 12:06:29.528: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.303392ms)
Jan  9 12:06:29.533: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.307982ms)
Jan  9 12:06:29.538: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.757883ms)
Jan  9 12:06:29.544: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.030958ms)
Jan  9 12:06:29.549: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.079058ms)
Jan  9 12:06:29.554: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.257038ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:06:29.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-wcrwz" for this suite.
Jan  9 12:06:35.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:06:35.721: INFO: namespace: e2e-tests-proxy-wcrwz, resource: bindings, ignored listing per whitelist
Jan  9 12:06:35.875: INFO: namespace e2e-tests-proxy-wcrwz deletion completed in 6.316763452s

• [SLOW TEST:6.717 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:06:35.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-8bfck/configmap-test-7be2fa5a-32d8-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  9 12:06:36.120: INFO: Waiting up to 5m0s for pod "pod-configmaps-7be420de-32d8-11ea-ac2d-0242ac110005" in namespace "e2e-tests-configmap-8bfck" to be "success or failure"
Jan  9 12:06:36.133: INFO: Pod "pod-configmaps-7be420de-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.719424ms
Jan  9 12:06:38.145: INFO: Pod "pod-configmaps-7be420de-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024667075s
Jan  9 12:06:40.160: INFO: Pod "pod-configmaps-7be420de-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040152958s
Jan  9 12:06:42.180: INFO: Pod "pod-configmaps-7be420de-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059723322s
Jan  9 12:06:44.327: INFO: Pod "pod-configmaps-7be420de-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207224346s
Jan  9 12:06:46.381: INFO: Pod "pod-configmaps-7be420de-32d8-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.261239256s
STEP: Saw pod success
Jan  9 12:06:46.381: INFO: Pod "pod-configmaps-7be420de-32d8-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:06:46.394: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7be420de-32d8-11ea-ac2d-0242ac110005 container env-test: 
STEP: delete the pod
Jan  9 12:06:46.762: INFO: Waiting for pod pod-configmaps-7be420de-32d8-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:06:46.777: INFO: Pod pod-configmaps-7be420de-32d8-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:06:46.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8bfck" for this suite.
Jan  9 12:06:52.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:06:52.968: INFO: namespace: e2e-tests-configmap-8bfck, resource: bindings, ignored listing per whitelist
Jan  9 12:06:53.104: INFO: namespace e2e-tests-configmap-8bfck deletion completed in 6.318410139s

• [SLOW TEST:17.229 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
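The ConfigMap test above creates a ConfigMap and a pod whose container (named `env-test` in the log) pulls a key into its environment, then waits for the pod to reach phase Succeeded before reading its logs. A minimal sketch of the kind of manifest this amounts to; the object names, image, and key/value pair are illustrative assumptions (the suite generates randomized names like `configmap-test-7be2fa5a-...`):

```yaml
# Sketch only: the real e2e test builds these objects in Go.
# Container name "env-test" matches the log; image and keys are assumed.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test        # real name is randomized
data:
  data-1: value-1             # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never        # pod runs once to completion
  containers:
  - name: env-test
    image: busybox            # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```

"Success or failure" in the log is the suite polling the pod phase until Succeeded (or Failed), which is why each test shows a series of `Phase="Pending"` lines before the log fetch.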
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:06:53.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan  9 12:06:53.330: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-2zp7v,SelfLink:/api/v1/namespaces/e2e-tests-watch-2zp7v/configmaps/e2e-watch-test-resource-version,UID:8623cbc1-32d8-11ea-a994-fa163e34d433,ResourceVersion:17697252,Generation:0,CreationTimestamp:2020-01-09 12:06:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  9 12:06:53.330: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-2zp7v,SelfLink:/api/v1/namespaces/e2e-tests-watch-2zp7v/configmaps/e2e-watch-test-resource-version,UID:8623cbc1-32d8-11ea-a994-fa163e34d433,ResourceVersion:17697253,Generation:0,CreationTimestamp:2020-01-09 12:06:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:06:53.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-2zp7v" for this suite.
Jan  9 12:06:59.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:06:59.431: INFO: namespace: e2e-tests-watch-2zp7v, resource: bindings, ignored listing per whitelist
Jan  9 12:06:59.537: INFO: namespace e2e-tests-watch-2zp7v deletion completed in 6.202607647s

• [SLOW TEST:6.432 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
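The MODIFIED and DELETED events above carry the full ConfigMap object. Rendered as a manifest, the watched object at `mutation: 2` (only fields the log actually shows) is equivalent to:

```yaml
# Reconstructed from the event payload logged above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-resource-version
  namespace: e2e-tests-watch-2zp7v
  labels:
    watch-this-configmap: from-resource-version
data:
  mutation: "2"
```

Because the watch is opened at the resourceVersion returned by the first update, only the second modification and the deletion are delivered, which is exactly the pair of events logged.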
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:06:59.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-89f3cce7-32d8-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  9 12:06:59.726: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-89f49b71-32d8-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-clpnk" to be "success or failure"
Jan  9 12:06:59.736: INFO: Pod "pod-projected-configmaps-89f49b71-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.25288ms
Jan  9 12:07:01.760: INFO: Pod "pod-projected-configmaps-89f49b71-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033282319s
Jan  9 12:07:03.783: INFO: Pod "pod-projected-configmaps-89f49b71-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056868879s
Jan  9 12:07:05.818: INFO: Pod "pod-projected-configmaps-89f49b71-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09114417s
Jan  9 12:07:07.834: INFO: Pod "pod-projected-configmaps-89f49b71-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107352571s
Jan  9 12:07:09.852: INFO: Pod "pod-projected-configmaps-89f49b71-32d8-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.126089635s
STEP: Saw pod success
Jan  9 12:07:09.853: INFO: Pod "pod-projected-configmaps-89f49b71-32d8-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:07:09.869: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-89f49b71-32d8-11ea-ac2d-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  9 12:07:10.042: INFO: Waiting for pod pod-projected-configmaps-89f49b71-32d8-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:07:10.068: INFO: Pod pod-projected-configmaps-89f49b71-32d8-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:07:10.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-clpnk" for this suite.
Jan  9 12:07:16.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:07:17.088: INFO: namespace: e2e-tests-projected-clpnk, resource: bindings, ignored listing per whitelist
Jan  9 12:07:17.129: INFO: namespace e2e-tests-projected-clpnk deletion completed in 6.385013912s

• [SLOW TEST:17.592 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
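The projected-ConfigMap test above mounts a ConfigMap through a `projected` volume and reads it back from a non-root container (named `projected-configmap-volume-test` in the log). A rough sketch of such a pod; the mount path, UID, and image are assumptions:

```yaml
# Sketch only: names, image, and mount path are assumed; the non-root
# requirement and container name come from the log above.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000           # any non-root UID; exact value is assumed
  containers:
  - name: projected-configmap-volume-test
    image: busybox            # assumed image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # real name is randomized
```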
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:07:17.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  9 12:07:17.334: INFO: Waiting up to 5m0s for pod "pod-9472e50b-32d8-11ea-ac2d-0242ac110005" in namespace "e2e-tests-emptydir-tcdjj" to be "success or failure"
Jan  9 12:07:17.355: INFO: Pod "pod-9472e50b-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.611656ms
Jan  9 12:07:19.369: INFO: Pod "pod-9472e50b-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035419506s
Jan  9 12:07:22.373: INFO: Pod "pod-9472e50b-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.038904631s
Jan  9 12:07:24.414: INFO: Pod "pod-9472e50b-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.080392826s
Jan  9 12:07:26.438: INFO: Pod "pod-9472e50b-32d8-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.103735696s
STEP: Saw pod success
Jan  9 12:07:26.438: INFO: Pod "pod-9472e50b-32d8-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:07:26.446: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9472e50b-32d8-11ea-ac2d-0242ac110005 container test-container: 
STEP: delete the pod
Jan  9 12:07:26.885: INFO: Waiting for pod pod-9472e50b-32d8-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:07:26.901: INFO: Pod pod-9472e50b-32d8-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:07:26.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tcdjj" for this suite.
Jan  9 12:07:33.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:07:33.103: INFO: namespace: e2e-tests-emptydir-tcdjj, resource: bindings, ignored listing per whitelist
Jan  9 12:07:33.176: INFO: namespace e2e-tests-emptydir-tcdjj deletion completed in 6.234472129s

• [SLOW TEST:16.045 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
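The (non-root,0666,default) variant above exercises an `emptyDir` on the node's default medium (disk, as opposed to the tmpfs variant earlier in this run) from a non-root user, checking file modes on the volume. A hedged sketch; the real test uses a dedicated mount-test image and generated names, so everything here except the container name `test-container` is an assumption:

```yaml
# Sketch only: image, command, and UID are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000           # non-root; exact UID is assumed
  containers:
  - name: test-container
    image: busybox            # assumed image
    command: ["sh", "-c", "echo hello > /test-volume/f; chmod 0666 /test-volume/f; stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}              # default medium: backed by node disk, not tmpfs
```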
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:07:33.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  9 12:07:33.401: INFO: Waiting up to 5m0s for pod "downward-api-9e06c8d1-32d8-11ea-ac2d-0242ac110005" in namespace "e2e-tests-downward-api-k4n6s" to be "success or failure"
Jan  9 12:07:33.406: INFO: Pod "downward-api-9e06c8d1-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.682894ms
Jan  9 12:07:35.729: INFO: Pod "downward-api-9e06c8d1-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327752908s
Jan  9 12:07:37.745: INFO: Pod "downward-api-9e06c8d1-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343690945s
Jan  9 12:07:40.060: INFO: Pod "downward-api-9e06c8d1-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.658126777s
Jan  9 12:07:42.088: INFO: Pod "downward-api-9e06c8d1-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.686778487s
Jan  9 12:07:44.099: INFO: Pod "downward-api-9e06c8d1-32d8-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.697714281s
STEP: Saw pod success
Jan  9 12:07:44.099: INFO: Pod "downward-api-9e06c8d1-32d8-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:07:44.104: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-9e06c8d1-32d8-11ea-ac2d-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  9 12:07:44.320: INFO: Waiting for pod downward-api-9e06c8d1-32d8-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:07:44.375: INFO: Pod downward-api-9e06c8d1-32d8-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:07:44.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-k4n6s" for this suite.
Jan  9 12:07:50.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:07:50.596: INFO: namespace: e2e-tests-downward-api-k4n6s, resource: bindings, ignored listing per whitelist
Jan  9 12:07:50.742: INFO: namespace e2e-tests-downward-api-k4n6s deletion completed in 6.358082597s

• [SLOW TEST:17.565 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
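The Downward API test above relies on a documented fallback: when a container declares no resource limits, `resourceFieldRef` values for `limits.cpu` and `limits.memory` resolve to the node's allocatable capacity. A sketch of such a pod; the container name `dapi-container` matches the log, while the pod name, image, and env-var names are assumptions:

```yaml
# Sketch only: no resources.limits are set, so the downward API
# reports node allocatable values instead.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox            # assumed image
    command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```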
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:07:50.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-a889b659-32d8-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  9 12:07:51.028: INFO: Waiting up to 5m0s for pod "pod-secrets-a88a8093-32d8-11ea-ac2d-0242ac110005" in namespace "e2e-tests-secrets-fpfnq" to be "success or failure"
Jan  9 12:07:51.041: INFO: Pod "pod-secrets-a88a8093-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.430524ms
Jan  9 12:07:53.060: INFO: Pod "pod-secrets-a88a8093-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031845087s
Jan  9 12:07:55.099: INFO: Pod "pod-secrets-a88a8093-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071107383s
Jan  9 12:07:57.713: INFO: Pod "pod-secrets-a88a8093-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.685066267s
Jan  9 12:08:00.003: INFO: Pod "pod-secrets-a88a8093-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.974238395s
Jan  9 12:08:02.195: INFO: Pod "pod-secrets-a88a8093-32d8-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.166490032s
STEP: Saw pod success
Jan  9 12:08:02.195: INFO: Pod "pod-secrets-a88a8093-32d8-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:08:02.239: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a88a8093-32d8-11ea-ac2d-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  9 12:08:02.358: INFO: Waiting for pod pod-secrets-a88a8093-32d8-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:08:02.364: INFO: Pod pod-secrets-a88a8093-32d8-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:08:02.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-fpfnq" for this suite.
Jan  9 12:08:10.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:08:10.491: INFO: namespace: e2e-tests-secrets-fpfnq, resource: bindings, ignored listing per whitelist
Jan  9 12:08:10.640: INFO: namespace e2e-tests-secrets-fpfnq deletion completed in 8.266462353s

• [SLOW TEST:19.898 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:08:10.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  9 12:08:10.926: INFO: Waiting up to 5m0s for pod "downward-api-b4656659-32d8-11ea-ac2d-0242ac110005" in namespace "e2e-tests-downward-api-9dq6r" to be "success or failure"
Jan  9 12:08:10.944: INFO: Pod "downward-api-b4656659-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.245101ms
Jan  9 12:08:12.995: INFO: Pod "downward-api-b4656659-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06949144s
Jan  9 12:08:15.012: INFO: Pod "downward-api-b4656659-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085827642s
Jan  9 12:08:17.030: INFO: Pod "downward-api-b4656659-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104322424s
Jan  9 12:08:19.046: INFO: Pod "downward-api-b4656659-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12076331s
Jan  9 12:08:21.082: INFO: Pod "downward-api-b4656659-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.155920557s
Jan  9 12:08:23.097: INFO: Pod "downward-api-b4656659-32d8-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.171506733s
STEP: Saw pod success
Jan  9 12:08:23.097: INFO: Pod "downward-api-b4656659-32d8-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:08:23.104: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-b4656659-32d8-11ea-ac2d-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  9 12:08:23.786: INFO: Waiting for pod downward-api-b4656659-32d8-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:08:23.822: INFO: Pod downward-api-b4656659-32d8-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:08:23.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9dq6r" for this suite.
Jan  9 12:08:29.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:08:30.171: INFO: namespace: e2e-tests-downward-api-9dq6r, resource: bindings, ignored listing per whitelist
Jan  9 12:08:30.234: INFO: namespace e2e-tests-downward-api-9dq6r deletion completed in 6.389858608s

• [SLOW TEST:19.594 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:08:30.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  9 12:08:30.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-77r7n'
Jan  9 12:08:32.836: INFO: stderr: ""
Jan  9 12:08:32.836: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  9 12:08:32.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-77r7n'
Jan  9 12:08:33.062: INFO: stderr: ""
Jan  9 12:08:33.062: INFO: stdout: "update-demo-nautilus-g4lzk update-demo-nautilus-ztj6l "
Jan  9 12:08:33.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g4lzk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-77r7n'
Jan  9 12:08:33.274: INFO: stderr: ""
Jan  9 12:08:33.274: INFO: stdout: ""
Jan  9 12:08:33.274: INFO: update-demo-nautilus-g4lzk is created but not running
Jan  9 12:08:38.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-77r7n'
Jan  9 12:08:38.781: INFO: stderr: ""
Jan  9 12:08:38.781: INFO: stdout: "update-demo-nautilus-g4lzk update-demo-nautilus-ztj6l "
Jan  9 12:08:38.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g4lzk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-77r7n'
Jan  9 12:08:39.419: INFO: stderr: ""
Jan  9 12:08:39.419: INFO: stdout: ""
Jan  9 12:08:39.419: INFO: update-demo-nautilus-g4lzk is created but not running
Jan  9 12:08:44.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-77r7n'
Jan  9 12:08:44.680: INFO: stderr: ""
Jan  9 12:08:44.681: INFO: stdout: "update-demo-nautilus-g4lzk update-demo-nautilus-ztj6l "
Jan  9 12:08:44.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g4lzk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-77r7n'
Jan  9 12:08:44.802: INFO: stderr: ""
Jan  9 12:08:44.802: INFO: stdout: "true"
Jan  9 12:08:44.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g4lzk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-77r7n'
Jan  9 12:08:44.961: INFO: stderr: ""
Jan  9 12:08:44.961: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  9 12:08:44.961: INFO: validating pod update-demo-nautilus-g4lzk
Jan  9 12:08:45.066: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  9 12:08:45.066: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  9 12:08:45.066: INFO: update-demo-nautilus-g4lzk is verified up and running
Jan  9 12:08:45.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztj6l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-77r7n'
Jan  9 12:08:45.168: INFO: stderr: ""
Jan  9 12:08:45.168: INFO: stdout: "true"
Jan  9 12:08:45.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztj6l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-77r7n'
Jan  9 12:08:45.285: INFO: stderr: ""
Jan  9 12:08:45.285: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  9 12:08:45.285: INFO: validating pod update-demo-nautilus-ztj6l
Jan  9 12:08:45.294: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  9 12:08:45.294: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  9 12:08:45.294: INFO: update-demo-nautilus-ztj6l is verified up and running
STEP: using delete to clean up resources
Jan  9 12:08:45.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-77r7n'
Jan  9 12:08:45.468: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  9 12:08:45.469: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  9 12:08:45.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-77r7n'
Jan  9 12:08:45.643: INFO: stderr: "No resources found.\n"
Jan  9 12:08:45.643: INFO: stdout: ""
Jan  9 12:08:45.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-77r7n -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  9 12:08:45.875: INFO: stderr: ""
Jan  9 12:08:45.875: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:08:45.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-77r7n" for this suite.
Jan  9 12:09:09.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:09:10.001: INFO: namespace: e2e-tests-kubectl-77r7n, resource: bindings, ignored listing per whitelist
Jan  9 12:09:10.067: INFO: namespace e2e-tests-kubectl-77r7n deletion completed in 24.179265446s

• [SLOW TEST:39.833 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:09:10.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-d7cf788d-32d8-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  9 12:09:10.349: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d7d0a34c-32d8-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-x9gpb" to be "success or failure"
Jan  9 12:09:10.358: INFO: Pod "pod-projected-secrets-d7d0a34c-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.152073ms
Jan  9 12:09:12.546: INFO: Pod "pod-projected-secrets-d7d0a34c-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197612331s
Jan  9 12:09:14.572: INFO: Pod "pod-projected-secrets-d7d0a34c-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22351564s
Jan  9 12:09:16.643: INFO: Pod "pod-projected-secrets-d7d0a34c-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.294183103s
Jan  9 12:09:18.681: INFO: Pod "pod-projected-secrets-d7d0a34c-32d8-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.332457654s
Jan  9 12:09:20.695: INFO: Pod "pod-projected-secrets-d7d0a34c-32d8-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.346021671s
STEP: Saw pod success
Jan  9 12:09:20.695: INFO: Pod "pod-projected-secrets-d7d0a34c-32d8-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:09:20.707: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-d7d0a34c-32d8-11ea-ac2d-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  9 12:09:23.906: INFO: Waiting for pod pod-projected-secrets-d7d0a34c-32d8-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:09:24.030: INFO: Pod pod-projected-secrets-d7d0a34c-32d8-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:09:24.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-x9gpb" for this suite.
Jan  9 12:09:30.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:09:30.215: INFO: namespace: e2e-tests-projected-x9gpb, resource: bindings, ignored listing per whitelist
Jan  9 12:09:30.269: INFO: namespace e2e-tests-projected-x9gpb deletion completed in 6.21581409s

• [SLOW TEST:20.202 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:09:30.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan  9 12:09:30.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5p2ng'
Jan  9 12:09:31.126: INFO: stderr: ""
Jan  9 12:09:31.126: INFO: stdout: "pod/pause created\n"
Jan  9 12:09:31.126: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan  9 12:09:31.127: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-5p2ng" to be "running and ready"
Jan  9 12:09:31.282: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 154.773338ms
Jan  9 12:09:33.296: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16953159s
Jan  9 12:09:35.312: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185453904s
Jan  9 12:09:37.325: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.198566994s
Jan  9 12:09:39.345: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217889887s
Jan  9 12:09:41.367: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.240524891s
Jan  9 12:09:41.367: INFO: Pod "pause" satisfied condition "running and ready"
Jan  9 12:09:41.367: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan  9 12:09:41.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-5p2ng'
Jan  9 12:09:41.570: INFO: stderr: ""
Jan  9 12:09:41.571: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan  9 12:09:41.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-5p2ng'
Jan  9 12:09:41.727: INFO: stderr: ""
Jan  9 12:09:41.727: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          10s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan  9 12:09:41.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-5p2ng'
Jan  9 12:09:41.875: INFO: stderr: ""
Jan  9 12:09:41.875: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan  9 12:09:41.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-5p2ng'
Jan  9 12:09:42.022: INFO: stderr: ""
Jan  9 12:09:42.022: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan  9 12:09:42.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-5p2ng'
Jan  9 12:09:42.192: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  9 12:09:42.192: INFO: stdout: "pod \"pause\" force deleted\n"
Jan  9 12:09:42.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-5p2ng'
Jan  9 12:09:42.448: INFO: stderr: "No resources found.\n"
Jan  9 12:09:42.448: INFO: stdout: ""
Jan  9 12:09:42.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-5p2ng -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  9 12:09:42.572: INFO: stderr: ""
Jan  9 12:09:42.572: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:09:42.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5p2ng" for this suite.
Jan  9 12:09:48.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:09:48.753: INFO: namespace: e2e-tests-kubectl-5p2ng, resource: bindings, ignored listing per whitelist
Jan  9 12:09:48.915: INFO: namespace e2e-tests-kubectl-5p2ng deletion completed in 6.307326552s

• [SLOW TEST:18.646 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:09:48.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  9 12:09:49.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan  9 12:09:49.187: INFO: stderr: ""
Jan  9 12:09:49.187: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan  9 12:09:49.200: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:09:49.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p45ss" for this suite.
Jan  9 12:09:55.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:09:55.538: INFO: namespace: e2e-tests-kubectl-p45ss, resource: bindings, ignored listing per whitelist
Jan  9 12:09:55.538: INFO: namespace e2e-tests-kubectl-p45ss deletion completed in 6.310600251s

S [SKIPPING] [6.623 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan  9 12:09:49.200: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:09:55.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-897s
STEP: Creating a pod to test atomic-volume-subpath
Jan  9 12:09:56.233: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-897s" in namespace "e2e-tests-subpath-69kc5" to be "success or failure"
Jan  9 12:09:56.261: INFO: Pod "pod-subpath-test-projected-897s": Phase="Pending", Reason="", readiness=false. Elapsed: 28.17658ms
Jan  9 12:09:58.285: INFO: Pod "pod-subpath-test-projected-897s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052234574s
Jan  9 12:10:00.303: INFO: Pod "pod-subpath-test-projected-897s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070354982s
Jan  9 12:10:02.327: INFO: Pod "pod-subpath-test-projected-897s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094589611s
Jan  9 12:10:04.336: INFO: Pod "pod-subpath-test-projected-897s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103394601s
Jan  9 12:10:06.345: INFO: Pod "pod-subpath-test-projected-897s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.112079508s
Jan  9 12:10:08.364: INFO: Pod "pod-subpath-test-projected-897s": Phase="Pending", Reason="", readiness=false. Elapsed: 12.131382225s
Jan  9 12:10:10.376: INFO: Pod "pod-subpath-test-projected-897s": Phase="Pending", Reason="", readiness=false. Elapsed: 14.143130519s
Jan  9 12:10:12.387: INFO: Pod "pod-subpath-test-projected-897s": Phase="Pending", Reason="", readiness=false. Elapsed: 16.154141506s
Jan  9 12:10:14.422: INFO: Pod "pod-subpath-test-projected-897s": Phase="Running", Reason="", readiness=false. Elapsed: 18.189332691s
Jan  9 12:10:16.440: INFO: Pod "pod-subpath-test-projected-897s": Phase="Running", Reason="", readiness=false. Elapsed: 20.207007323s
Jan  9 12:10:18.548: INFO: Pod "pod-subpath-test-projected-897s": Phase="Running", Reason="", readiness=false. Elapsed: 22.315132534s
Jan  9 12:10:20.590: INFO: Pod "pod-subpath-test-projected-897s": Phase="Running", Reason="", readiness=false. Elapsed: 24.356634104s
Jan  9 12:10:22.616: INFO: Pod "pod-subpath-test-projected-897s": Phase="Running", Reason="", readiness=false. Elapsed: 26.38339009s
Jan  9 12:10:24.629: INFO: Pod "pod-subpath-test-projected-897s": Phase="Running", Reason="", readiness=false. Elapsed: 28.395852359s
Jan  9 12:10:26.670: INFO: Pod "pod-subpath-test-projected-897s": Phase="Running", Reason="", readiness=false. Elapsed: 30.437051774s
Jan  9 12:10:28.704: INFO: Pod "pod-subpath-test-projected-897s": Phase="Running", Reason="", readiness=false. Elapsed: 32.471240976s
Jan  9 12:10:30.725: INFO: Pod "pod-subpath-test-projected-897s": Phase="Running", Reason="", readiness=false. Elapsed: 34.491980121s
Jan  9 12:10:32.808: INFO: Pod "pod-subpath-test-projected-897s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.575544177s
STEP: Saw pod success
Jan  9 12:10:32.808: INFO: Pod "pod-subpath-test-projected-897s" satisfied condition "success or failure"
Jan  9 12:10:32.816: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-897s container test-container-subpath-projected-897s: 
STEP: delete the pod
Jan  9 12:10:33.199: INFO: Waiting for pod pod-subpath-test-projected-897s to disappear
Jan  9 12:10:33.218: INFO: Pod pod-subpath-test-projected-897s no longer exists
STEP: Deleting pod pod-subpath-test-projected-897s
Jan  9 12:10:33.218: INFO: Deleting pod "pod-subpath-test-projected-897s" in namespace "e2e-tests-subpath-69kc5"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:10:33.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-69kc5" for this suite.
Jan  9 12:10:39.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:10:39.596: INFO: namespace: e2e-tests-subpath-69kc5, resource: bindings, ignored listing per whitelist
Jan  9 12:10:39.609: INFO: namespace e2e-tests-subpath-69kc5 deletion completed in 6.376651326s

• [SLOW TEST:44.070 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:10:39.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  9 12:10:49.948: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-0d2a1bbf-32d9-11ea-ac2d-0242ac110005,GenerateName:,Namespace:e2e-tests-events-7qznk,SelfLink:/api/v1/namespaces/e2e-tests-events-7qznk/pods/send-events-0d2a1bbf-32d9-11ea-ac2d-0242ac110005,UID:0d2a9534-32d9-11ea-a994-fa163e34d433,ResourceVersion:17697814,Generation:0,CreationTimestamp:2020-01-09 12:10:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 837704490,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6r25j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6r25j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-6r25j true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002229df0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002229ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 12:10:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 12:10:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 12:10:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 12:10:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-09 12:10:39 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-09 12:10:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://a097e313d63504e119b5d8fd4d2a6a5bea11ec9ed13dcc216bd8d563c2ba69d2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan  9 12:10:51.957: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  9 12:10:54.000: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:10:54.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-7qznk" for this suite.
Jan  9 12:11:34.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:11:34.339: INFO: namespace: e2e-tests-events-7qznk, resource: bindings, ignored listing per whitelist
Jan  9 12:11:34.342: INFO: namespace e2e-tests-events-7qznk deletion completed in 40.25451891s

• [SLOW TEST:54.733 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:11:34.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan  9 12:11:34.787: INFO: Waiting up to 5m0s for pod "client-containers-2de6e1e8-32d9-11ea-ac2d-0242ac110005" in namespace "e2e-tests-containers-srhpq" to be "success or failure"
Jan  9 12:11:34.821: INFO: Pod "client-containers-2de6e1e8-32d9-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.223178ms
Jan  9 12:11:37.117: INFO: Pod "client-containers-2de6e1e8-32d9-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.33065849s
Jan  9 12:11:39.165: INFO: Pod "client-containers-2de6e1e8-32d9-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378440841s
Jan  9 12:11:41.183: INFO: Pod "client-containers-2de6e1e8-32d9-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.395880681s
Jan  9 12:11:43.192: INFO: Pod "client-containers-2de6e1e8-32d9-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.405502581s
Jan  9 12:11:45.204: INFO: Pod "client-containers-2de6e1e8-32d9-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.417526148s
STEP: Saw pod success
Jan  9 12:11:45.204: INFO: Pod "client-containers-2de6e1e8-32d9-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:11:45.209: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-2de6e1e8-32d9-11ea-ac2d-0242ac110005 container test-container: 
STEP: delete the pod
Jan  9 12:11:45.683: INFO: Waiting for pod client-containers-2de6e1e8-32d9-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:11:45.741: INFO: Pod client-containers-2de6e1e8-32d9-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:11:45.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-srhpq" for this suite.
Jan  9 12:11:51.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:11:52.049: INFO: namespace: e2e-tests-containers-srhpq, resource: bindings, ignored listing per whitelist
Jan  9 12:11:52.067: INFO: namespace e2e-tests-containers-srhpq deletion completed in 6.310290599s

• [SLOW TEST:17.725 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:11:52.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  9 12:11:52.341: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan  9 12:11:52.386: INFO: Number of nodes with available pods: 0
Jan  9 12:11:52.386: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan  9 12:11:52.438: INFO: Number of nodes with available pods: 0
Jan  9 12:11:52.438: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:11:54.375: INFO: Number of nodes with available pods: 0
Jan  9 12:11:54.375: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:11:54.821: INFO: Number of nodes with available pods: 0
Jan  9 12:11:54.822: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:11:55.462: INFO: Number of nodes with available pods: 0
Jan  9 12:11:55.462: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:11:56.450: INFO: Number of nodes with available pods: 0
Jan  9 12:11:56.450: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:11:57.464: INFO: Number of nodes with available pods: 0
Jan  9 12:11:57.464: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:11:58.725: INFO: Number of nodes with available pods: 0
Jan  9 12:11:58.725: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:11:59.454: INFO: Number of nodes with available pods: 0
Jan  9 12:11:59.455: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:00.468: INFO: Number of nodes with available pods: 0
Jan  9 12:12:00.468: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:01.454: INFO: Number of nodes with available pods: 0
Jan  9 12:12:01.454: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:02.475: INFO: Number of nodes with available pods: 1
Jan  9 12:12:02.475: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan  9 12:12:02.676: INFO: Number of nodes with available pods: 1
Jan  9 12:12:02.676: INFO: Number of running nodes: 0, number of available pods: 1
Jan  9 12:12:03.702: INFO: Number of nodes with available pods: 0
Jan  9 12:12:03.702: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan  9 12:12:03.790: INFO: Number of nodes with available pods: 0
Jan  9 12:12:03.791: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:04.836: INFO: Number of nodes with available pods: 0
Jan  9 12:12:04.836: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:05.816: INFO: Number of nodes with available pods: 0
Jan  9 12:12:05.816: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:06.804: INFO: Number of nodes with available pods: 0
Jan  9 12:12:06.804: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:07.901: INFO: Number of nodes with available pods: 0
Jan  9 12:12:07.901: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:08.811: INFO: Number of nodes with available pods: 0
Jan  9 12:12:08.811: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:09.808: INFO: Number of nodes with available pods: 0
Jan  9 12:12:09.808: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:10.819: INFO: Number of nodes with available pods: 0
Jan  9 12:12:10.819: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:11.806: INFO: Number of nodes with available pods: 0
Jan  9 12:12:11.806: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:12.800: INFO: Number of nodes with available pods: 0
Jan  9 12:12:12.800: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:13.915: INFO: Number of nodes with available pods: 0
Jan  9 12:12:13.915: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:14.803: INFO: Number of nodes with available pods: 0
Jan  9 12:12:14.803: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:15.844: INFO: Number of nodes with available pods: 0
Jan  9 12:12:15.844: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:16.798: INFO: Number of nodes with available pods: 0
Jan  9 12:12:16.798: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:18.122: INFO: Number of nodes with available pods: 0
Jan  9 12:12:18.122: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:18.808: INFO: Number of nodes with available pods: 0
Jan  9 12:12:18.808: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:20.048: INFO: Number of nodes with available pods: 0
Jan  9 12:12:20.049: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:20.834: INFO: Number of nodes with available pods: 0
Jan  9 12:12:20.834: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:21.813: INFO: Number of nodes with available pods: 0
Jan  9 12:12:21.813: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:12:22.813: INFO: Number of nodes with available pods: 1
Jan  9 12:12:22.813: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-2xzz9, will wait for the garbage collector to delete the pods
Jan  9 12:12:22.897: INFO: Deleting DaemonSet.extensions daemon-set took: 12.706982ms
Jan  9 12:12:22.997: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.464041ms
Jan  9 12:12:30.803: INFO: Number of nodes with available pods: 0
Jan  9 12:12:30.803: INFO: Number of running nodes: 0, number of available pods: 0
Jan  9 12:12:30.806: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2xzz9/daemonsets","resourceVersion":"17698021"},"items":null}

Jan  9 12:12:30.808: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2xzz9/pods","resourceVersion":"17698021"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:12:30.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-2xzz9" for this suite.
Jan  9 12:12:36.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:12:37.040: INFO: namespace: e2e-tests-daemonsets-2xzz9, resource: bindings, ignored listing per whitelist
Jan  9 12:12:37.045: INFO: namespace e2e-tests-daemonsets-2xzz9 deletion completed in 6.194276757s

• [SLOW TEST:44.977 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:12:37.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:12:37.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jcbrc" for this suite.
Jan  9 12:13:01.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:13:01.624: INFO: namespace: e2e-tests-pods-jcbrc, resource: bindings, ignored listing per whitelist
Jan  9 12:13:01.722: INFO: namespace e2e-tests-pods-jcbrc deletion completed in 24.311688571s

• [SLOW TEST:24.676 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:13:01.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  9 12:13:01.994: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61d77b3c-32d9-11ea-ac2d-0242ac110005" in namespace "e2e-tests-downward-api-xwl72" to be "success or failure"
Jan  9 12:13:02.010: INFO: Pod "downwardapi-volume-61d77b3c-32d9-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.608313ms
Jan  9 12:13:04.383: INFO: Pod "downwardapi-volume-61d77b3c-32d9-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.388242673s
Jan  9 12:13:06.406: INFO: Pod "downwardapi-volume-61d77b3c-32d9-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.411349634s
Jan  9 12:13:08.932: INFO: Pod "downwardapi-volume-61d77b3c-32d9-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.937187318s
Jan  9 12:13:10.948: INFO: Pod "downwardapi-volume-61d77b3c-32d9-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.95292388s
Jan  9 12:13:12.975: INFO: Pod "downwardapi-volume-61d77b3c-32d9-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.979924098s
STEP: Saw pod success
Jan  9 12:13:12.975: INFO: Pod "downwardapi-volume-61d77b3c-32d9-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:13:12.983: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-61d77b3c-32d9-11ea-ac2d-0242ac110005 container client-container: 
STEP: delete the pod
Jan  9 12:13:13.118: INFO: Waiting for pod downwardapi-volume-61d77b3c-32d9-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:13:13.140: INFO: Pod downwardapi-volume-61d77b3c-32d9-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:13:13.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xwl72" for this suite.
Jan  9 12:13:19.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:13:19.467: INFO: namespace: e2e-tests-downward-api-xwl72, resource: bindings, ignored listing per whitelist
Jan  9 12:13:19.488: INFO: namespace e2e-tests-downward-api-xwl72 deletion completed in 6.342030706s

• [SLOW TEST:17.765 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:13:19.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  9 12:13:19.706: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan  9 12:13:24.722: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  9 12:13:28.764: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  9 12:13:28.878: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-xz2c2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xz2c2/deployments/test-cleanup-deployment,UID:71e0c267-32d9-11ea-a994-fa163e34d433,ResourceVersion:17698167,Generation:1,CreationTimestamp:2020-01-09 12:13:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan  9 12:13:28.886: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:13:28.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-xz2c2" for this suite.
Jan  9 12:13:39.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:13:39.112: INFO: namespace: e2e-tests-deployment-xz2c2, resource: bindings, ignored listing per whitelist
Jan  9 12:13:39.223: INFO: namespace e2e-tests-deployment-xz2c2 deletion completed in 10.306720625s

• [SLOW TEST:19.735 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:13:39.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-ds754
Jan  9 12:13:49.603: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-ds754
STEP: checking the pod's current state and verifying that restartCount is present
Jan  9 12:13:49.607: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:17:50.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-ds754" for this suite.
Jan  9 12:17:56.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:17:56.708: INFO: namespace: e2e-tests-container-probe-ds754, resource: bindings, ignored listing per whitelist
Jan  9 12:17:56.737: INFO: namespace e2e-tests-container-probe-ds754 deletion completed in 6.334986539s

• [SLOW TEST:257.514 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:17:56.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-11acd028-32da-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  9 12:17:56.935: INFO: Waiting up to 5m0s for pod "pod-secrets-11ad7690-32da-11ea-ac2d-0242ac110005" in namespace "e2e-tests-secrets-qftgq" to be "success or failure"
Jan  9 12:17:56.939: INFO: Pod "pod-secrets-11ad7690-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187691ms
Jan  9 12:17:58.947: INFO: Pod "pod-secrets-11ad7690-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012152707s
Jan  9 12:18:00.963: INFO: Pod "pod-secrets-11ad7690-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028760478s
Jan  9 12:18:03.144: INFO: Pod "pod-secrets-11ad7690-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209151447s
Jan  9 12:18:05.161: INFO: Pod "pod-secrets-11ad7690-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.226526356s
Jan  9 12:18:07.189: INFO: Pod "pod-secrets-11ad7690-32da-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.254096517s
STEP: Saw pod success
Jan  9 12:18:07.189: INFO: Pod "pod-secrets-11ad7690-32da-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:18:07.207: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-11ad7690-32da-11ea-ac2d-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  9 12:18:07.366: INFO: Waiting for pod pod-secrets-11ad7690-32da-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:18:07.386: INFO: Pod pod-secrets-11ad7690-32da-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:18:07.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qftgq" for this suite.
Jan  9 12:18:13.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:18:13.480: INFO: namespace: e2e-tests-secrets-qftgq, resource: bindings, ignored listing per whitelist
Jan  9 12:18:13.614: INFO: namespace e2e-tests-secrets-qftgq deletion completed in 6.217954434s

• [SLOW TEST:16.877 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
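Each run of "Waiting up to 5m0s for pod ... to be 'success or failure'" above is the framework polling the pod phase every ~2 seconds and logging the elapsed time until the phase leaves Pending. A sketch of that wait loop in the same shape (hypothetical names, injected clock instead of a real API client):

```python
import time

def wait_for_pod_phase(get_phase, target_phases=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns one of target_phases,
    logging elapsed time the way the e2e framework does."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.0f}s')
        if phase in target_phases:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)
```

"Saw pod success" corresponds to this loop returning with phase `Succeeded`.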
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:18:13.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  9 12:18:13.866: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1bbad791-32da-11ea-ac2d-0242ac110005" in namespace "e2e-tests-downward-api-v8t6j" to be "success or failure"
Jan  9 12:18:13.909: INFO: Pod "downwardapi-volume-1bbad791-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 43.298457ms
Jan  9 12:18:16.249: INFO: Pod "downwardapi-volume-1bbad791-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383140797s
Jan  9 12:18:18.270: INFO: Pod "downwardapi-volume-1bbad791-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403633561s
Jan  9 12:18:20.383: INFO: Pod "downwardapi-volume-1bbad791-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.517474522s
Jan  9 12:18:22.410: INFO: Pod "downwardapi-volume-1bbad791-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544498027s
Jan  9 12:18:24.429: INFO: Pod "downwardapi-volume-1bbad791-32da-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.563253536s
STEP: Saw pod success
Jan  9 12:18:24.429: INFO: Pod "downwardapi-volume-1bbad791-32da-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:18:24.439: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1bbad791-32da-11ea-ac2d-0242ac110005 container client-container: 
STEP: delete the pod
Jan  9 12:18:24.570: INFO: Waiting for pod downwardapi-volume-1bbad791-32da-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:18:24.580: INFO: Pod downwardapi-volume-1bbad791-32da-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:18:24.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-v8t6j" for this suite.
Jan  9 12:18:30.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:18:30.759: INFO: namespace: e2e-tests-downward-api-v8t6j, resource: bindings, ignored listing per whitelist
Jan  9 12:18:30.879: INFO: namespace e2e-tests-downward-api-v8t6j deletion completed in 6.283422367s

• [SLOW TEST:17.262 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:18:30.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  9 12:18:31.039: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26035fa3-32da-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-m5lx2" to be "success or failure"
Jan  9 12:18:31.053: INFO: Pod "downwardapi-volume-26035fa3-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.944601ms
Jan  9 12:18:33.343: INFO: Pod "downwardapi-volume-26035fa3-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303798042s
Jan  9 12:18:35.377: INFO: Pod "downwardapi-volume-26035fa3-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337645251s
Jan  9 12:18:37.504: INFO: Pod "downwardapi-volume-26035fa3-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.464526392s
Jan  9 12:18:39.522: INFO: Pod "downwardapi-volume-26035fa3-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.482352608s
Jan  9 12:18:41.535: INFO: Pod "downwardapi-volume-26035fa3-32da-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.495897377s
STEP: Saw pod success
Jan  9 12:18:41.535: INFO: Pod "downwardapi-volume-26035fa3-32da-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:18:41.541: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-26035fa3-32da-11ea-ac2d-0242ac110005 container client-container: 
STEP: delete the pod
Jan  9 12:18:42.362: INFO: Waiting for pod downwardapi-volume-26035fa3-32da-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:18:42.373: INFO: Pod downwardapi-volume-26035fa3-32da-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:18:42.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m5lx2" for this suite.
Jan  9 12:18:48.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:18:48.660: INFO: namespace: e2e-tests-projected-m5lx2, resource: bindings, ignored listing per whitelist
Jan  9 12:18:48.754: INFO: namespace e2e-tests-projected-m5lx2 deletion completed in 6.3624788s

• [SLOW TEST:17.875 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:18:48.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  9 12:18:49.093: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  9 12:18:49.142: INFO: Waiting for terminating namespaces to be deleted...
Jan  9 12:18:49.146: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan  9 12:18:49.161: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  9 12:18:49.161: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  9 12:18:49.161: INFO: 	Container coredns ready: true, restart count 0
Jan  9 12:18:49.161: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  9 12:18:49.161: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  9 12:18:49.161: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  9 12:18:49.161: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  9 12:18:49.161: INFO: 	Container weave ready: true, restart count 0
Jan  9 12:18:49.161: INFO: 	Container weave-npc ready: true, restart count 0
Jan  9 12:18:49.161: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  9 12:18:49.161: INFO: 	Container coredns ready: true, restart count 0
Jan  9 12:18:49.161: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  9 12:18:49.161: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-36e2a80a-32da-11ea-ac2d-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-36e2a80a-32da-11ea-ac2d-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-36e2a80a-32da-11ea-ac2d-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:19:11.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-vm776" for this suite.
Jan  9 12:19:33.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:19:33.764: INFO: namespace: e2e-tests-sched-pred-vm776, resource: bindings, ignored listing per whitelist
Jan  9 12:19:33.947: INFO: namespace e2e-tests-sched-pred-vm776 deletion completed in 22.433896942s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:45.193 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
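The STEP sequence above (launch an unlabeled pod to find a schedulable node, delete it, apply a random label to that node, relaunch the pod with a matching nodeSelector) exercises the scheduler's node-selector predicate. Its core check is plain label-subset matching; a minimal sketch (not the scheduler's actual code):

```python
def node_selector_matches(node_labels, node_selector):
    """A pod's nodeSelector is satisfied only if every key/value pair
    it lists appears verbatim in the node's labels."""
    return all(node_labels.get(key) == value for key, value in node_selector.items())
```

In the run above, the node is labeled `kubernetes.io/e2e-36e2a80a-32da-11ea-ac2d-0242ac110005=42` and the relaunched pod carries that same pair in its nodeSelector, so the predicate matches.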
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:19:33.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  9 12:19:34.228: INFO: Waiting up to 5m0s for pod "downward-api-4bab8ea2-32da-11ea-ac2d-0242ac110005" in namespace "e2e-tests-downward-api-cg2p9" to be "success or failure"
Jan  9 12:19:34.238: INFO: Pod "downward-api-4bab8ea2-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.722322ms
Jan  9 12:19:36.251: INFO: Pod "downward-api-4bab8ea2-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02329041s
Jan  9 12:19:38.265: INFO: Pod "downward-api-4bab8ea2-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036963837s
Jan  9 12:19:40.569: INFO: Pod "downward-api-4bab8ea2-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.341422071s
Jan  9 12:19:42.815: INFO: Pod "downward-api-4bab8ea2-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.586821331s
Jan  9 12:19:44.843: INFO: Pod "downward-api-4bab8ea2-32da-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.614565502s
STEP: Saw pod success
Jan  9 12:19:44.843: INFO: Pod "downward-api-4bab8ea2-32da-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:19:44.859: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-4bab8ea2-32da-11ea-ac2d-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  9 12:19:44.999: INFO: Waiting for pod downward-api-4bab8ea2-32da-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:19:45.005: INFO: Pod downward-api-4bab8ea2-32da-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:19:45.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cg2p9" for this suite.
Jan  9 12:19:51.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:19:51.163: INFO: namespace: e2e-tests-downward-api-cg2p9, resource: bindings, ignored listing per whitelist
Jan  9 12:19:51.219: INFO: namespace e2e-tests-downward-api-cg2p9 deletion completed in 6.207950546s

• [SLOW TEST:17.270 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
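The pod created above gets its own name, namespace, and IP injected through downward-API `fieldRef` env vars. A sketch of the relevant spec fragment built as plain dicts (the field paths are the real downward-API ones; the env var names are illustrative):

```python
def downward_api_env():
    """Env entries exposing pod metadata to the container via the downward API."""
    def field_ref(name, path):
        return {"name": name, "valueFrom": {"fieldRef": {"fieldPath": path}}}
    return [
        field_ref("POD_NAME", "metadata.name"),
        field_ref("POD_NAMESPACE", "metadata.namespace"),
        field_ref("POD_IP", "status.podIP"),
    ]
```

The test's dapi-container then just prints these variables and the framework greps its logs for the expected values.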
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:19:51.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  9 12:19:51.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6qs5f'
Jan  9 12:19:53.857: INFO: stderr: ""
Jan  9 12:19:53.857: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan  9 12:20:03.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6qs5f -o json'
Jan  9 12:20:04.026: INFO: stderr: ""
Jan  9 12:20:04.026: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-09T12:19:53Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-6qs5f\",\n        \"resourceVersion\": \"17698880\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-6qs5f/pods/e2e-test-nginx-pod\",\n        \"uid\": \"575afe19-32da-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-mj97l\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-mj97l\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-mj97l\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-09T12:19:53Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-09T12:20:03Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-09T12:20:03Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-09T12:19:53Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://8ca85a2549659c4eac8e1f6c32b7d72e8ef2f24e6097e092b408da33850ee1c1\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-09T12:20:02Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-09T12:19:53Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan  9 12:20:04.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-6qs5f'
Jan  9 12:20:04.410: INFO: stderr: ""
Jan  9 12:20:04.411: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan  9 12:20:04.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6qs5f'
Jan  9 12:20:13.608: INFO: stderr: ""
Jan  9 12:20:13.608: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:20:13.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6qs5f" for this suite.
Jan  9 12:20:19.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:20:19.812: INFO: namespace: e2e-tests-kubectl-6qs5f, resource: bindings, ignored listing per whitelist
Jan  9 12:20:19.850: INFO: namespace e2e-tests-kubectl-6qs5f deletion completed in 6.213948369s

• [SLOW TEST:28.631 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
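The replace step above pipes an edited copy of the pod JSON (dumped at 12:20:04) back through `kubectl replace -f -`, with the container image swapped to docker.io/library/busybox:1.29. The edit itself is a one-field change; a sketch of it (hypothetical helper, not kubectl's code):

```python
import copy

def with_replaced_image(pod_obj, new_image):
    """Return a copy of a Pod object with every container image swapped,
    suitable for feeding to `kubectl replace -f -`."""
    pod = copy.deepcopy(pod_obj)
    for container in pod["spec"]["containers"]:
        container["image"] = new_image
    return pod
```

`replace` then performs a full-object update, after which the test verifies the pod reports the new image.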
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:20:19.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:20:30.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-q7f6s" for this suite.
Jan  9 12:21:12.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:21:12.387: INFO: namespace: e2e-tests-kubelet-test-q7f6s, resource: bindings, ignored listing per whitelist
Jan  9 12:21:12.579: INFO: namespace e2e-tests-kubelet-test-q7f6s deletion completed in 42.271059098s

• [SLOW TEST:52.728 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:21:12.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  9 12:21:12.700: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:21:29.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-djrm2" for this suite.
Jan  9 12:21:35.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:21:36.020: INFO: namespace: e2e-tests-init-container-djrm2, resource: bindings, ignored listing per whitelist
Jan  9 12:21:36.058: INFO: namespace e2e-tests-init-container-djrm2 deletion completed in 6.24314888s

• [SLOW TEST:23.479 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:21:36.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-94708c9f-32da-11ea-ac2d-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-94708c5a-32da-11ea-ac2d-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan  9 12:21:36.312: INFO: Waiting up to 5m0s for pod "projected-volume-94708bae-32da-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-jfj8j" to be "success or failure"
Jan  9 12:21:36.329: INFO: Pod "projected-volume-94708bae-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.682826ms
Jan  9 12:21:38.358: INFO: Pod "projected-volume-94708bae-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045478602s
Jan  9 12:21:40.418: INFO: Pod "projected-volume-94708bae-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105690089s
Jan  9 12:21:42.464: INFO: Pod "projected-volume-94708bae-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151287097s
Jan  9 12:21:44.551: INFO: Pod "projected-volume-94708bae-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.238400334s
Jan  9 12:21:46.602: INFO: Pod "projected-volume-94708bae-32da-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.289756224s
STEP: Saw pod success
Jan  9 12:21:46.602: INFO: Pod "projected-volume-94708bae-32da-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:21:46.617: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-94708bae-32da-11ea-ac2d-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Jan  9 12:21:46.802: INFO: Waiting for pod projected-volume-94708bae-32da-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:21:46.835: INFO: Pod projected-volume-94708bae-32da-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:21:46.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jfj8j" for this suite.
Jan  9 12:21:52.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:21:53.124: INFO: namespace: e2e-tests-projected-jfj8j, resource: bindings, ignored listing per whitelist
Jan  9 12:21:53.135: INFO: namespace e2e-tests-projected-jfj8j deletion completed in 6.21093481s

• [SLOW TEST:17.076 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:21:53.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0109 12:22:34.329726       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  9 12:22:34.329: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:22:34.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-svszp" for this suite.
Jan  9 12:22:46.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:22:50.258: INFO: namespace: e2e-tests-gc-svszp, resource: bindings, ignored listing per whitelist
Jan  9 12:22:50.374: INFO: namespace e2e-tests-gc-svszp deletion completed in 16.039403629s

• [SLOW TEST:57.239 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
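(The garbage-collector spec above deletes the ReplicationController with orphan semantics and then waits 30 seconds to confirm the pods survive. A toy model of that policy is sketched below — the types and `deleteOwner` function are illustrative only, not the real garbage collector or API machinery.)

```go
package main

import "fmt"

// DeletionPropagation mirrors the delete-options policy names: with
// Orphan, deleting an owner must leave its dependents alive with the
// owner reference cleared; with Background, dependents are collected.
type DeletionPropagation string

const (
	PropagationOrphan     DeletionPropagation = "Orphan"
	PropagationBackground DeletionPropagation = "Background"
)

type object struct {
	name   string
	owner  string // empty when the object has no owner reference
	exists bool
}

// deleteOwner removes the named owner and applies the propagation
// policy to every dependent in objs.
func deleteOwner(objs []*object, owner string, policy DeletionPropagation) {
	for _, o := range objs {
		if o.name == owner {
			o.exists = false
		}
		if o.owner == owner {
			if policy == PropagationOrphan {
				o.owner = "" // orphaned: kept, owner reference cleared
			} else {
				o.exists = false // collected along with the owner
			}
		}
	}
}

func main() {
	objs := []*object{
		{name: "rc", exists: true},
		{name: "pod-1", owner: "rc", exists: true},
		{name: "pod-2", owner: "rc", exists: true},
	}
	deleteOwner(objs, "rc", PropagationOrphan)
	for _, o := range objs {
		fmt.Printf("%s exists=%v owner=%q\n", o.name, o.exists, o.owner)
	}
}
```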
SS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:22:50.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan  9 12:23:22.114: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnm9f PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  9 12:23:22.114: INFO: >>> kubeConfig: /root/.kube/config
I0109 12:23:22.208320       9 log.go:172] (0xc000d38c60) (0xc00259b400) Create stream
I0109 12:23:22.208516       9 log.go:172] (0xc000d38c60) (0xc00259b400) Stream added, broadcasting: 1
I0109 12:23:22.214646       9 log.go:172] (0xc000d38c60) Reply frame received for 1
I0109 12:23:22.214695       9 log.go:172] (0xc000d38c60) (0xc001d6dcc0) Create stream
I0109 12:23:22.214702       9 log.go:172] (0xc000d38c60) (0xc001d6dcc0) Stream added, broadcasting: 3
I0109 12:23:22.216229       9 log.go:172] (0xc000d38c60) Reply frame received for 3
I0109 12:23:22.216267       9 log.go:172] (0xc000d38c60) (0xc00259b540) Create stream
I0109 12:23:22.216282       9 log.go:172] (0xc000d38c60) (0xc00259b540) Stream added, broadcasting: 5
I0109 12:23:22.217618       9 log.go:172] (0xc000d38c60) Reply frame received for 5
I0109 12:23:22.369991       9 log.go:172] (0xc000d38c60) Data frame received for 3
I0109 12:23:22.370192       9 log.go:172] (0xc001d6dcc0) (3) Data frame handling
I0109 12:23:22.370254       9 log.go:172] (0xc001d6dcc0) (3) Data frame sent
I0109 12:23:22.632600       9 log.go:172] (0xc000d38c60) (0xc001d6dcc0) Stream removed, broadcasting: 3
I0109 12:23:22.632904       9 log.go:172] (0xc000d38c60) Data frame received for 1
I0109 12:23:22.632921       9 log.go:172] (0xc00259b400) (1) Data frame handling
I0109 12:23:22.632938       9 log.go:172] (0xc00259b400) (1) Data frame sent
I0109 12:23:22.632946       9 log.go:172] (0xc000d38c60) (0xc00259b400) Stream removed, broadcasting: 1
I0109 12:23:22.633193       9 log.go:172] (0xc000d38c60) (0xc00259b540) Stream removed, broadcasting: 5
I0109 12:23:22.633402       9 log.go:172] (0xc000d38c60) Go away received
I0109 12:23:22.633597       9 log.go:172] (0xc000d38c60) (0xc00259b400) Stream removed, broadcasting: 1
I0109 12:23:22.633635       9 log.go:172] (0xc000d38c60) (0xc001d6dcc0) Stream removed, broadcasting: 3
I0109 12:23:22.633665       9 log.go:172] (0xc000d38c60) (0xc00259b540) Stream removed, broadcasting: 5
Jan  9 12:23:22.633: INFO: Exec stderr: ""
Jan  9 12:23:22.633: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnm9f PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  9 12:23:22.634: INFO: >>> kubeConfig: /root/.kube/config
I0109 12:23:22.806081       9 log.go:172] (0xc000176a50) (0xc001d6df40) Create stream
I0109 12:23:22.806292       9 log.go:172] (0xc000176a50) (0xc001d6df40) Stream added, broadcasting: 1
I0109 12:23:22.846058       9 log.go:172] (0xc000176a50) Reply frame received for 1
I0109 12:23:22.846219       9 log.go:172] (0xc000176a50) (0xc0027b2be0) Create stream
I0109 12:23:22.846246       9 log.go:172] (0xc000176a50) (0xc0027b2be0) Stream added, broadcasting: 3
I0109 12:23:22.848566       9 log.go:172] (0xc000176a50) Reply frame received for 3
I0109 12:23:22.848641       9 log.go:172] (0xc000176a50) (0xc0020ac000) Create stream
I0109 12:23:22.848648       9 log.go:172] (0xc000176a50) (0xc0020ac000) Stream added, broadcasting: 5
I0109 12:23:22.849874       9 log.go:172] (0xc000176a50) Reply frame received for 5
I0109 12:23:22.974230       9 log.go:172] (0xc000176a50) Data frame received for 3
I0109 12:23:22.974498       9 log.go:172] (0xc0027b2be0) (3) Data frame handling
I0109 12:23:22.974534       9 log.go:172] (0xc0027b2be0) (3) Data frame sent
I0109 12:23:23.106576       9 log.go:172] (0xc000176a50) (0xc0027b2be0) Stream removed, broadcasting: 3
I0109 12:23:23.106707       9 log.go:172] (0xc000176a50) Data frame received for 1
I0109 12:23:23.106733       9 log.go:172] (0xc001d6df40) (1) Data frame handling
I0109 12:23:23.106779       9 log.go:172] (0xc001d6df40) (1) Data frame sent
I0109 12:23:23.106800       9 log.go:172] (0xc000176a50) (0xc001d6df40) Stream removed, broadcasting: 1
I0109 12:23:23.107066       9 log.go:172] (0xc000176a50) (0xc0020ac000) Stream removed, broadcasting: 5
I0109 12:23:23.107102       9 log.go:172] (0xc000176a50) Go away received
I0109 12:23:23.107238       9 log.go:172] (0xc000176a50) (0xc001d6df40) Stream removed, broadcasting: 1
I0109 12:23:23.107258       9 log.go:172] (0xc000176a50) (0xc0027b2be0) Stream removed, broadcasting: 3
I0109 12:23:23.107300       9 log.go:172] (0xc000176a50) (0xc0020ac000) Stream removed, broadcasting: 5
Jan  9 12:23:23.107: INFO: Exec stderr: ""
Jan  9 12:23:23.107: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnm9f PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  9 12:23:23.107: INFO: >>> kubeConfig: /root/.kube/config
I0109 12:23:23.193640       9 log.go:172] (0xc0020fa2c0) (0xc0027b2f00) Create stream
I0109 12:23:23.193689       9 log.go:172] (0xc0020fa2c0) (0xc0027b2f00) Stream added, broadcasting: 1
I0109 12:23:23.199121       9 log.go:172] (0xc0020fa2c0) Reply frame received for 1
I0109 12:23:23.199161       9 log.go:172] (0xc0020fa2c0) (0xc00259b5e0) Create stream
I0109 12:23:23.199177       9 log.go:172] (0xc0020fa2c0) (0xc00259b5e0) Stream added, broadcasting: 3
I0109 12:23:23.200756       9 log.go:172] (0xc0020fa2c0) Reply frame received for 3
I0109 12:23:23.200802       9 log.go:172] (0xc0020fa2c0) (0xc0027b2fa0) Create stream
I0109 12:23:23.200820       9 log.go:172] (0xc0020fa2c0) (0xc0027b2fa0) Stream added, broadcasting: 5
I0109 12:23:23.202011       9 log.go:172] (0xc0020fa2c0) Reply frame received for 5
I0109 12:23:23.287521       9 log.go:172] (0xc0020fa2c0) Data frame received for 3
I0109 12:23:23.287687       9 log.go:172] (0xc00259b5e0) (3) Data frame handling
I0109 12:23:23.287708       9 log.go:172] (0xc00259b5e0) (3) Data frame sent
I0109 12:23:23.402215       9 log.go:172] (0xc0020fa2c0) Data frame received for 1
I0109 12:23:23.402456       9 log.go:172] (0xc0020fa2c0) (0xc0027b2fa0) Stream removed, broadcasting: 5
I0109 12:23:23.402566       9 log.go:172] (0xc0027b2f00) (1) Data frame handling
I0109 12:23:23.402630       9 log.go:172] (0xc0027b2f00) (1) Data frame sent
I0109 12:23:23.402699       9 log.go:172] (0xc0020fa2c0) (0xc00259b5e0) Stream removed, broadcasting: 3
I0109 12:23:23.402774       9 log.go:172] (0xc0020fa2c0) (0xc0027b2f00) Stream removed, broadcasting: 1
I0109 12:23:23.402817       9 log.go:172] (0xc0020fa2c0) Go away received
I0109 12:23:23.403449       9 log.go:172] (0xc0020fa2c0) (0xc0027b2f00) Stream removed, broadcasting: 1
I0109 12:23:23.403552       9 log.go:172] (0xc0020fa2c0) (0xc00259b5e0) Stream removed, broadcasting: 3
I0109 12:23:23.403571       9 log.go:172] (0xc0020fa2c0) (0xc0027b2fa0) Stream removed, broadcasting: 5
Jan  9 12:23:23.403: INFO: Exec stderr: ""
Jan  9 12:23:23.403: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnm9f PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  9 12:23:23.403: INFO: >>> kubeConfig: /root/.kube/config
I0109 12:23:23.465232       9 log.go:172] (0xc0020fa790) (0xc0027b3220) Create stream
I0109 12:23:23.465295       9 log.go:172] (0xc0020fa790) (0xc0027b3220) Stream added, broadcasting: 1
I0109 12:23:23.473020       9 log.go:172] (0xc0020fa790) Reply frame received for 1
I0109 12:23:23.473173       9 log.go:172] (0xc0020fa790) (0xc001b6c8c0) Create stream
I0109 12:23:23.473213       9 log.go:172] (0xc0020fa790) (0xc001b6c8c0) Stream added, broadcasting: 3
I0109 12:23:23.475386       9 log.go:172] (0xc0020fa790) Reply frame received for 3
I0109 12:23:23.475432       9 log.go:172] (0xc0020fa790) (0xc0027b32c0) Create stream
I0109 12:23:23.475440       9 log.go:172] (0xc0020fa790) (0xc0027b32c0) Stream added, broadcasting: 5
I0109 12:23:23.476485       9 log.go:172] (0xc0020fa790) Reply frame received for 5
I0109 12:23:23.574883       9 log.go:172] (0xc0020fa790) Data frame received for 3
I0109 12:23:23.574943       9 log.go:172] (0xc001b6c8c0) (3) Data frame handling
I0109 12:23:23.574970       9 log.go:172] (0xc001b6c8c0) (3) Data frame sent
I0109 12:23:23.710474       9 log.go:172] (0xc0020fa790) Data frame received for 1
I0109 12:23:23.710700       9 log.go:172] (0xc0027b3220) (1) Data frame handling
I0109 12:23:23.710814       9 log.go:172] (0xc0027b3220) (1) Data frame sent
I0109 12:23:23.711059       9 log.go:172] (0xc0020fa790) (0xc0027b3220) Stream removed, broadcasting: 1
I0109 12:23:23.711427       9 log.go:172] (0xc0020fa790) (0xc001b6c8c0) Stream removed, broadcasting: 3
I0109 12:23:23.711586       9 log.go:172] (0xc0020fa790) (0xc0027b32c0) Stream removed, broadcasting: 5
I0109 12:23:23.711802       9 log.go:172] (0xc0020fa790) (0xc0027b3220) Stream removed, broadcasting: 1
I0109 12:23:23.711948       9 log.go:172] (0xc0020fa790) (0xc001b6c8c0) Stream removed, broadcasting: 3
I0109 12:23:23.711968       9 log.go:172] (0xc0020fa790) (0xc0027b32c0) Stream removed, broadcasting: 5
Jan  9 12:23:23.712: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan  9 12:23:23.712: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnm9f PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  9 12:23:23.712: INFO: >>> kubeConfig: /root/.kube/config
I0109 12:23:23.713441       9 log.go:172] (0xc0020fa790) Go away received
I0109 12:23:23.852204       9 log.go:172] (0xc0020fac60) (0xc0027b3540) Create stream
I0109 12:23:23.852431       9 log.go:172] (0xc0020fac60) (0xc0027b3540) Stream added, broadcasting: 1
I0109 12:23:23.873955       9 log.go:172] (0xc0020fac60) Reply frame received for 1
I0109 12:23:23.874088       9 log.go:172] (0xc0020fac60) (0xc0020ac1e0) Create stream
I0109 12:23:23.874109       9 log.go:172] (0xc0020fac60) (0xc0020ac1e0) Stream added, broadcasting: 3
I0109 12:23:23.876040       9 log.go:172] (0xc0020fac60) Reply frame received for 3
I0109 12:23:23.876121       9 log.go:172] (0xc0020fac60) (0xc0027b35e0) Create stream
I0109 12:23:23.876137       9 log.go:172] (0xc0020fac60) (0xc0027b35e0) Stream added, broadcasting: 5
I0109 12:23:23.877488       9 log.go:172] (0xc0020fac60) Reply frame received for 5
I0109 12:23:24.033903       9 log.go:172] (0xc0020fac60) Data frame received for 3
I0109 12:23:24.033979       9 log.go:172] (0xc0020ac1e0) (3) Data frame handling
I0109 12:23:24.034021       9 log.go:172] (0xc0020ac1e0) (3) Data frame sent
I0109 12:23:24.149459       9 log.go:172] (0xc0020fac60) (0xc0020ac1e0) Stream removed, broadcasting: 3
I0109 12:23:24.149616       9 log.go:172] (0xc0020fac60) Data frame received for 1
I0109 12:23:24.149689       9 log.go:172] (0xc0020fac60) (0xc0027b35e0) Stream removed, broadcasting: 5
I0109 12:23:24.149772       9 log.go:172] (0xc0027b3540) (1) Data frame handling
I0109 12:23:24.149850       9 log.go:172] (0xc0027b3540) (1) Data frame sent
I0109 12:23:24.149903       9 log.go:172] (0xc0020fac60) (0xc0027b3540) Stream removed, broadcasting: 1
I0109 12:23:24.149927       9 log.go:172] (0xc0020fac60) Go away received
I0109 12:23:24.150127       9 log.go:172] (0xc0020fac60) (0xc0027b3540) Stream removed, broadcasting: 1
I0109 12:23:24.150151       9 log.go:172] (0xc0020fac60) (0xc0020ac1e0) Stream removed, broadcasting: 3
I0109 12:23:24.150320       9 log.go:172] (0xc0020fac60) (0xc0027b35e0) Stream removed, broadcasting: 5
Jan  9 12:23:24.150: INFO: Exec stderr: ""
Jan  9 12:23:24.150: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnm9f PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  9 12:23:24.150: INFO: >>> kubeConfig: /root/.kube/config
I0109 12:23:24.235154       9 log.go:172] (0xc000d08580) (0xc001b6cb40) Create stream
I0109 12:23:24.235342       9 log.go:172] (0xc000d08580) (0xc001b6cb40) Stream added, broadcasting: 1
I0109 12:23:24.239600       9 log.go:172] (0xc000d08580) Reply frame received for 1
I0109 12:23:24.239653       9 log.go:172] (0xc000d08580) (0xc00259b720) Create stream
I0109 12:23:24.239668       9 log.go:172] (0xc000d08580) (0xc00259b720) Stream added, broadcasting: 3
I0109 12:23:24.240576       9 log.go:172] (0xc000d08580) Reply frame received for 3
I0109 12:23:24.240630       9 log.go:172] (0xc000d08580) (0xc00259b7c0) Create stream
I0109 12:23:24.240638       9 log.go:172] (0xc000d08580) (0xc00259b7c0) Stream added, broadcasting: 5
I0109 12:23:24.242312       9 log.go:172] (0xc000d08580) Reply frame received for 5
I0109 12:23:24.324645       9 log.go:172] (0xc000d08580) Data frame received for 3
I0109 12:23:24.324713       9 log.go:172] (0xc00259b720) (3) Data frame handling
I0109 12:23:24.324754       9 log.go:172] (0xc00259b720) (3) Data frame sent
I0109 12:23:24.431259       9 log.go:172] (0xc000d08580) (0xc00259b720) Stream removed, broadcasting: 3
I0109 12:23:24.431416       9 log.go:172] (0xc000d08580) Data frame received for 1
I0109 12:23:24.431460       9 log.go:172] (0xc000d08580) (0xc00259b7c0) Stream removed, broadcasting: 5
I0109 12:23:24.431510       9 log.go:172] (0xc001b6cb40) (1) Data frame handling
I0109 12:23:24.431787       9 log.go:172] (0xc001b6cb40) (1) Data frame sent
I0109 12:23:24.431859       9 log.go:172] (0xc000d08580) (0xc001b6cb40) Stream removed, broadcasting: 1
I0109 12:23:24.431900       9 log.go:172] (0xc000d08580) Go away received
I0109 12:23:24.432102       9 log.go:172] (0xc000d08580) (0xc001b6cb40) Stream removed, broadcasting: 1
I0109 12:23:24.432134       9 log.go:172] (0xc000d08580) (0xc00259b720) Stream removed, broadcasting: 3
I0109 12:23:24.432153       9 log.go:172] (0xc000d08580) (0xc00259b7c0) Stream removed, broadcasting: 5
Jan  9 12:23:24.432: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan  9 12:23:24.432: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnm9f PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  9 12:23:24.432: INFO: >>> kubeConfig: /root/.kube/config
I0109 12:23:24.524649       9 log.go:172] (0xc000d39130) (0xc00259bae0) Create stream
I0109 12:23:24.524828       9 log.go:172] (0xc000d39130) (0xc00259bae0) Stream added, broadcasting: 1
I0109 12:23:24.576277       9 log.go:172] (0xc000d39130) Reply frame received for 1
I0109 12:23:24.576423       9 log.go:172] (0xc000d39130) (0xc001c48000) Create stream
I0109 12:23:24.576455       9 log.go:172] (0xc000d39130) (0xc001c48000) Stream added, broadcasting: 3
I0109 12:23:24.577876       9 log.go:172] (0xc000d39130) Reply frame received for 3
I0109 12:23:24.577911       9 log.go:172] (0xc000d39130) (0xc001698000) Create stream
I0109 12:23:24.577939       9 log.go:172] (0xc000d39130) (0xc001698000) Stream added, broadcasting: 5
I0109 12:23:24.587957       9 log.go:172] (0xc000d39130) Reply frame received for 5
I0109 12:23:24.709681       9 log.go:172] (0xc000d39130) Data frame received for 3
I0109 12:23:24.709785       9 log.go:172] (0xc001c48000) (3) Data frame handling
I0109 12:23:24.709826       9 log.go:172] (0xc001c48000) (3) Data frame sent
I0109 12:23:24.908772       9 log.go:172] (0xc000d39130) Data frame received for 1
I0109 12:23:24.908847       9 log.go:172] (0xc000d39130) (0xc001c48000) Stream removed, broadcasting: 3
I0109 12:23:24.908893       9 log.go:172] (0xc00259bae0) (1) Data frame handling
I0109 12:23:24.908923       9 log.go:172] (0xc00259bae0) (1) Data frame sent
I0109 12:23:24.908955       9 log.go:172] (0xc000d39130) (0xc001698000) Stream removed, broadcasting: 5
I0109 12:23:24.908981       9 log.go:172] (0xc000d39130) (0xc00259bae0) Stream removed, broadcasting: 1
I0109 12:23:24.909004       9 log.go:172] (0xc000d39130) Go away received
I0109 12:23:24.909156       9 log.go:172] (0xc000d39130) (0xc00259bae0) Stream removed, broadcasting: 1
I0109 12:23:24.909175       9 log.go:172] (0xc000d39130) (0xc001c48000) Stream removed, broadcasting: 3
I0109 12:23:24.909185       9 log.go:172] (0xc000d39130) (0xc001698000) Stream removed, broadcasting: 5
Jan  9 12:23:24.909: INFO: Exec stderr: ""
Jan  9 12:23:24.909: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnm9f PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  9 12:23:24.909: INFO: >>> kubeConfig: /root/.kube/config
I0109 12:23:25.032110       9 log.go:172] (0xc000d388f0) (0xc001698280) Create stream
I0109 12:23:25.032207       9 log.go:172] (0xc000d388f0) (0xc001698280) Stream added, broadcasting: 1
I0109 12:23:25.050968       9 log.go:172] (0xc000d388f0) Reply frame received for 1
I0109 12:23:25.051097       9 log.go:172] (0xc000d388f0) (0xc001c480a0) Create stream
I0109 12:23:25.051112       9 log.go:172] (0xc000d388f0) (0xc001c480a0) Stream added, broadcasting: 3
I0109 12:23:25.057318       9 log.go:172] (0xc000d388f0) Reply frame received for 3
I0109 12:23:25.057389       9 log.go:172] (0xc000d388f0) (0xc001d6c000) Create stream
I0109 12:23:25.057420       9 log.go:172] (0xc000d388f0) (0xc001d6c000) Stream added, broadcasting: 5
I0109 12:23:25.065641       9 log.go:172] (0xc000d388f0) Reply frame received for 5
I0109 12:23:25.242441       9 log.go:172] (0xc000d388f0) Data frame received for 3
I0109 12:23:25.242487       9 log.go:172] (0xc001c480a0) (3) Data frame handling
I0109 12:23:25.242521       9 log.go:172] (0xc001c480a0) (3) Data frame sent
I0109 12:23:25.408897       9 log.go:172] (0xc000d388f0) Data frame received for 1
I0109 12:23:25.408978       9 log.go:172] (0xc000d388f0) (0xc001d6c000) Stream removed, broadcasting: 5
I0109 12:23:25.409022       9 log.go:172] (0xc001698280) (1) Data frame handling
I0109 12:23:25.409045       9 log.go:172] (0xc000d388f0) (0xc001c480a0) Stream removed, broadcasting: 3
I0109 12:23:25.409055       9 log.go:172] (0xc001698280) (1) Data frame sent
I0109 12:23:25.409073       9 log.go:172] (0xc000d388f0) (0xc001698280) Stream removed, broadcasting: 1
I0109 12:23:25.409235       9 log.go:172] (0xc000d388f0) (0xc001698280) Stream removed, broadcasting: 1
I0109 12:23:25.409245       9 log.go:172] (0xc000d388f0) (0xc001c480a0) Stream removed, broadcasting: 3
I0109 12:23:25.409254       9 log.go:172] (0xc000d388f0) (0xc001d6c000) Stream removed, broadcasting: 5
Jan  9 12:23:25.409: INFO: Exec stderr: ""
Jan  9 12:23:25.409: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnm9f PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  9 12:23:25.409: INFO: >>> kubeConfig: /root/.kube/config
I0109 12:23:25.476893       9 log.go:172] (0xc000176a50) (0xc0018d0280) Create stream
I0109 12:23:25.476973       9 log.go:172] (0xc000176a50) (0xc0018d0280) Stream added, broadcasting: 1
I0109 12:23:25.481384       9 log.go:172] (0xc000176a50) Reply frame received for 1
I0109 12:23:25.481438       9 log.go:172] (0xc000176a50) (0xc001d6c0a0) Create stream
I0109 12:23:25.481451       9 log.go:172] (0xc000176a50) (0xc001d6c0a0) Stream added, broadcasting: 3
I0109 12:23:25.482259       9 log.go:172] (0xc000176a50) Reply frame received for 3
I0109 12:23:25.482289       9 log.go:172] (0xc000176a50) (0xc001698320) Create stream
I0109 12:23:25.482298       9 log.go:172] (0xc000176a50) (0xc001698320) Stream added, broadcasting: 5
I0109 12:23:25.483981       9 log.go:172] (0xc000176a50) Reply frame received for 5
I0109 12:23:25.593752       9 log.go:172] (0xc000176a50) Data frame received for 3
I0109 12:23:25.593812       9 log.go:172] (0xc001d6c0a0) (3) Data frame handling
I0109 12:23:25.593837       9 log.go:172] (0xc001d6c0a0) (3) Data frame sent
I0109 12:23:25.708341       9 log.go:172] (0xc000176a50) Data frame received for 1
I0109 12:23:25.708428       9 log.go:172] (0xc0018d0280) (1) Data frame handling
I0109 12:23:25.708546       9 log.go:172] (0xc0018d0280) (1) Data frame sent
I0109 12:23:25.709838       9 log.go:172] (0xc000176a50) (0xc0018d0280) Stream removed, broadcasting: 1
I0109 12:23:25.710104       9 log.go:172] (0xc000176a50) (0xc001d6c0a0) Stream removed, broadcasting: 3
I0109 12:23:25.710271       9 log.go:172] (0xc000176a50) (0xc001698320) Stream removed, broadcasting: 5
I0109 12:23:25.710304       9 log.go:172] (0xc000176a50) Go away received
I0109 12:23:25.710370       9 log.go:172] (0xc000176a50) (0xc0018d0280) Stream removed, broadcasting: 1
I0109 12:23:25.710382       9 log.go:172] (0xc000176a50) (0xc001d6c0a0) Stream removed, broadcasting: 3
I0109 12:23:25.710390       9 log.go:172] (0xc000176a50) (0xc001698320) Stream removed, broadcasting: 5
Jan  9 12:23:25.710: INFO: Exec stderr: ""
Jan  9 12:23:25.710: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-jnm9f PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  9 12:23:25.710: INFO: >>> kubeConfig: /root/.kube/config
I0109 12:23:25.785589       9 log.go:172] (0xc0027d44d0) (0xc000e8e640) Create stream
I0109 12:23:25.785747       9 log.go:172] (0xc0027d44d0) (0xc000e8e640) Stream added, broadcasting: 1
I0109 12:23:25.791403       9 log.go:172] (0xc0027d44d0) Reply frame received for 1
I0109 12:23:25.791487       9 log.go:172] (0xc0027d44d0) (0xc001d6c140) Create stream
I0109 12:23:25.791508       9 log.go:172] (0xc0027d44d0) (0xc001d6c140) Stream added, broadcasting: 3
I0109 12:23:25.793643       9 log.go:172] (0xc0027d44d0) Reply frame received for 3
I0109 12:23:25.793752       9 log.go:172] (0xc0027d44d0) (0xc001c48140) Create stream
I0109 12:23:25.793762       9 log.go:172] (0xc0027d44d0) (0xc001c48140) Stream added, broadcasting: 5
I0109 12:23:25.795064       9 log.go:172] (0xc0027d44d0) Reply frame received for 5
I0109 12:23:25.943973       9 log.go:172] (0xc0027d44d0) Data frame received for 3
I0109 12:23:25.944004       9 log.go:172] (0xc001d6c140) (3) Data frame handling
I0109 12:23:25.944024       9 log.go:172] (0xc001d6c140) (3) Data frame sent
I0109 12:23:26.121667       9 log.go:172] (0xc0027d44d0) (0xc001d6c140) Stream removed, broadcasting: 3
I0109 12:23:26.121777       9 log.go:172] (0xc0027d44d0) Data frame received for 1
I0109 12:23:26.121819       9 log.go:172] (0xc000e8e640) (1) Data frame handling
I0109 12:23:26.121834       9 log.go:172] (0xc000e8e640) (1) Data frame sent
I0109 12:23:26.121847       9 log.go:172] (0xc0027d44d0) (0xc001c48140) Stream removed, broadcasting: 5
I0109 12:23:26.121882       9 log.go:172] (0xc0027d44d0) (0xc000e8e640) Stream removed, broadcasting: 1
I0109 12:23:26.121920       9 log.go:172] (0xc0027d44d0) Go away received
I0109 12:23:26.122131       9 log.go:172] (0xc0027d44d0) (0xc000e8e640) Stream removed, broadcasting: 1
I0109 12:23:26.122188       9 log.go:172] (0xc0027d44d0) (0xc001d6c140) Stream removed, broadcasting: 3
I0109 12:23:26.122212       9 log.go:172] (0xc0027d44d0) (0xc001c48140) Stream removed, broadcasting: 5
Jan  9 12:23:26.122: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:23:26.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-jnm9f" for this suite.
Jan  9 12:24:10.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:24:10.225: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-jnm9f, resource: bindings, ignored listing per whitelist
Jan  9 12:24:10.407: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-jnm9f deletion completed in 44.271431955s

• [SLOW TEST:80.033 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
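(The spec above execs `cat /etc/hosts` in each container to verify when the kubelet injects its managed hosts file: it does so unless the pod runs with hostNetwork=true or the container mounts its own file over /etc/hosts. A simplified sketch of that rule — not the kubelet's actual implementation — follows.)

```go
package main

import "fmt"

// kubeletManagesEtcHosts captures the rule the test exercises: the
// kubelet provides a managed /etc/hosts except for host-network pods
// and containers that mount their own file at /etc/hosts.
func kubeletManagesEtcHosts(hostNetwork bool, volumeMountPaths []string) bool {
	if hostNetwork {
		return false
	}
	for _, p := range volumeMountPaths {
		if p == "/etc/hosts" {
			return false
		}
	}
	return true
}

func main() {
	// The three cases checked in the log above:
	fmt.Println(kubeletManagesEtcHosts(false, nil))                    // busybox-1/2: managed
	fmt.Println(kubeletManagesEtcHosts(false, []string{"/etc/hosts"})) // busybox-3: not managed
	fmt.Println(kubeletManagesEtcHosts(true, nil))                     // host-network pod: not managed
}
```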
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:24:10.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan  9 12:24:10.696: INFO: Waiting up to 5m0s for pod "client-containers-f0772b3a-32da-11ea-ac2d-0242ac110005" in namespace "e2e-tests-containers-b4nmz" to be "success or failure"
Jan  9 12:24:10.714: INFO: Pod "client-containers-f0772b3a-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.155695ms
Jan  9 12:24:12.725: INFO: Pod "client-containers-f0772b3a-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028248055s
Jan  9 12:24:14.828: INFO: Pod "client-containers-f0772b3a-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131637781s
Jan  9 12:24:17.009: INFO: Pod "client-containers-f0772b3a-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.312351519s
Jan  9 12:24:19.032: INFO: Pod "client-containers-f0772b3a-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.335466778s
Jan  9 12:24:21.051: INFO: Pod "client-containers-f0772b3a-32da-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.354159492s
STEP: Saw pod success
Jan  9 12:24:21.051: INFO: Pod "client-containers-f0772b3a-32da-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:24:21.067: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-f0772b3a-32da-11ea-ac2d-0242ac110005 container test-container: 
STEP: delete the pod
Jan  9 12:24:21.956: INFO: Waiting for pod client-containers-f0772b3a-32da-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:24:21.991: INFO: Pod client-containers-f0772b3a-32da-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:24:21.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-b4nmz" for this suite.
Jan  9 12:24:28.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:24:28.274: INFO: namespace: e2e-tests-containers-b4nmz, resource: bindings, ignored listing per whitelist
Jan  9 12:24:28.377: INFO: namespace e2e-tests-containers-b4nmz deletion completed in 6.364231619s

• [SLOW TEST:17.970 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:24:28.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  9 12:24:28.653: INFO: Waiting up to 5m0s for pod "downward-api-fb298806-32da-11ea-ac2d-0242ac110005" in namespace "e2e-tests-downward-api-27zgk" to be "success or failure"
Jan  9 12:24:28.670: INFO: Pod "downward-api-fb298806-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.889373ms
Jan  9 12:24:30.714: INFO: Pod "downward-api-fb298806-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061442648s
Jan  9 12:24:32.722: INFO: Pod "downward-api-fb298806-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069670267s
Jan  9 12:24:34.777: INFO: Pod "downward-api-fb298806-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123948304s
Jan  9 12:24:36.792: INFO: Pod "downward-api-fb298806-32da-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138887307s
Jan  9 12:24:38.806: INFO: Pod "downward-api-fb298806-32da-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.152931427s
STEP: Saw pod success
Jan  9 12:24:38.806: INFO: Pod "downward-api-fb298806-32da-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:24:38.809: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-fb298806-32da-11ea-ac2d-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan  9 12:24:38.870: INFO: Waiting for pod downward-api-fb298806-32da-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:24:38.885: INFO: Pod downward-api-fb298806-32da-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:24:38.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-27zgk" for this suite.
Jan  9 12:24:45.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:24:45.150: INFO: namespace: e2e-tests-downward-api-27zgk, resource: bindings, ignored listing per whitelist
Jan  9 12:24:45.256: INFO: namespace e2e-tests-downward-api-27zgk deletion completed in 6.359187322s

• [SLOW TEST:16.879 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:24:45.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan  9 12:24:45.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7krbj'
Jan  9 12:24:45.979: INFO: stderr: ""
Jan  9 12:24:45.979: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan  9 12:24:47.873: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:24:47.873: INFO: Found 0 / 1
Jan  9 12:24:47.996: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:24:47.996: INFO: Found 0 / 1
Jan  9 12:24:49.000: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:24:49.000: INFO: Found 0 / 1
Jan  9 12:24:49.991: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:24:49.991: INFO: Found 0 / 1
Jan  9 12:24:51.918: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:24:51.918: INFO: Found 0 / 1
Jan  9 12:24:52.222: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:24:52.222: INFO: Found 0 / 1
Jan  9 12:24:52.993: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:24:52.993: INFO: Found 0 / 1
Jan  9 12:24:54.031: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:24:54.031: INFO: Found 0 / 1
Jan  9 12:24:55.022: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:24:55.022: INFO: Found 0 / 1
Jan  9 12:24:56.004: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:24:56.004: INFO: Found 1 / 1
Jan  9 12:24:56.004: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  9 12:24:56.015: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:24:56.015: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for a matching strings
Jan  9 12:24:56.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qckrm redis-master --namespace=e2e-tests-kubectl-7krbj'
Jan  9 12:24:56.256: INFO: stderr: ""
Jan  9 12:24:56.256: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 09 Jan 12:24:53.775 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Jan 12:24:53.775 # Server started, Redis version 3.2.12\n1:M 09 Jan 12:24:53.775 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Jan 12:24:53.775 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan  9 12:24:56.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qckrm redis-master --namespace=e2e-tests-kubectl-7krbj --tail=1'
Jan  9 12:24:56.439: INFO: stderr: ""
Jan  9 12:24:56.439: INFO: stdout: "1:M 09 Jan 12:24:53.775 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan  9 12:24:56.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qckrm redis-master --namespace=e2e-tests-kubectl-7krbj --limit-bytes=1'
Jan  9 12:24:56.607: INFO: stderr: ""
Jan  9 12:24:56.608: INFO: stdout: " "
STEP: exposing timestamps
Jan  9 12:24:56.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qckrm redis-master --namespace=e2e-tests-kubectl-7krbj --tail=1 --timestamps'
Jan  9 12:24:56.781: INFO: stderr: ""
Jan  9 12:24:56.781: INFO: stdout: "2020-01-09T12:24:53.779568577Z 1:M 09 Jan 12:24:53.775 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan  9 12:24:59.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qckrm redis-master --namespace=e2e-tests-kubectl-7krbj --since=1s'
Jan  9 12:24:59.527: INFO: stderr: ""
Jan  9 12:24:59.527: INFO: stdout: ""
Jan  9 12:24:59.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qckrm redis-master --namespace=e2e-tests-kubectl-7krbj --since=24h'
Jan  9 12:24:59.718: INFO: stderr: ""
Jan  9 12:24:59.718: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 09 Jan 12:24:53.775 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Jan 12:24:53.775 # Server started, Redis version 3.2.12\n1:M 09 Jan 12:24:53.775 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Jan 12:24:53.775 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan  9 12:24:59.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7krbj'
Jan  9 12:24:59.854: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  9 12:24:59.854: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan  9 12:24:59.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-7krbj'
Jan  9 12:25:00.004: INFO: stderr: "No resources found.\n"
Jan  9 12:25:00.004: INFO: stdout: ""
Jan  9 12:25:00.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-7krbj -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  9 12:25:00.171: INFO: stderr: ""
Jan  9 12:25:00.171: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:25:00.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7krbj" for this suite.
Jan  9 12:25:06.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:25:06.380: INFO: namespace: e2e-tests-kubectl-7krbj, resource: bindings, ignored listing per whitelist
Jan  9 12:25:06.464: INFO: namespace e2e-tests-kubectl-7krbj deletion completed in 6.260598241s

• [SLOW TEST:21.207 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:25:06.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan  9 12:25:06.642: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:25:06.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-skxh2" for this suite.
Jan  9 12:25:12.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:25:13.065: INFO: namespace: e2e-tests-kubectl-skxh2, resource: bindings, ignored listing per whitelist
Jan  9 12:25:13.103: INFO: namespace e2e-tests-kubectl-skxh2 deletion completed in 6.319363059s

• [SLOW TEST:6.638 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:25:13.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  9 12:25:13.420: INFO: Waiting up to 5m0s for pod "pod-15d7bbf7-32db-11ea-ac2d-0242ac110005" in namespace "e2e-tests-emptydir-qmdf8" to be "success or failure"
Jan  9 12:25:13.542: INFO: Pod "pod-15d7bbf7-32db-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 121.907799ms
Jan  9 12:25:15.555: INFO: Pod "pod-15d7bbf7-32db-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134777712s
Jan  9 12:25:17.569: INFO: Pod "pod-15d7bbf7-32db-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148539984s
Jan  9 12:25:19.584: INFO: Pod "pod-15d7bbf7-32db-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16327296s
Jan  9 12:25:21.616: INFO: Pod "pod-15d7bbf7-32db-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.195245079s
Jan  9 12:25:23.652: INFO: Pod "pod-15d7bbf7-32db-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.231922111s
STEP: Saw pod success
Jan  9 12:25:23.653: INFO: Pod "pod-15d7bbf7-32db-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:25:23.670: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-15d7bbf7-32db-11ea-ac2d-0242ac110005 container test-container: 
STEP: delete the pod
Jan  9 12:25:23.862: INFO: Waiting for pod pod-15d7bbf7-32db-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:25:23.944: INFO: Pod pod-15d7bbf7-32db-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:25:23.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qmdf8" for this suite.
Jan  9 12:25:32.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:25:32.250: INFO: namespace: e2e-tests-emptydir-qmdf8, resource: bindings, ignored listing per whitelist
Jan  9 12:25:32.344: INFO: namespace e2e-tests-emptydir-qmdf8 deletion completed in 8.37241334s

• [SLOW TEST:19.241 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:25:32.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:25:44.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-dxwg9" for this suite.
Jan  9 12:25:50.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:25:51.036: INFO: namespace: e2e-tests-kubelet-test-dxwg9, resource: bindings, ignored listing per whitelist
Jan  9 12:25:51.036: INFO: namespace e2e-tests-kubelet-test-dxwg9 deletion completed in 6.243342964s

• [SLOW TEST:18.691 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:25:51.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  9 12:25:51.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:26:01.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gvxkg" for this suite.
Jan  9 12:26:45.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:26:45.780: INFO: namespace: e2e-tests-pods-gvxkg, resource: bindings, ignored listing per whitelist
Jan  9 12:26:45.811: INFO: namespace e2e-tests-pods-gvxkg deletion completed in 44.33832777s

• [SLOW TEST:54.775 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:26:45.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  9 12:26:46.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b6tv7,SelfLink:/api/v1/namespaces/e2e-tests-watch-b6tv7/configmaps/e2e-watch-test-configmap-a,UID:4d3d28c8-32db-11ea-a994-fa163e34d433,ResourceVersion:17699866,Generation:0,CreationTimestamp:2020-01-09 12:26:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  9 12:26:46.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b6tv7,SelfLink:/api/v1/namespaces/e2e-tests-watch-b6tv7/configmaps/e2e-watch-test-configmap-a,UID:4d3d28c8-32db-11ea-a994-fa163e34d433,ResourceVersion:17699866,Generation:0,CreationTimestamp:2020-01-09 12:26:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  9 12:26:56.382: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b6tv7,SelfLink:/api/v1/namespaces/e2e-tests-watch-b6tv7/configmaps/e2e-watch-test-configmap-a,UID:4d3d28c8-32db-11ea-a994-fa163e34d433,ResourceVersion:17699878,Generation:0,CreationTimestamp:2020-01-09 12:26:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  9 12:26:56.382: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b6tv7,SelfLink:/api/v1/namespaces/e2e-tests-watch-b6tv7/configmaps/e2e-watch-test-configmap-a,UID:4d3d28c8-32db-11ea-a994-fa163e34d433,ResourceVersion:17699878,Generation:0,CreationTimestamp:2020-01-09 12:26:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  9 12:27:06.409: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b6tv7,SelfLink:/api/v1/namespaces/e2e-tests-watch-b6tv7/configmaps/e2e-watch-test-configmap-a,UID:4d3d28c8-32db-11ea-a994-fa163e34d433,ResourceVersion:17699891,Generation:0,CreationTimestamp:2020-01-09 12:26:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  9 12:27:06.409: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b6tv7,SelfLink:/api/v1/namespaces/e2e-tests-watch-b6tv7/configmaps/e2e-watch-test-configmap-a,UID:4d3d28c8-32db-11ea-a994-fa163e34d433,ResourceVersion:17699891,Generation:0,CreationTimestamp:2020-01-09 12:26:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  9 12:27:16.439: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b6tv7,SelfLink:/api/v1/namespaces/e2e-tests-watch-b6tv7/configmaps/e2e-watch-test-configmap-a,UID:4d3d28c8-32db-11ea-a994-fa163e34d433,ResourceVersion:17699904,Generation:0,CreationTimestamp:2020-01-09 12:26:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  9 12:27:16.439: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b6tv7,SelfLink:/api/v1/namespaces/e2e-tests-watch-b6tv7/configmaps/e2e-watch-test-configmap-a,UID:4d3d28c8-32db-11ea-a994-fa163e34d433,ResourceVersion:17699904,Generation:0,CreationTimestamp:2020-01-09 12:26:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  9 12:27:26.486: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-b6tv7,SelfLink:/api/v1/namespaces/e2e-tests-watch-b6tv7/configmaps/e2e-watch-test-configmap-b,UID:6527457c-32db-11ea-a994-fa163e34d433,ResourceVersion:17699917,Generation:0,CreationTimestamp:2020-01-09 12:27:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  9 12:27:26.487: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-b6tv7,SelfLink:/api/v1/namespaces/e2e-tests-watch-b6tv7/configmaps/e2e-watch-test-configmap-b,UID:6527457c-32db-11ea-a994-fa163e34d433,ResourceVersion:17699917,Generation:0,CreationTimestamp:2020-01-09 12:27:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  9 12:27:36.552: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-b6tv7,SelfLink:/api/v1/namespaces/e2e-tests-watch-b6tv7/configmaps/e2e-watch-test-configmap-b,UID:6527457c-32db-11ea-a994-fa163e34d433,ResourceVersion:17699930,Generation:0,CreationTimestamp:2020-01-09 12:27:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  9 12:27:36.553: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-b6tv7,SelfLink:/api/v1/namespaces/e2e-tests-watch-b6tv7/configmaps/e2e-watch-test-configmap-b,UID:6527457c-32db-11ea-a994-fa163e34d433,ResourceVersion:17699930,Generation:0,CreationTimestamp:2020-01-09 12:27:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:27:46.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-b6tv7" for this suite.
Jan  9 12:27:52.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:27:52.660: INFO: namespace: e2e-tests-watch-b6tv7, resource: bindings, ignored listing per whitelist
Jan  9 12:27:52.723: INFO: namespace e2e-tests-watch-b6tv7 deletion completed in 6.154771955s

• [SLOW TEST:66.912 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
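Editor's note on the watch spec above: each watcher only observes configmaps whose labels match its equality-based selector, which is why configmap A's events (label `watch-this-configmap: multiple-watchers-A`) and configmap B's events arrive at different watchers. A minimal sketch of that matching rule, using the labels from the log (a hypothetical helper, not the e2e framework's code):

```python
def matches_selector(labels, selector):
    """Equality-based label selection: every key/value pair in the
    selector must be present, with the same value, in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())


# Labels taken verbatim from the configmap dumps above.
configmap_a = {"watch-this-configmap": "multiple-watchers-A"}
configmap_b = {"watch-this-configmap": "multiple-watchers-B"}
watcher_a = {"watch-this-configmap": "multiple-watchers-A"}

print(matches_selector(configmap_a, watcher_a))  # watcher A sees configmap A
print(matches_selector(configmap_b, watcher_a))  # but not configmap B
```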
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:27:52.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-74f0f1c5-32db-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  9 12:27:53.128: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-750b35d2-32db-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-54bxf" to be "success or failure"
Jan  9 12:27:53.164: INFO: Pod "pod-projected-secrets-750b35d2-32db-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.338292ms
Jan  9 12:27:55.232: INFO: Pod "pod-projected-secrets-750b35d2-32db-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104373507s
Jan  9 12:27:57.250: INFO: Pod "pod-projected-secrets-750b35d2-32db-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122397065s
Jan  9 12:27:59.521: INFO: Pod "pod-projected-secrets-750b35d2-32db-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392950769s
Jan  9 12:28:01.547: INFO: Pod "pod-projected-secrets-750b35d2-32db-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.418926417s
Jan  9 12:28:03.559: INFO: Pod "pod-projected-secrets-750b35d2-32db-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.431306875s
STEP: Saw pod success
Jan  9 12:28:03.559: INFO: Pod "pod-projected-secrets-750b35d2-32db-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:28:03.564: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-750b35d2-32db-11ea-ac2d-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  9 12:28:04.186: INFO: Waiting for pod pod-projected-secrets-750b35d2-32db-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:28:04.214: INFO: Pod pod-projected-secrets-750b35d2-32db-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:28:04.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-54bxf" for this suite.
Jan  9 12:28:10.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:28:10.684: INFO: namespace: e2e-tests-projected-54bxf, resource: bindings, ignored listing per whitelist
Jan  9 12:28:10.850: INFO: namespace e2e-tests-projected-54bxf deletion completed in 6.361223686s

• [SLOW TEST:18.126 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
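Editor's note: the "success or failure" waits in the spec above poll the pod phase roughly every 2 seconds for up to 5 minutes, logging the elapsed time on each attempt. The generic poll-until pattern behind those lines can be sketched as follows (a stand-in helper, not the framework's actual wait function):

```python
import time


def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds elapse; returns whether it succeeded."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False


# Simulated pod that reports Pending a few times, then Succeeded,
# like pod-projected-secrets-... in the log above.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
print(wait_for(lambda: next(phases) == "Succeeded", timeout=5.0, interval=0.01))
```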
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:28:10.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-pqbng
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan  9 12:28:11.135: INFO: Found 0 stateful pods, waiting for 3
Jan  9 12:28:21.149: INFO: Found 2 stateful pods, waiting for 3
Jan  9 12:28:31.147: INFO: Found 2 stateful pods, waiting for 3
Jan  9 12:28:41.246: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  9 12:28:41.246: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  9 12:28:41.246: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Jan  9 12:28:51.153: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  9 12:28:51.153: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  9 12:28:51.153: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  9 12:28:51.211: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  9 12:29:01.306: INFO: Updating stateful set ss2
Jan  9 12:29:01.408: INFO: Waiting for Pod e2e-tests-statefulset-pqbng/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan  9 12:29:12.197: INFO: Found 2 stateful pods, waiting for 3
Jan  9 12:29:22.221: INFO: Found 2 stateful pods, waiting for 3
Jan  9 12:29:32.987: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  9 12:29:32.987: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  9 12:29:32.987: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  9 12:29:42.212: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  9 12:29:42.212: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  9 12:29:42.212: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  9 12:29:42.253: INFO: Updating stateful set ss2
Jan  9 12:29:42.293: INFO: Waiting for Pod e2e-tests-statefulset-pqbng/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  9 12:29:52.364: INFO: Updating stateful set ss2
Jan  9 12:29:52.390: INFO: Waiting for StatefulSet e2e-tests-statefulset-pqbng/ss2 to complete update
Jan  9 12:29:52.391: INFO: Waiting for Pod e2e-tests-statefulset-pqbng/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  9 12:30:02.410: INFO: Waiting for StatefulSet e2e-tests-statefulset-pqbng/ss2 to complete update
Jan  9 12:30:02.410: INFO: Waiting for Pod e2e-tests-statefulset-pqbng/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  9 12:30:12.413: INFO: Waiting for StatefulSet e2e-tests-statefulset-pqbng/ss2 to complete update
Jan  9 12:30:12.413: INFO: Waiting for Pod e2e-tests-statefulset-pqbng/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  9 12:30:22.409: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pqbng
Jan  9 12:30:22.412: INFO: Scaling statefulset ss2 to 0
Jan  9 12:30:52.449: INFO: Waiting for statefulset status.replicas updated to 0
Jan  9 12:30:52.460: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:30:52.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-pqbng" for this suite.
Jan  9 12:31:00.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:31:01.006: INFO: namespace: e2e-tests-statefulset-pqbng, resource: bindings, ignored listing per whitelist
Jan  9 12:31:01.056: INFO: namespace e2e-tests-statefulset-pqbng deletion completed in 8.32132685s

• [SLOW TEST:170.205 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
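Editor's note: the canary and phased steps in the StatefulSet spec above are driven by `spec.updateStrategy.rollingUpdate.partition` — pods with an ordinal greater than or equal to the partition get the update revision, while lower ordinals keep the current one. A sketch of that selection rule, using the revision hashes from the log (an illustrative function, not the controller's code):

```python
def revision_for_pod(ordinal, partition, current_rev, update_rev):
    """StatefulSet RollingUpdate partition rule: ordinals >= partition
    move to the update revision; ordinals < partition stay put."""
    return update_rev if ordinal >= partition else current_rev


CURRENT, UPDATE = "ss2-6c5cd755cd", "ss2-7c9b54fd4c"

# Canary: partition=2 on a 3-replica set updates only ss2-2.
print([revision_for_pod(i, 2, CURRENT, UPDATE) for i in range(3)])

# Phased roll-out: lowering the partition to 0 updates ss2-1, then ss2-0.
print([revision_for_pod(i, 0, CURRENT, UPDATE) for i in range(3)])
```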
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:31:01.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  9 12:31:01.380: INFO: Waiting up to 5m0s for pod "pod-e53dbc88-32db-11ea-ac2d-0242ac110005" in namespace "e2e-tests-emptydir-brvtm" to be "success or failure"
Jan  9 12:31:01.401: INFO: Pod "pod-e53dbc88-32db-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.835374ms
Jan  9 12:31:03.709: INFO: Pod "pod-e53dbc88-32db-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328729301s
Jan  9 12:31:05.724: INFO: Pod "pod-e53dbc88-32db-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343158086s
Jan  9 12:31:07.734: INFO: Pod "pod-e53dbc88-32db-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.35408986s
Jan  9 12:31:09.747: INFO: Pod "pod-e53dbc88-32db-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.366295605s
Jan  9 12:31:12.093: INFO: Pod "pod-e53dbc88-32db-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.712474071s
STEP: Saw pod success
Jan  9 12:31:12.093: INFO: Pod "pod-e53dbc88-32db-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:31:12.101: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e53dbc88-32db-11ea-ac2d-0242ac110005 container test-container: 
STEP: delete the pod
Jan  9 12:31:12.708: INFO: Waiting for pod pod-e53dbc88-32db-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:31:12.726: INFO: Pod pod-e53dbc88-32db-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:31:12.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-brvtm" for this suite.
Jan  9 12:31:18.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:31:18.991: INFO: namespace: e2e-tests-emptydir-brvtm, resource: bindings, ignored listing per whitelist
Jan  9 12:31:18.991: INFO: namespace e2e-tests-emptydir-brvtm deletion completed in 6.254060067s

• [SLOW TEST:17.934 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
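Editor's note: the (root,0644,default) spec above writes a file into an emptyDir volume and asserts its permission bits. The same mode check can be sketched locally against a temporary file (a stand-in for the test container's command, not the test itself):

```python
import os
import stat
import tempfile

# Create a file and give it the 0644 mode the test expects (rw-r--r--).
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o644)

# S_IMODE strips the file-type bits, leaving only the permission bits.
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o644

os.unlink(path)
```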
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:31:18.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-efddcbd6-32db-11ea-ac2d-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-efddcc8c-32db-11ea-ac2d-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-efddcbd6-32db-11ea-ac2d-0242ac110005
STEP: Updating configmap cm-test-opt-upd-efddcc8c-32db-11ea-ac2d-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-efddccba-32db-11ea-ac2d-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:33:02.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-sgmr8" for this suite.
Jan  9 12:33:26.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:33:26.375: INFO: namespace: e2e-tests-configmap-sgmr8, resource: bindings, ignored listing per whitelist
Jan  9 12:33:26.378: INFO: namespace e2e-tests-configmap-sgmr8 deletion completed in 24.219950004s

• [SLOW TEST:127.387 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:33:26.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  9 12:33:26.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-s42ff'
Jan  9 12:33:28.646: INFO: stderr: ""
Jan  9 12:33:28.646: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan  9 12:33:28.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-s42ff'
Jan  9 12:33:35.116: INFO: stderr: ""
Jan  9 12:33:35.116: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:33:35.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-s42ff" for this suite.
Jan  9 12:33:41.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:33:41.321: INFO: namespace: e2e-tests-kubectl-s42ff, resource: bindings, ignored listing per whitelist
Jan  9 12:33:41.408: INFO: namespace e2e-tests-kubectl-s42ff deletion completed in 6.282605515s

• [SLOW TEST:15.029 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:33:41.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  9 12:33:41.690: INFO: Waiting up to 5m0s for pod "pod-44caf4a3-32dc-11ea-ac2d-0242ac110005" in namespace "e2e-tests-emptydir-p7xw7" to be "success or failure"
Jan  9 12:33:41.702: INFO: Pod "pod-44caf4a3-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.568893ms
Jan  9 12:33:43.751: INFO: Pod "pod-44caf4a3-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061015311s
Jan  9 12:33:45.878: INFO: Pod "pod-44caf4a3-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188027029s
Jan  9 12:33:48.269: INFO: Pod "pod-44caf4a3-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579240187s
Jan  9 12:33:50.283: INFO: Pod "pod-44caf4a3-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.592926147s
Jan  9 12:33:52.296: INFO: Pod "pod-44caf4a3-32dc-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.606339567s
STEP: Saw pod success
Jan  9 12:33:52.296: INFO: Pod "pod-44caf4a3-32dc-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:33:52.347: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-44caf4a3-32dc-11ea-ac2d-0242ac110005 container test-container: 
STEP: delete the pod
Jan  9 12:33:52.652: INFO: Waiting for pod pod-44caf4a3-32dc-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:33:52.755: INFO: Pod pod-44caf4a3-32dc-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:33:52.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-p7xw7" for this suite.
Jan  9 12:33:58.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:33:58.908: INFO: namespace: e2e-tests-emptydir-p7xw7, resource: bindings, ignored listing per whitelist
Jan  9 12:33:58.954: INFO: namespace e2e-tests-emptydir-p7xw7 deletion completed in 6.186556767s

• [SLOW TEST:17.546 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:33:58.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  9 12:33:59.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan  9 12:33:59.343: INFO: stderr: ""
Jan  9 12:33:59.343: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:33:59.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bzqc8" for this suite.
Jan  9 12:34:05.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:34:05.697: INFO: namespace: e2e-tests-kubectl-bzqc8, resource: bindings, ignored listing per whitelist
Jan  9 12:34:05.706: INFO: namespace e2e-tests-kubectl-bzqc8 deletion completed in 6.330756917s

• [SLOW TEST:6.752 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
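Editor's note: the `kubectl version` spec above asserts that both the client and server stanzas appear in stdout. Pulling the `GitVersion` fields out of that exact output can be sketched with a regex (illustrative parsing only, not what the test does internally):

```python
import re

# Abbreviated from the stdout captured in the log above.
stdout = (
    'Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.12"}\n'
    'Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.8"}\n'
)

# One GitVersion per stanza: client first, then server.
versions = re.findall(r'GitVersion:"(v[^"]+)"', stdout)
print(versions)  # ['v1.13.12', 'v1.13.8']
```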
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:34:05.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  9 12:34:06.007: INFO: Waiting up to 5m0s for pod "downwardapi-volume-534458fe-32dc-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-xcvx4" to be "success or failure"
Jan  9 12:34:06.017: INFO: Pod "downwardapi-volume-534458fe-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.417726ms
Jan  9 12:34:08.034: INFO: Pod "downwardapi-volume-534458fe-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02743785s
Jan  9 12:34:10.070: INFO: Pod "downwardapi-volume-534458fe-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062875484s
Jan  9 12:34:12.095: INFO: Pod "downwardapi-volume-534458fe-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088494787s
Jan  9 12:34:14.106: INFO: Pod "downwardapi-volume-534458fe-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099149973s
Jan  9 12:34:16.118: INFO: Pod "downwardapi-volume-534458fe-32dc-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111022986s
STEP: Saw pod success
Jan  9 12:34:16.118: INFO: Pod "downwardapi-volume-534458fe-32dc-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:34:16.122: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-534458fe-32dc-11ea-ac2d-0242ac110005 container client-container: 
STEP: delete the pod
Jan  9 12:34:17.388: INFO: Waiting for pod downwardapi-volume-534458fe-32dc-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:34:17.406: INFO: Pod downwardapi-volume-534458fe-32dc-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:34:17.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xcvx4" for this suite.
Jan  9 12:34:23.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:34:23.952: INFO: namespace: e2e-tests-projected-xcvx4, resource: bindings, ignored listing per whitelist
Jan  9 12:34:24.001: INFO: namespace e2e-tests-projected-xcvx4 deletion completed in 6.585787589s

• [SLOW TEST:18.294 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:34:24.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  9 12:34:24.281: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan  9 12:34:24.369: INFO: Number of nodes with available pods: 0
Jan  9 12:34:24.369: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:25.382: INFO: Number of nodes with available pods: 0
Jan  9 12:34:25.382: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:26.392: INFO: Number of nodes with available pods: 0
Jan  9 12:34:26.392: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:27.552: INFO: Number of nodes with available pods: 0
Jan  9 12:34:27.553: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:28.696: INFO: Number of nodes with available pods: 0
Jan  9 12:34:28.696: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:29.400: INFO: Number of nodes with available pods: 0
Jan  9 12:34:29.400: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:30.384: INFO: Number of nodes with available pods: 0
Jan  9 12:34:30.384: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:31.627: INFO: Number of nodes with available pods: 0
Jan  9 12:34:31.627: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:32.499: INFO: Number of nodes with available pods: 0
Jan  9 12:34:32.499: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:33.390: INFO: Number of nodes with available pods: 1
Jan  9 12:34:33.390: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan  9 12:34:33.461: INFO: Wrong image for pod: daemon-set-8bcsn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  9 12:34:34.502: INFO: Wrong image for pod: daemon-set-8bcsn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  9 12:34:35.496: INFO: Wrong image for pod: daemon-set-8bcsn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  9 12:34:36.521: INFO: Wrong image for pod: daemon-set-8bcsn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  9 12:34:37.494: INFO: Wrong image for pod: daemon-set-8bcsn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  9 12:34:38.516: INFO: Wrong image for pod: daemon-set-8bcsn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  9 12:34:39.499: INFO: Wrong image for pod: daemon-set-8bcsn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  9 12:34:39.499: INFO: Pod daemon-set-8bcsn is not available
Jan  9 12:34:40.504: INFO: Wrong image for pod: daemon-set-8bcsn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  9 12:34:40.504: INFO: Pod daemon-set-8bcsn is not available
Jan  9 12:34:41.495: INFO: Wrong image for pod: daemon-set-8bcsn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  9 12:34:41.495: INFO: Pod daemon-set-8bcsn is not available
Jan  9 12:34:42.497: INFO: Wrong image for pod: daemon-set-8bcsn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  9 12:34:42.497: INFO: Pod daemon-set-8bcsn is not available
Jan  9 12:34:43.495: INFO: Pod daemon-set-5qxcw is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan  9 12:34:43.514: INFO: Number of nodes with available pods: 0
Jan  9 12:34:43.514: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:44.579: INFO: Number of nodes with available pods: 0
Jan  9 12:34:44.579: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:45.536: INFO: Number of nodes with available pods: 0
Jan  9 12:34:45.536: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:46.541: INFO: Number of nodes with available pods: 0
Jan  9 12:34:46.541: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:48.064: INFO: Number of nodes with available pods: 0
Jan  9 12:34:48.064: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:48.546: INFO: Number of nodes with available pods: 0
Jan  9 12:34:48.547: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:49.599: INFO: Number of nodes with available pods: 0
Jan  9 12:34:49.599: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  9 12:34:50.547: INFO: Number of nodes with available pods: 1
Jan  9 12:34:50.547: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-8w7dm, will wait for the garbage collector to delete the pods
Jan  9 12:34:50.652: INFO: Deleting DaemonSet.extensions daemon-set took: 23.185006ms
Jan  9 12:34:50.752: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.58489ms
Jan  9 12:34:59.062: INFO: Number of nodes with available pods: 0
Jan  9 12:34:59.062: INFO: Number of running nodes: 0, number of available pods: 0
Jan  9 12:34:59.073: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-8w7dm/daemonsets","resourceVersion":"17700931"},"items":null}

Jan  9 12:34:59.078: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-8w7dm/pods","resourceVersion":"17700931"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:34:59.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-8w7dm" for this suite.
Jan  9 12:35:07.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:35:07.274: INFO: namespace: e2e-tests-daemonsets-8w7dm, resource: bindings, ignored listing per whitelist
Jan  9 12:35:07.464: INFO: namespace e2e-tests-daemonsets-8w7dm deletion completed in 8.363685701s

• [SLOW TEST:43.461 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:35:07.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  9 12:35:07.707: INFO: Waiting up to 5m0s for pod "pod-781118de-32dc-11ea-ac2d-0242ac110005" in namespace "e2e-tests-emptydir-xrmkq" to be "success or failure"
Jan  9 12:35:07.936: INFO: Pod "pod-781118de-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 228.627622ms
Jan  9 12:35:09.961: INFO: Pod "pod-781118de-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253595534s
Jan  9 12:35:11.973: INFO: Pod "pod-781118de-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.266054129s
Jan  9 12:35:14.166: INFO: Pod "pod-781118de-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.45875791s
Jan  9 12:35:16.405: INFO: Pod "pod-781118de-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.698280409s
Jan  9 12:35:18.514: INFO: Pod "pod-781118de-32dc-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.806963619s
STEP: Saw pod success
Jan  9 12:35:18.514: INFO: Pod "pod-781118de-32dc-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:35:18.527: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-781118de-32dc-11ea-ac2d-0242ac110005 container test-container: 
STEP: delete the pod
Jan  9 12:35:18.947: INFO: Waiting for pod pod-781118de-32dc-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:35:18.981: INFO: Pod pod-781118de-32dc-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:35:18.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xrmkq" for this suite.
Jan  9 12:35:25.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:35:25.221: INFO: namespace: e2e-tests-emptydir-xrmkq, resource: bindings, ignored listing per whitelist
Jan  9 12:35:25.433: INFO: namespace e2e-tests-emptydir-xrmkq deletion completed in 6.436427651s

• [SLOW TEST:17.969 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:35:25.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan  9 12:35:25.582: INFO: namespace e2e-tests-kubectl-cdb9j
Jan  9 12:35:25.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cdb9j'
Jan  9 12:35:25.950: INFO: stderr: ""
Jan  9 12:35:25.950: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  9 12:35:26.972: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:35:26.972: INFO: Found 0 / 1
Jan  9 12:35:27.978: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:35:27.979: INFO: Found 0 / 1
Jan  9 12:35:28.976: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:35:28.976: INFO: Found 0 / 1
Jan  9 12:35:29.987: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:35:29.987: INFO: Found 0 / 1
Jan  9 12:35:31.321: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:35:31.322: INFO: Found 0 / 1
Jan  9 12:35:32.062: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:35:32.063: INFO: Found 0 / 1
Jan  9 12:35:32.975: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:35:32.975: INFO: Found 0 / 1
Jan  9 12:35:33.991: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:35:33.991: INFO: Found 0 / 1
Jan  9 12:35:34.978: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:35:34.978: INFO: Found 0 / 1
Jan  9 12:35:35.972: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:35:35.972: INFO: Found 1 / 1
Jan  9 12:35:35.972: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  9 12:35:35.982: INFO: Selector matched 1 pods for map[app:redis]
Jan  9 12:35:35.982: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  9 12:35:35.982: INFO: wait on redis-master startup in e2e-tests-kubectl-cdb9j 
Jan  9 12:35:35.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rhtrd redis-master --namespace=e2e-tests-kubectl-cdb9j'
Jan  9 12:35:36.155: INFO: stderr: ""
Jan  9 12:35:36.155: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 09 Jan 12:35:34.113 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Jan 12:35:34.113 # Server started, Redis version 3.2.12\n1:M 09 Jan 12:35:34.114 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Jan 12:35:34.114 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan  9 12:35:36.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-cdb9j'
Jan  9 12:35:36.544: INFO: stderr: ""
Jan  9 12:35:36.544: INFO: stdout: "service/rm2 exposed\n"
Jan  9 12:35:36.569: INFO: Service rm2 in namespace e2e-tests-kubectl-cdb9j found.
STEP: exposing service
Jan  9 12:35:38.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-cdb9j'
Jan  9 12:35:39.040: INFO: stderr: ""
Jan  9 12:35:39.040: INFO: stdout: "service/rm3 exposed\n"
Jan  9 12:35:39.147: INFO: Service rm3 in namespace e2e-tests-kubectl-cdb9j found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:35:41.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cdb9j" for this suite.
Jan  9 12:36:07.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:36:07.470: INFO: namespace: e2e-tests-kubectl-cdb9j, resource: bindings, ignored listing per whitelist
Jan  9 12:36:07.495: INFO: namespace e2e-tests-kubectl-cdb9j deletion completed in 26.307064007s

• [SLOW TEST:42.062 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:36:07.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-9bd429de-32dc-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  9 12:36:07.715: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9bd50f1c-32dc-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-4w7jq" to be "success or failure"
Jan  9 12:36:07.726: INFO: Pod "pod-projected-secrets-9bd50f1c-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.877169ms
Jan  9 12:36:09.979: INFO: Pod "pod-projected-secrets-9bd50f1c-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263401806s
Jan  9 12:36:12.504: INFO: Pod "pod-projected-secrets-9bd50f1c-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.789048075s
Jan  9 12:36:14.535: INFO: Pod "pod-projected-secrets-9bd50f1c-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.819395034s
Jan  9 12:36:16.575: INFO: Pod "pod-projected-secrets-9bd50f1c-32dc-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.86011739s
STEP: Saw pod success
Jan  9 12:36:16.575: INFO: Pod "pod-projected-secrets-9bd50f1c-32dc-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:36:16.586: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-9bd50f1c-32dc-11ea-ac2d-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan  9 12:36:16.812: INFO: Waiting for pod pod-projected-secrets-9bd50f1c-32dc-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:36:16.834: INFO: Pod pod-projected-secrets-9bd50f1c-32dc-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:36:16.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4w7jq" for this suite.
Jan  9 12:36:22.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:36:22.983: INFO: namespace: e2e-tests-projected-4w7jq, resource: bindings, ignored listing per whitelist
Jan  9 12:36:23.079: INFO: namespace e2e-tests-projected-4w7jq deletion completed in 6.215301655s

• [SLOW TEST:15.584 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:36:23.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  9 12:36:23.293: INFO: Waiting up to 5m0s for pod "pod-a5208385-32dc-11ea-ac2d-0242ac110005" in namespace "e2e-tests-emptydir-j6svw" to be "success or failure"
Jan  9 12:36:23.321: INFO: Pod "pod-a5208385-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.502658ms
Jan  9 12:36:25.372: INFO: Pod "pod-a5208385-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079476139s
Jan  9 12:36:28.092: INFO: Pod "pod-a5208385-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.798889542s
Jan  9 12:36:30.328: INFO: Pod "pod-a5208385-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.035211922s
Jan  9 12:36:32.340: INFO: Pod "pod-a5208385-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.047219942s
Jan  9 12:36:34.350: INFO: Pod "pod-a5208385-32dc-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.057117166s
STEP: Saw pod success
Jan  9 12:36:34.350: INFO: Pod "pod-a5208385-32dc-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:36:34.353: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a5208385-32dc-11ea-ac2d-0242ac110005 container test-container: 
STEP: delete the pod
Jan  9 12:36:34.420: INFO: Waiting for pod pod-a5208385-32dc-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:36:34.495: INFO: Pod pod-a5208385-32dc-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:36:34.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-j6svw" for this suite.
Jan  9 12:36:41.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:36:41.992: INFO: namespace: e2e-tests-emptydir-j6svw, resource: bindings, ignored listing per whitelist
Jan  9 12:36:41.992: INFO: namespace e2e-tests-emptydir-j6svw deletion completed in 6.492570094s

• [SLOW TEST:18.912 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:36:41.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  9 12:36:52.927: INFO: Successfully updated pod "annotationupdateb06c60ac-32dc-11ea-ac2d-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:36:55.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6bsb9" for this suite.
Jan  9 12:37:19.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:37:19.298: INFO: namespace: e2e-tests-projected-6bsb9, resource: bindings, ignored listing per whitelist
Jan  9 12:37:19.420: INFO: namespace e2e-tests-projected-6bsb9 deletion completed in 24.327291664s

• [SLOW TEST:37.427 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:37:19.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  9 12:37:19.628: INFO: Waiting up to 5m0s for pod "pod-c6ad8e99-32dc-11ea-ac2d-0242ac110005" in namespace "e2e-tests-emptydir-tnft8" to be "success or failure"
Jan  9 12:37:19.640: INFO: Pod "pod-c6ad8e99-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.661713ms
Jan  9 12:37:21.648: INFO: Pod "pod-c6ad8e99-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020646693s
Jan  9 12:37:23.665: INFO: Pod "pod-c6ad8e99-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037590377s
Jan  9 12:37:25.684: INFO: Pod "pod-c6ad8e99-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056487796s
Jan  9 12:37:27.897: INFO: Pod "pod-c6ad8e99-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.269105863s
Jan  9 12:37:29.918: INFO: Pod "pod-c6ad8e99-32dc-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.290048997s
STEP: Saw pod success
Jan  9 12:37:29.918: INFO: Pod "pod-c6ad8e99-32dc-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:37:29.925: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c6ad8e99-32dc-11ea-ac2d-0242ac110005 container test-container: 
STEP: delete the pod
Jan  9 12:37:30.116: INFO: Waiting for pod pod-c6ad8e99-32dc-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:37:30.137: INFO: Pod pod-c6ad8e99-32dc-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:37:30.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tnft8" for this suite.
Jan  9 12:37:36.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:37:36.767: INFO: namespace: e2e-tests-emptydir-tnft8, resource: bindings, ignored listing per whitelist
Jan  9 12:37:36.810: INFO: namespace e2e-tests-emptydir-tnft8 deletion completed in 6.66472669s

• [SLOW TEST:17.391 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
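The EmptyDir test above repeatedly logs `Phase="Pending" ... Elapsed: ...` until the pod reaches `Succeeded`, under a 5-minute cap ("Waiting up to 5m0s for pod ... to be 'success or failure'"). As an annotation, not part of the log itself, the polling pattern can be sketched like this; the function name and injectable `clock`/`sleep` parameters are hypothetical conveniences for illustration, not the framework's actual API:

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in `want` or `timeout` elapses.

    Mirrors the loop visible in the log: check the pod phase roughly every
    `interval` seconds, track elapsed time, and stop on a terminal phase.
    Returns the terminal phase, or raises TimeoutError.
    """
    start = clock()
    while True:
        phase = get_phase()          # in the real test: a GET on the Pod object
        elapsed = clock() - start
        if phase in want:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)
```

The real framework additionally distinguishes "success or failure" from plain success; this sketch only captures the poll-with-deadline shape.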
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:37:36.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-d115cc9f-32dc-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  9 12:37:37.057: INFO: Waiting up to 5m0s for pod "pod-secrets-d1172081-32dc-11ea-ac2d-0242ac110005" in namespace "e2e-tests-secrets-7pbmf" to be "success or failure"
Jan  9 12:37:37.079: INFO: Pod "pod-secrets-d1172081-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.151206ms
Jan  9 12:37:39.101: INFO: Pod "pod-secrets-d1172081-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043989837s
Jan  9 12:37:41.128: INFO: Pod "pod-secrets-d1172081-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07116581s
Jan  9 12:37:43.778: INFO: Pod "pod-secrets-d1172081-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.720769095s
Jan  9 12:37:45.797: INFO: Pod "pod-secrets-d1172081-32dc-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.739873451s
Jan  9 12:37:47.812: INFO: Pod "pod-secrets-d1172081-32dc-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.754863628s
STEP: Saw pod success
Jan  9 12:37:47.812: INFO: Pod "pod-secrets-d1172081-32dc-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:37:47.817: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d1172081-32dc-11ea-ac2d-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  9 12:37:48.038: INFO: Waiting for pod pod-secrets-d1172081-32dc-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:37:48.058: INFO: Pod pod-secrets-d1172081-32dc-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:37:48.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7pbmf" for this suite.
Jan  9 12:37:54.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:37:54.114: INFO: namespace: e2e-tests-secrets-7pbmf, resource: bindings, ignored listing per whitelist
Jan  9 12:37:54.194: INFO: namespace e2e-tests-secrets-7pbmf deletion completed in 6.126991906s

• [SLOW TEST:17.383 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:37:54.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan  9 12:37:54.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:37:54.742: INFO: stderr: ""
Jan  9 12:37:54.743: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  9 12:37:54.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:37:54.924: INFO: stderr: ""
Jan  9 12:37:54.924: INFO: stdout: "update-demo-nautilus-d8wvw update-demo-nautilus-dq9bn "
Jan  9 12:37:54.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8wvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:37:55.192: INFO: stderr: ""
Jan  9 12:37:55.192: INFO: stdout: ""
Jan  9 12:37:55.192: INFO: update-demo-nautilus-d8wvw is created but not running
Jan  9 12:38:00.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:00.367: INFO: stderr: ""
Jan  9 12:38:00.367: INFO: stdout: "update-demo-nautilus-d8wvw update-demo-nautilus-dq9bn "
Jan  9 12:38:00.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8wvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:00.465: INFO: stderr: ""
Jan  9 12:38:00.465: INFO: stdout: ""
Jan  9 12:38:00.465: INFO: update-demo-nautilus-d8wvw is created but not running
Jan  9 12:38:05.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:05.640: INFO: stderr: ""
Jan  9 12:38:05.640: INFO: stdout: "update-demo-nautilus-d8wvw update-demo-nautilus-dq9bn "
Jan  9 12:38:05.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8wvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:05.772: INFO: stderr: ""
Jan  9 12:38:05.772: INFO: stdout: "true"
Jan  9 12:38:05.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8wvw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:05.905: INFO: stderr: ""
Jan  9 12:38:05.905: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  9 12:38:05.905: INFO: validating pod update-demo-nautilus-d8wvw
Jan  9 12:38:05.941: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  9 12:38:05.941: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  9 12:38:05.941: INFO: update-demo-nautilus-d8wvw is verified up and running
Jan  9 12:38:05.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dq9bn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:06.058: INFO: stderr: ""
Jan  9 12:38:06.058: INFO: stdout: "true"
Jan  9 12:38:06.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dq9bn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:06.205: INFO: stderr: ""
Jan  9 12:38:06.205: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  9 12:38:06.205: INFO: validating pod update-demo-nautilus-dq9bn
Jan  9 12:38:06.216: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  9 12:38:06.216: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  9 12:38:06.216: INFO: update-demo-nautilus-dq9bn is verified up and running
STEP: scaling down the replication controller
Jan  9 12:38:06.218: INFO: scanned /root for discovery docs: 
Jan  9 12:38:06.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:07.486: INFO: stderr: ""
Jan  9 12:38:07.486: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  9 12:38:07.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:07.686: INFO: stderr: ""
Jan  9 12:38:07.686: INFO: stdout: "update-demo-nautilus-d8wvw update-demo-nautilus-dq9bn "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  9 12:38:12.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:12.864: INFO: stderr: ""
Jan  9 12:38:12.864: INFO: stdout: "update-demo-nautilus-d8wvw update-demo-nautilus-dq9bn "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  9 12:38:17.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:18.062: INFO: stderr: ""
Jan  9 12:38:18.062: INFO: stdout: "update-demo-nautilus-d8wvw "
Jan  9 12:38:18.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8wvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:18.188: INFO: stderr: ""
Jan  9 12:38:18.188: INFO: stdout: "true"
Jan  9 12:38:18.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8wvw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:18.282: INFO: stderr: ""
Jan  9 12:38:18.283: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  9 12:38:18.283: INFO: validating pod update-demo-nautilus-d8wvw
Jan  9 12:38:18.304: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  9 12:38:18.304: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  9 12:38:18.304: INFO: update-demo-nautilus-d8wvw is verified up and running
STEP: scaling up the replication controller
Jan  9 12:38:18.306: INFO: scanned /root for discovery docs: 
Jan  9 12:38:18.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:20.507: INFO: stderr: ""
Jan  9 12:38:20.507: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  9 12:38:20.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:20.809: INFO: stderr: ""
Jan  9 12:38:20.809: INFO: stdout: "update-demo-nautilus-2zkvl update-demo-nautilus-d8wvw "
Jan  9 12:38:20.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2zkvl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:20.953: INFO: stderr: ""
Jan  9 12:38:20.953: INFO: stdout: ""
Jan  9 12:38:20.953: INFO: update-demo-nautilus-2zkvl is created but not running
Jan  9 12:38:25.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:26.131: INFO: stderr: ""
Jan  9 12:38:26.131: INFO: stdout: "update-demo-nautilus-2zkvl update-demo-nautilus-d8wvw "
Jan  9 12:38:26.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2zkvl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:26.280: INFO: stderr: ""
Jan  9 12:38:26.280: INFO: stdout: ""
Jan  9 12:38:26.280: INFO: update-demo-nautilus-2zkvl is created but not running
Jan  9 12:38:31.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:31.481: INFO: stderr: ""
Jan  9 12:38:31.481: INFO: stdout: "update-demo-nautilus-2zkvl update-demo-nautilus-d8wvw "
Jan  9 12:38:31.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2zkvl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:31.625: INFO: stderr: ""
Jan  9 12:38:31.625: INFO: stdout: "true"
Jan  9 12:38:31.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2zkvl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:31.815: INFO: stderr: ""
Jan  9 12:38:31.815: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  9 12:38:31.815: INFO: validating pod update-demo-nautilus-2zkvl
Jan  9 12:38:31.831: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  9 12:38:31.831: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  9 12:38:31.831: INFO: update-demo-nautilus-2zkvl is verified up and running
Jan  9 12:38:31.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8wvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:31.983: INFO: stderr: ""
Jan  9 12:38:31.983: INFO: stdout: "true"
Jan  9 12:38:31.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d8wvw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:32.077: INFO: stderr: ""
Jan  9 12:38:32.077: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  9 12:38:32.078: INFO: validating pod update-demo-nautilus-d8wvw
Jan  9 12:38:32.092: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  9 12:38:32.093: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  9 12:38:32.093: INFO: update-demo-nautilus-d8wvw is verified up and running
STEP: using delete to clean up resources
Jan  9 12:38:32.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:32.273: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  9 12:38:32.273: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  9 12:38:32.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-z5lpk'
Jan  9 12:38:32.442: INFO: stderr: "No resources found.\n"
Jan  9 12:38:32.442: INFO: stdout: ""
Jan  9 12:38:32.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-z5lpk -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  9 12:38:32.690: INFO: stderr: ""
Jan  9 12:38:32.691: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:38:32.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-z5lpk" for this suite.
Jan  9 12:38:56.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:38:56.819: INFO: namespace: e2e-tests-kubectl-z5lpk, resource: bindings, ignored listing per whitelist
Jan  9 12:38:56.901: INFO: namespace e2e-tests-kubectl-z5lpk deletion completed in 24.192051435s

• [SLOW TEST:62.707 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
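Throughout the Update Demo test above, kubectl is run with a Go template that prints `true` only when a container named `update-demo` reports a `running` state, and prints nothing while the pod is "created but not running". As an aside, the same check can be expressed over the pod's JSON in a few lines of Python; this is an illustrative equivalent, not code from the e2e suite:

```python
def container_running(pod, name="update-demo"):
    """Return "true" if the named container reports a running state, else "".

    Equivalent in spirit to the go-template used repeatedly in the log:
    while the pod has no containerStatuses yet, the result is the empty
    string (the "created but not running" case); once the container's
    state map contains a "running" key, the result is "true".
    """
    out = ""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            out += "true"
    return out
```

The stdout values in the log (`""` then `"true"`) are exactly what this predicate would produce at each poll.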
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:38:56.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:38:57.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-hh9lb" for this suite.
Jan  9 12:39:03.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:39:03.491: INFO: namespace: e2e-tests-kubelet-test-hh9lb, resource: bindings, ignored listing per whitelist
Jan  9 12:39:03.687: INFO: namespace e2e-tests-kubelet-test-hh9lb deletion completed in 6.280375581s

• [SLOW TEST:6.786 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:39:03.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  9 12:39:04.086: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04e82283-32dd-11ea-ac2d-0242ac110005" in namespace "e2e-tests-downward-api-kmw6r" to be "success or failure"
Jan  9 12:39:04.097: INFO: Pod "downwardapi-volume-04e82283-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.14314ms
Jan  9 12:39:06.114: INFO: Pod "downwardapi-volume-04e82283-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027409705s
Jan  9 12:39:08.186: INFO: Pod "downwardapi-volume-04e82283-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099555754s
Jan  9 12:39:10.276: INFO: Pod "downwardapi-volume-04e82283-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.189394414s
Jan  9 12:39:12.288: INFO: Pod "downwardapi-volume-04e82283-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.201995235s
Jan  9 12:39:14.475: INFO: Pod "downwardapi-volume-04e82283-32dd-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.388329633s
STEP: Saw pod success
Jan  9 12:39:14.475: INFO: Pod "downwardapi-volume-04e82283-32dd-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:39:14.495: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-04e82283-32dd-11ea-ac2d-0242ac110005 container client-container: 
STEP: delete the pod
Jan  9 12:39:15.219: INFO: Waiting for pod downwardapi-volume-04e82283-32dd-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:39:15.243: INFO: Pod downwardapi-volume-04e82283-32dd-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:39:15.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kmw6r" for this suite.
Jan  9 12:39:21.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:39:21.452: INFO: namespace: e2e-tests-downward-api-kmw6r, resource: bindings, ignored listing per whitelist
Jan  9 12:39:21.483: INFO: namespace e2e-tests-downward-api-kmw6r deletion completed in 6.227418237s

• [SLOW TEST:17.796 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:39:21.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan  9 12:39:34.731: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:39:35.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-vmdqx" for this suite.
Jan  9 12:40:02.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:40:02.236: INFO: namespace: e2e-tests-replicaset-vmdqx, resource: bindings, ignored listing per whitelist
Jan  9 12:40:02.328: INFO: namespace e2e-tests-replicaset-vmdqx deletion completed in 26.543947935s

• [SLOW TEST:40.844 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
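The ReplicaSet test above hinges on label-selector matching: the orphan pod is adopted because the ReplicaSet's selector matches its labels, and released once a matched label changes. A minimal sketch of equality-based selector matching follows; real Kubernetes selectors also support set-based requirements (`In`, `NotIn`, `Exists`), which this deliberately omits:

```python
def selector_matches(selector, labels):
    """True if every key/value pair in the controller's selector is present
    in the pod's labels -- the condition behind "the orphan pod is adopted"
    and, after a label edit, "the pod is released"."""
    return all(labels.get(key) == value for key, value in selector.items())
```

A pod may carry extra labels and still match; only the selector's own keys are consulted.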
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:40:02.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-27e0f9c2-32dd-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  9 12:40:02.685: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-27e33a05-32dd-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-cc7s8" to be "success or failure"
Jan  9 12:40:02.708: INFO: Pod "pod-projected-configmaps-27e33a05-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.166024ms
Jan  9 12:40:05.029: INFO: Pod "pod-projected-configmaps-27e33a05-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.34400377s
Jan  9 12:40:07.058: INFO: Pod "pod-projected-configmaps-27e33a05-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.372453129s
Jan  9 12:40:09.372: INFO: Pod "pod-projected-configmaps-27e33a05-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.686124767s
Jan  9 12:40:12.025: INFO: Pod "pod-projected-configmaps-27e33a05-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.34003201s
Jan  9 12:40:14.038: INFO: Pod "pod-projected-configmaps-27e33a05-32dd-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.352184707s
STEP: Saw pod success
Jan  9 12:40:14.038: INFO: Pod "pod-projected-configmaps-27e33a05-32dd-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:40:14.042: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-27e33a05-32dd-11ea-ac2d-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  9 12:40:14.423: INFO: Waiting for pod pod-projected-configmaps-27e33a05-32dd-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:40:14.454: INFO: Pod pod-projected-configmaps-27e33a05-32dd-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:40:14.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cc7s8" for this suite.
Jan  9 12:40:20.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:40:21.080: INFO: namespace: e2e-tests-projected-cc7s8, resource: bindings, ignored listing per whitelist
Jan  9 12:40:21.084: INFO: namespace e2e-tests-projected-cc7s8 deletion completed in 6.605129885s

• [SLOW TEST:18.756 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:40:21.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-330b791c-32dd-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  9 12:40:21.498: INFO: Waiting up to 5m0s for pod "pod-secrets-330e7c7c-32dd-11ea-ac2d-0242ac110005" in namespace "e2e-tests-secrets-wgd9l" to be "success or failure"
Jan  9 12:40:21.509: INFO: Pod "pod-secrets-330e7c7c-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.711438ms
Jan  9 12:40:23.524: INFO: Pod "pod-secrets-330e7c7c-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026075463s
Jan  9 12:40:25.533: INFO: Pod "pod-secrets-330e7c7c-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034805422s
Jan  9 12:40:27.559: INFO: Pod "pod-secrets-330e7c7c-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060790979s
Jan  9 12:40:29.653: INFO: Pod "pod-secrets-330e7c7c-32dd-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.155022312s
STEP: Saw pod success
Jan  9 12:40:29.653: INFO: Pod "pod-secrets-330e7c7c-32dd-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:40:29.660: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-330e7c7c-32dd-11ea-ac2d-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan  9 12:40:29.738: INFO: Waiting for pod pod-secrets-330e7c7c-32dd-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:40:29.924: INFO: Pod pod-secrets-330e7c7c-32dd-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:40:29.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-wgd9l" for this suite.
Jan  9 12:40:37.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:40:38.132: INFO: namespace: e2e-tests-secrets-wgd9l, resource: bindings, ignored listing per whitelist
Jan  9 12:40:38.132: INFO: namespace e2e-tests-secrets-wgd9l deletion completed in 8.198266671s

• [SLOW TEST:17.047 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:40:38.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  9 12:40:50.941: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3d1e619d-32dd-11ea-ac2d-0242ac110005"
Jan  9 12:40:50.941: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-3d1e619d-32dd-11ea-ac2d-0242ac110005" in namespace "e2e-tests-pods-vk8lh" to be "terminated due to deadline exceeded"
Jan  9 12:40:50.957: INFO: Pod "pod-update-activedeadlineseconds-3d1e619d-32dd-11ea-ac2d-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 15.510897ms
Jan  9 12:40:52.977: INFO: Pod "pod-update-activedeadlineseconds-3d1e619d-32dd-11ea-ac2d-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.035737575s
Jan  9 12:40:52.977: INFO: Pod "pod-update-activedeadlineseconds-3d1e619d-32dd-11ea-ac2d-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:40:52.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vk8lh" for this suite.
Jan  9 12:40:59.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:40:59.755: INFO: namespace: e2e-tests-pods-vk8lh, resource: bindings, ignored listing per whitelist
Jan  9 12:40:59.759: INFO: namespace e2e-tests-pods-vk8lh deletion completed in 6.762135773s

• [SLOW TEST:21.626 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:40:59.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-4a0b1add-32dd-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  9 12:41:00.096: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4a0bd143-32dd-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-zd7tg" to be "success or failure"
Jan  9 12:41:00.107: INFO: Pod "pod-projected-configmaps-4a0bd143-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.708795ms
Jan  9 12:41:02.142: INFO: Pod "pod-projected-configmaps-4a0bd143-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045941746s
Jan  9 12:41:04.165: INFO: Pod "pod-projected-configmaps-4a0bd143-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068627919s
Jan  9 12:41:07.129: INFO: Pod "pod-projected-configmaps-4a0bd143-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.032489817s
Jan  9 12:41:09.590: INFO: Pod "pod-projected-configmaps-4a0bd143-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.493338076s
Jan  9 12:41:11.744: INFO: Pod "pod-projected-configmaps-4a0bd143-32dd-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.647207542s
STEP: Saw pod success
Jan  9 12:41:11.744: INFO: Pod "pod-projected-configmaps-4a0bd143-32dd-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:41:11.769: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-4a0bd143-32dd-11ea-ac2d-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  9 12:41:12.127: INFO: Waiting for pod pod-projected-configmaps-4a0bd143-32dd-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:41:12.227: INFO: Pod pod-projected-configmaps-4a0bd143-32dd-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:41:12.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zd7tg" for this suite.
Jan  9 12:41:18.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:41:18.510: INFO: namespace: e2e-tests-projected-zd7tg, resource: bindings, ignored listing per whitelist
Jan  9 12:41:18.579: INFO: namespace e2e-tests-projected-zd7tg deletion completed in 6.333463972s

• [SLOW TEST:18.820 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:41:18.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7ftbd
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  9 12:41:18.829: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  9 12:41:51.090: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-7ftbd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  9 12:41:51.090: INFO: >>> kubeConfig: /root/.kube/config
I0109 12:41:51.214327       9 log.go:172] (0xc000176a50) (0xc001698aa0) Create stream
I0109 12:41:51.214408       9 log.go:172] (0xc000176a50) (0xc001698aa0) Stream added, broadcasting: 1
I0109 12:41:51.219979       9 log.go:172] (0xc000176a50) Reply frame received for 1
I0109 12:41:51.220053       9 log.go:172] (0xc000176a50) (0xc0012c6820) Create stream
I0109 12:41:51.220065       9 log.go:172] (0xc000176a50) (0xc0012c6820) Stream added, broadcasting: 3
I0109 12:41:51.226342       9 log.go:172] (0xc000176a50) Reply frame received for 3
I0109 12:41:51.226378       9 log.go:172] (0xc000176a50) (0xc0012c68c0) Create stream
I0109 12:41:51.226389       9 log.go:172] (0xc000176a50) (0xc0012c68c0) Stream added, broadcasting: 5
I0109 12:41:51.230884       9 log.go:172] (0xc000176a50) Reply frame received for 5
I0109 12:41:51.473865       9 log.go:172] (0xc000176a50) Data frame received for 3
I0109 12:41:51.473965       9 log.go:172] (0xc0012c6820) (3) Data frame handling
I0109 12:41:51.473998       9 log.go:172] (0xc0012c6820) (3) Data frame sent
I0109 12:41:51.606096       9 log.go:172] (0xc000176a50) (0xc0012c6820) Stream removed, broadcasting: 3
I0109 12:41:51.606185       9 log.go:172] (0xc000176a50) Data frame received for 1
I0109 12:41:51.606202       9 log.go:172] (0xc001698aa0) (1) Data frame handling
I0109 12:41:51.606220       9 log.go:172] (0xc000176a50) (0xc0012c68c0) Stream removed, broadcasting: 5
I0109 12:41:51.606264       9 log.go:172] (0xc001698aa0) (1) Data frame sent
I0109 12:41:51.606276       9 log.go:172] (0xc000176a50) (0xc001698aa0) Stream removed, broadcasting: 1
I0109 12:41:51.606293       9 log.go:172] (0xc000176a50) Go away received
I0109 12:41:51.606473       9 log.go:172] (0xc000176a50) (0xc001698aa0) Stream removed, broadcasting: 1
I0109 12:41:51.606502       9 log.go:172] (0xc000176a50) (0xc0012c6820) Stream removed, broadcasting: 3
I0109 12:41:51.606517       9 log.go:172] (0xc000176a50) (0xc0012c68c0) Stream removed, broadcasting: 5
Jan  9 12:41:51.606: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:41:51.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-7ftbd" for this suite.
Jan  9 12:42:17.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:42:17.832: INFO: namespace: e2e-tests-pod-network-test-7ftbd, resource: bindings, ignored listing per whitelist
Jan  9 12:42:17.840: INFO: namespace e2e-tests-pod-network-test-7ftbd deletion completed in 26.216376554s

• [SLOW TEST:59.261 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:42:17.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-tqpnv/configmap-test-78b84318-32dd-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  9 12:42:18.326: INFO: Waiting up to 5m0s for pod "pod-configmaps-78bbbddc-32dd-11ea-ac2d-0242ac110005" in namespace "e2e-tests-configmap-tqpnv" to be "success or failure"
Jan  9 12:42:18.539: INFO: Pod "pod-configmaps-78bbbddc-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 213.033373ms
Jan  9 12:42:20.565: INFO: Pod "pod-configmaps-78bbbddc-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23876036s
Jan  9 12:42:22.576: INFO: Pod "pod-configmaps-78bbbddc-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250197313s
Jan  9 12:42:24.603: INFO: Pod "pod-configmaps-78bbbddc-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.277318152s
Jan  9 12:42:26.621: INFO: Pod "pod-configmaps-78bbbddc-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.294943224s
Jan  9 12:42:28.649: INFO: Pod "pod-configmaps-78bbbddc-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.323316358s
Jan  9 12:42:30.657: INFO: Pod "pod-configmaps-78bbbddc-32dd-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.33121997s
STEP: Saw pod success
Jan  9 12:42:30.657: INFO: Pod "pod-configmaps-78bbbddc-32dd-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:42:30.664: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-78bbbddc-32dd-11ea-ac2d-0242ac110005 container env-test: 
STEP: delete the pod
Jan  9 12:42:30.744: INFO: Waiting for pod pod-configmaps-78bbbddc-32dd-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:42:30.758: INFO: Pod pod-configmaps-78bbbddc-32dd-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:42:30.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tqpnv" for this suite.
Jan  9 12:42:38.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:42:38.882: INFO: namespace: e2e-tests-configmap-tqpnv, resource: bindings, ignored listing per whitelist
Jan  9 12:42:38.966: INFO: namespace e2e-tests-configmap-tqpnv deletion completed in 8.191237699s

• [SLOW TEST:21.126 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:42:38.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  9 12:42:39.169: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8528b18c-32dd-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-frlz2" to be "success or failure"
Jan  9 12:42:39.179: INFO: Pod "downwardapi-volume-8528b18c-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.684253ms
Jan  9 12:42:41.377: INFO: Pod "downwardapi-volume-8528b18c-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208661781s
Jan  9 12:42:43.393: INFO: Pod "downwardapi-volume-8528b18c-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224795582s
Jan  9 12:42:45.405: INFO: Pod "downwardapi-volume-8528b18c-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.236427458s
Jan  9 12:42:47.427: INFO: Pod "downwardapi-volume-8528b18c-32dd-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.258564748s
STEP: Saw pod success
Jan  9 12:42:47.427: INFO: Pod "downwardapi-volume-8528b18c-32dd-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:42:47.438: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8528b18c-32dd-11ea-ac2d-0242ac110005 container client-container: 
STEP: delete the pod
Jan  9 12:42:47.664: INFO: Waiting for pod downwardapi-volume-8528b18c-32dd-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:42:47.673: INFO: Pod downwardapi-volume-8528b18c-32dd-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:42:47.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-frlz2" for this suite.
Jan  9 12:42:53.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:42:53.954: INFO: namespace: e2e-tests-projected-frlz2, resource: bindings, ignored listing per whitelist
Jan  9 12:42:54.038: INFO: namespace e2e-tests-projected-frlz2 deletion completed in 6.356984214s

• [SLOW TEST:15.071 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:42:54.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-8e2d0055-32dd-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  9 12:42:54.295: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8e2e0eb7-32dd-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-xr625" to be "success or failure"
Jan  9 12:42:54.327: INFO: Pod "pod-projected-configmaps-8e2e0eb7-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.03404ms
Jan  9 12:42:56.413: INFO: Pod "pod-projected-configmaps-8e2e0eb7-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117292016s
Jan  9 12:42:58.443: INFO: Pod "pod-projected-configmaps-8e2e0eb7-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147204355s
Jan  9 12:43:00.501: INFO: Pod "pod-projected-configmaps-8e2e0eb7-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20521679s
Jan  9 12:43:02.519: INFO: Pod "pod-projected-configmaps-8e2e0eb7-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.223407939s
Jan  9 12:43:04.539: INFO: Pod "pod-projected-configmaps-8e2e0eb7-32dd-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.243801646s
STEP: Saw pod success
Jan  9 12:43:04.539: INFO: Pod "pod-projected-configmaps-8e2e0eb7-32dd-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:43:04.546: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-8e2e0eb7-32dd-11ea-ac2d-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  9 12:43:04.678: INFO: Waiting for pod pod-projected-configmaps-8e2e0eb7-32dd-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:43:04.785: INFO: Pod pod-projected-configmaps-8e2e0eb7-32dd-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:43:04.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xr625" for this suite.
Jan  9 12:43:10.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:43:10.948: INFO: namespace: e2e-tests-projected-xr625, resource: bindings, ignored listing per whitelist
Jan  9 12:43:11.029: INFO: namespace e2e-tests-projected-xr625 deletion completed in 6.231995636s

• [SLOW TEST:16.991 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:43:11.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-984a015f-32dd-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  9 12:43:11.370: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-984efb34-32dd-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-g44sm" to be "success or failure"
Jan  9 12:43:11.378: INFO: Pod "pod-projected-configmaps-984efb34-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.876218ms
Jan  9 12:43:13.388: INFO: Pod "pod-projected-configmaps-984efb34-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017446362s
Jan  9 12:43:15.410: INFO: Pod "pod-projected-configmaps-984efb34-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040211484s
Jan  9 12:43:17.426: INFO: Pod "pod-projected-configmaps-984efb34-32dd-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055829539s
Jan  9 12:43:19.457: INFO: Pod "pod-projected-configmaps-984efb34-32dd-11ea-ac2d-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.087095865s
Jan  9 12:43:21.489: INFO: Pod "pod-projected-configmaps-984efb34-32dd-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118895818s
STEP: Saw pod success
Jan  9 12:43:21.489: INFO: Pod "pod-projected-configmaps-984efb34-32dd-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:43:21.689: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-984efb34-32dd-11ea-ac2d-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  9 12:43:21.792: INFO: Waiting for pod pod-projected-configmaps-984efb34-32dd-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:43:21.948: INFO: Pod pod-projected-configmaps-984efb34-32dd-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:43:21.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g44sm" for this suite.
Jan  9 12:43:28.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:43:28.074: INFO: namespace: e2e-tests-projected-g44sm, resource: bindings, ignored listing per whitelist
Jan  9 12:43:28.155: INFO: namespace e2e-tests-projected-g44sm deletion completed in 6.195249239s

• [SLOW TEST:17.126 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:43:28.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-766ds
Jan  9 12:43:38.388: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-766ds
STEP: checking the pod's current state and verifying that restartCount is present
Jan  9 12:43:38.394: INFO: Initial restart count of pod liveness-http is 0
Jan  9 12:43:58.691: INFO: Restart count of pod e2e-tests-container-probe-766ds/liveness-http is now 1 (20.296245344s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:43:58.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-766ds" for this suite.
Jan  9 12:44:04.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:44:04.846: INFO: namespace: e2e-tests-container-probe-766ds, resource: bindings, ignored listing per whitelist
Jan  9 12:44:04.922: INFO: namespace e2e-tests-container-probe-766ds deletion completed in 6.18917613s

• [SLOW TEST:36.767 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:44:04.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  9 12:44:05.058: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan  9 12:44:05.071: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan  9 12:44:10.097: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  9 12:44:14.116: INFO: Creating deployment "test-rolling-update-deployment"
Jan  9 12:44:14.138: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan  9 12:44:14.182: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan  9 12:44:16.200: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan  9 12:44:16.204: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714170654, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714170654, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714170654, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714170654, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 12:44:18.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714170654, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714170654, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714170654, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714170654, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 12:44:20.217: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714170654, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714170654, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714170654, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714170654, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 12:44:22.218: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  9 12:44:22.236: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-whswh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-whswh/deployments/test-rolling-update-deployment,UID:bdc567d2-32dd-11ea-a994-fa163e34d433,ResourceVersion:17702303,Generation:1,CreationTimestamp:2020-01-09 12:44:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-09 12:44:14 +0000 UTC 2020-01-09 12:44:14 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-09 12:44:21 +0000 UTC 2020-01-09 12:44:14 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  9 12:44:22.240: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-whswh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-whswh/replicasets/test-rolling-update-deployment-75db98fb4c,UID:bdd35a64-32dd-11ea-a994-fa163e34d433,ResourceVersion:17702294,Generation:1,CreationTimestamp:2020-01-09 12:44:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bdc567d2-32dd-11ea-a994-fa163e34d433 0xc00171ec57 0xc00171ec58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  9 12:44:22.240: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan  9 12:44:22.241: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-whswh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-whswh/replicasets/test-rolling-update-controller,UID:b85ebfd0-32dd-11ea-a994-fa163e34d433,ResourceVersion:17702302,Generation:2,CreationTimestamp:2020-01-09 12:44:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bdc567d2-32dd-11ea-a994-fa163e34d433 0xc00171eb0f 0xc00171eb20}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  9 12:44:22.247: INFO: Pod "test-rolling-update-deployment-75db98fb4c-5r965" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-5r965,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-whswh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-whswh/pods/test-rolling-update-deployment-75db98fb4c-5r965,UID:bdd44f96-32dd-11ea-a994-fa163e34d433,ResourceVersion:17702293,Generation:0,CreationTimestamp:2020-01-09 12:44:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c bdd35a64-32dd-11ea-a994-fa163e34d433 0xc00171f917 0xc00171f918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7g64p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7g64p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-7g64p true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00171f980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00171f9a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 12:44:14 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 12:44:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 12:44:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 12:44:14 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-09 12:44:14 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-09 12:44:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://85636885c0823240c7f71781ccd529c073f04a0055261162f438e8d6a88fac59}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:44:22.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-whswh" for this suite.
Jan  9 12:44:31.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:44:31.161: INFO: namespace: e2e-tests-deployment-whswh, resource: bindings, ignored listing per whitelist
Jan  9 12:44:31.236: INFO: namespace e2e-tests-deployment-whswh deletion completed in 8.984505854s

• [SLOW TEST:26.314 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
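Note: the DeploymentStatus dumps above show Replicas:2, UpdatedReplicas:1, UnavailableReplicas:1 while the rollout progresses. That follows from how the 25% maxSurge/maxUnavailable strategy resolves for one replica: maxSurge rounds up (allowing one extra pod), maxUnavailable rounds down (allowing zero old pods to be taken away early). A sketch of that resolution, assuming the standard rounding convention (the function name is ours):

```python
import math

def resolve_rolling_update(replicas, max_surge="25%", max_unavailable="25%"):
    """Resolve percentage strings to absolute pod counts the way a
    RollingUpdateDeployment does: maxSurge rounds up, maxUnavailable
    rounds down. Returns (surge, unavailable)."""
    def pct(value, round_up):
        exact = replicas * int(value.rstrip("%")) / 100
        return math.ceil(exact) if round_up else math.floor(exact)
    return pct(max_surge, True), pct(max_unavailable, False)
```

With replicas=1 this gives surge 1 and unavailable 0, so the old pod is kept until the new one is Ready, which is exactly the two-pod intermediate state logged above.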
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:44:31.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-8c2gp
Jan  9 12:44:41.464: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-8c2gp
STEP: checking the pod's current state and verifying that restartCount is present
Jan  9 12:44:41.470: INFO: Initial restart count of pod liveness-exec is 0
Jan  9 12:45:38.651: INFO: Restart count of pod e2e-tests-container-probe-8c2gp/liveness-exec is now 1 (57.180567101s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:45:38.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-8c2gp" for this suite.
Jan  9 12:45:47.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:45:47.244: INFO: namespace: e2e-tests-container-probe-8c2gp, resource: bindings, ignored listing per whitelist
Jan  9 12:45:47.244: INFO: namespace e2e-tests-container-probe-8c2gp deletion completed in 8.413579681s

• [SLOW TEST:76.007 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:45:47.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  9 12:45:58.227: INFO: Successfully updated pod "pod-update-f5812623-32dd-11ea-ac2d-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan  9 12:45:58.256: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:45:58.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8xxq4" for this suite.
Jan  9 12:46:22.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:46:22.423: INFO: namespace: e2e-tests-pods-8xxq4, resource: bindings, ignored listing per whitelist
Jan  9 12:46:22.495: INFO: namespace e2e-tests-pods-8xxq4 deletion completed in 24.232056724s

• [SLOW TEST:35.251 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
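Note: "Successfully updated pod" above relies on the API server's optimistic concurrency: an update is accepted only if it carries the object's current resourceVersion, otherwise it is rejected as a conflict. A toy in-memory sketch of that rule (the class and error message are ours, not client-go API):

```python
class FakeStore:
    """Minimal stand-in for an apiserver object store that enforces
    resourceVersion-based optimistic concurrency on updates."""

    def __init__(self):
        self.objects = {}

    def create(self, name, spec):
        self.objects[name] = {"spec": spec, "resourceVersion": 1}

    def get(self, name):
        return dict(self.objects[name])  # copy, like a GET response

    def update(self, name, obj):
        current = self.objects[name]
        if obj["resourceVersion"] != current["resourceVersion"]:
            raise RuntimeError("conflict: stale resourceVersion")
        current["spec"] = obj["spec"]
        current["resourceVersion"] += 1  # every accepted write bumps it
```

The usual client pattern mirrors the test: GET the pod, mutate the copy, send it back; on conflict, re-GET and retry.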
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:46:22.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan  9 12:46:32.863: INFO: Pod pod-hostip-0a790b8f-32de-11ea-ac2d-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:46:32.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-nmkqg" for this suite.
Jan  9 12:46:56.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:46:57.021: INFO: namespace: e2e-tests-pods-nmkqg, resource: bindings, ignored listing per whitelist
Jan  9 12:46:57.030: INFO: namespace e2e-tests-pods-nmkqg deletion completed in 24.161741368s

• [SLOW TEST:34.534 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
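Note: the invariant the host IP test checks is simply that once a pod is scheduled onto a node, `status.hostIP` reports that node's address (10.96.1.240 above). A tiny sketch of the lookup, with hypothetical dict shapes standing in for the real API objects:

```python
def host_ip(pod_status, node_ips):
    """Return the hostIP a scheduled pod should report, or None while the
    pod is still unscheduled. node_ips maps node name -> node address."""
    node = pod_status.get("nodeName")
    return node_ips.get(node) if node else None
```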
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:46:57.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan  9 12:46:57.170: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-m8hcb" to be "success or failure"
Jan  9 12:46:57.178: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.373873ms
Jan  9 12:46:59.216: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046429604s
Jan  9 12:47:01.362: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192431586s
Jan  9 12:47:03.912: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.742572404s
Jan  9 12:47:06.227: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.057438697s
Jan  9 12:47:08.257: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.087215806s
Jan  9 12:47:10.300: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.130101778s
STEP: Saw pod success
Jan  9 12:47:10.300: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  9 12:47:10.749: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  9 12:47:11.088: INFO: Waiting for pod pod-host-path-test to disappear
Jan  9 12:47:11.107: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:47:11.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-m8hcb" for this suite.
Jan  9 12:47:17.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:47:17.241: INFO: namespace: e2e-tests-hostpath-m8hcb, resource: bindings, ignored listing per whitelist
Jan  9 12:47:17.282: INFO: namespace e2e-tests-hostpath-m8hcb deletion completed in 6.166180501s

• [SLOW TEST:20.251 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:47:17.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  9 12:47:17.533: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 20.630719ms)
Jan  9 12:47:17.541: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.74373ms)
Jan  9 12:47:17.567: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 26.136037ms)
Jan  9 12:47:17.714: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 146.859193ms)
Jan  9 12:47:17.726: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.375463ms)
Jan  9 12:47:17.733: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.57185ms)
Jan  9 12:47:17.741: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.306254ms)
Jan  9 12:47:17.748: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.97896ms)
Jan  9 12:47:17.751: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.911325ms)
Jan  9 12:47:17.756: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.709695ms)
Jan  9 12:47:17.760: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.103697ms)
Jan  9 12:47:17.765: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.595297ms)
Jan  9 12:47:17.769: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.335581ms)
Jan  9 12:47:17.774: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.829973ms)
Jan  9 12:47:17.779: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.6878ms)
Jan  9 12:47:17.784: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.399604ms)
Jan  9 12:47:17.789: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.551687ms)
Jan  9 12:47:17.794: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.536529ms)
Jan  9 12:47:17.800: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.442045ms)
Jan  9 12:47:17.805: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.646339ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:47:17.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-97s5x" for this suite.
Jan  9 12:47:23.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:47:23.908: INFO: namespace: e2e-tests-proxy-97s5x, resource: bindings, ignored listing per whitelist
Jan  9 12:47:24.087: INFO: namespace e2e-tests-proxy-97s5x deletion completed in 6.278083887s

• [SLOW TEST:6.805 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
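Note: each of the 20 requests above hits the node's `proxy` subresource, which makes the apiserver forward the request to the kubelet's `/logs` endpoint instead of answering it itself. The URL shape is mechanical:

```python
def node_proxy_logs_url(api_server, node_name):
    """Build the node proxy-subresource URL for the kubelet's /logs
    endpoint, matching the paths in the log above."""
    return f"{api_server}/api/v1/nodes/{node_name}/proxy/logs/"
```

The test simply issues GETs against this URL repeatedly and records status code and latency, which is what the `(200; ...)` suffixes report.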
SS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:47:24.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-bmfbz
Jan  9 12:47:34.359: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-bmfbz
STEP: checking the pod's current state and verifying that restartCount is present
Jan  9 12:47:34.366: INFO: Initial restart count of pod liveness-http is 0
Jan  9 12:47:54.618: INFO: Restart count of pod e2e-tests-container-probe-bmfbz/liveness-http is now 1 (20.252456347s elapsed)
Jan  9 12:48:15.169: INFO: Restart count of pod e2e-tests-container-probe-bmfbz/liveness-http is now 2 (40.803004603s elapsed)
Jan  9 12:48:35.331: INFO: Restart count of pod e2e-tests-container-probe-bmfbz/liveness-http is now 3 (1m0.965162922s elapsed)
Jan  9 12:48:55.504: INFO: Restart count of pod e2e-tests-container-probe-bmfbz/liveness-http is now 4 (1m21.138036303s elapsed)
Jan  9 12:49:55.155: INFO: Restart count of pod e2e-tests-container-probe-bmfbz/liveness-http is now 5 (2m20.789065777s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:49:55.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-bmfbz" for this suite.
Jan  9 12:50:01.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:50:01.455: INFO: namespace: e2e-tests-container-probe-bmfbz, resource: bindings, ignored listing per whitelist
Jan  9 12:50:01.518: INFO: namespace e2e-tests-container-probe-bmfbz deletion completed in 6.210604416s

• [SLOW TEST:157.431 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
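The probing test above sampled `restartCount` for the `liveness-http` pod at roughly 20-second intervals (0, 1, 2, 3, 4, 5) and asserts the sequence never decreases. The check it performs can be sketched as — a hypothetical re-implementation, not the framework's actual code:

```python
def restart_counts_monotonic(samples):
    """Return True if the observed restartCount samples never decrease.

    This is the property the '[k8s.io] Probing container' test above
    verifies; the function name is illustrative.
    """
    return all(a <= b for a, b in zip(samples, samples[1:]))

# Restart counts logged above for pod liveness-http.
observed = [0, 1, 2, 3, 4, 5]
print(restart_counts_monotonic(observed))  # True
```

A decreasing count would indicate the kubelet lost or reset container status, which is why the test is tagged [Slow]: it must wait through several full restart cycles.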
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:50:01.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  9 12:50:11.804: INFO: Waiting up to 5m0s for pod "client-envvars-92ecbb48-32de-11ea-ac2d-0242ac110005" in namespace "e2e-tests-pods-mxzr9" to be "success or failure"
Jan  9 12:50:11.912: INFO: Pod "client-envvars-92ecbb48-32de-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 108.171194ms
Jan  9 12:50:14.023: INFO: Pod "client-envvars-92ecbb48-32de-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219509975s
Jan  9 12:50:16.038: INFO: Pod "client-envvars-92ecbb48-32de-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234264902s
Jan  9 12:50:18.083: INFO: Pod "client-envvars-92ecbb48-32de-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.279375129s
Jan  9 12:50:20.100: INFO: Pod "client-envvars-92ecbb48-32de-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.296531191s
Jan  9 12:50:22.231: INFO: Pod "client-envvars-92ecbb48-32de-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.427093241s
STEP: Saw pod success
Jan  9 12:50:22.231: INFO: Pod "client-envvars-92ecbb48-32de-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:50:22.241: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-92ecbb48-32de-11ea-ac2d-0242ac110005 container env3cont: 
STEP: delete the pod
Jan  9 12:50:22.632: INFO: Waiting for pod client-envvars-92ecbb48-32de-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:50:22.676: INFO: Pod client-envvars-92ecbb48-32de-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:50:22.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-mxzr9" for this suite.
Jan  9 12:51:16.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:51:17.059: INFO: namespace: e2e-tests-pods-mxzr9, resource: bindings, ignored listing per whitelist
Jan  9 12:51:17.113: INFO: namespace e2e-tests-pods-mxzr9 deletion completed in 54.420735455s

• [SLOW TEST:75.594 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
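The Pods test above checks that a client container sees environment variables for services that existed when it started. Kubernetes derives those variable names by upper-casing the service name and replacing dashes with underscores; a sketch of the two core variables (the kubelet also injects Docker-link-style variants, omitted here; the service name and address below are made up for illustration):

```python
def service_env_vars(name: str, host: str, port: int) -> dict:
    """Derive the {NAME}_SERVICE_HOST / {NAME}_SERVICE_PORT pair that
    Kubernetes injects into containers for an existing service.

    Illustrative only; example inputs are hypothetical.
    """
    prefix = name.upper().replace("-", "_")
    return {
        f"{prefix}_SERVICE_HOST": host,
        f"{prefix}_SERVICE_PORT": str(port),
    }

print(service_env_vars("redis-master", "10.0.0.11", 6379))
# -> {'REDIS_MASTER_SERVICE_HOST': '10.0.0.11', 'REDIS_MASTER_SERVICE_PORT': '6379'}
```

Because the variables are captured at container start, the test has to create the service first and only then launch the `client-envvars-…` pod whose logs it inspects.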
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:51:17.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-rnc4z/secret-test-b9fb4f60-32de-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  9 12:51:17.307: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba0051c4-32de-11ea-ac2d-0242ac110005" in namespace "e2e-tests-secrets-rnc4z" to be "success or failure"
Jan  9 12:51:17.322: INFO: Pod "pod-configmaps-ba0051c4-32de-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.128598ms
Jan  9 12:51:19.332: INFO: Pod "pod-configmaps-ba0051c4-32de-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024511676s
Jan  9 12:51:21.345: INFO: Pod "pod-configmaps-ba0051c4-32de-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037354312s
Jan  9 12:51:23.358: INFO: Pod "pod-configmaps-ba0051c4-32de-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050637659s
Jan  9 12:51:25.492: INFO: Pod "pod-configmaps-ba0051c4-32de-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.184558716s
Jan  9 12:51:27.597: INFO: Pod "pod-configmaps-ba0051c4-32de-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.290180551s
STEP: Saw pod success
Jan  9 12:51:27.598: INFO: Pod "pod-configmaps-ba0051c4-32de-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:51:27.608: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ba0051c4-32de-11ea-ac2d-0242ac110005 container env-test: 
STEP: delete the pod
Jan  9 12:51:27.893: INFO: Waiting for pod pod-configmaps-ba0051c4-32de-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:51:27.981: INFO: Pod pod-configmaps-ba0051c4-32de-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:51:27.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rnc4z" for this suite.
Jan  9 12:51:34.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:51:34.194: INFO: namespace: e2e-tests-secrets-rnc4z, resource: bindings, ignored listing per whitelist
Jan  9 12:51:34.198: INFO: namespace e2e-tests-secrets-rnc4z deletion completed in 6.208256604s

• [SLOW TEST:17.085 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
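Several tests above share the same shape: 'Waiting up to 5m0s for pod "…" to be "success or failure"', polling every couple of seconds until the pod phase flips. The pattern is a deadline-bounded poll loop, sketched here under hypothetical names (the clock and sleep hooks are injectable only to keep the sketch testable):

```python
import time

def wait_for(condition, timeout_s=300.0, interval_s=2.0,
             now=time.monotonic, sleep=time.sleep):
    """Poll `condition` until it returns True or `timeout_s` elapses.

    Mirrors the polling loops in the log above ('Waiting up to 5m0s ...',
    'Elapsed: 2.02s', 'Elapsed: 4.03s', ...); names are illustrative.
    """
    deadline = now() + timeout_s
    while now() < deadline:
        if condition():
            return True
        sleep(interval_s)
    return False

# Simulated pod phases: Pending, Pending, then Succeeded.
phases = iter([False, False, True])
print(wait_for(lambda: next(phases), timeout_s=10, interval_s=0,
               sleep=lambda s: None))  # True
```

Returning `False` on timeout (rather than raising) matches how the framework reports the last observed phase before failing the spec.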
SSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:51:34.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-4mjwr
I0109 12:51:34.359986       9 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-4mjwr, replica count: 1
I0109 12:51:35.410826       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0109 12:51:36.411287       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0109 12:51:37.411645       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0109 12:51:38.412029       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0109 12:51:39.412398       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0109 12:51:40.412778       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0109 12:51:41.413069       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0109 12:51:42.413430       9 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  9 12:51:42.630: INFO: Created: latency-svc-l8kc2
Jan  9 12:51:42.810: INFO: Got endpoints: latency-svc-l8kc2 [296.707214ms]
Jan  9 12:51:43.125: INFO: Created: latency-svc-s5mdn
Jan  9 12:51:43.155: INFO: Got endpoints: latency-svc-s5mdn [343.715486ms]
Jan  9 12:51:43.282: INFO: Created: latency-svc-xrf2m
Jan  9 12:51:43.320: INFO: Got endpoints: latency-svc-xrf2m [509.212758ms]
Jan  9 12:51:43.518: INFO: Created: latency-svc-4vrc8
Jan  9 12:51:43.518: INFO: Got endpoints: latency-svc-4vrc8 [707.209075ms]
Jan  9 12:51:43.664: INFO: Created: latency-svc-zbt6s
Jan  9 12:51:43.684: INFO: Got endpoints: latency-svc-zbt6s [873.44424ms]
Jan  9 12:51:43.744: INFO: Created: latency-svc-vlczt
Jan  9 12:51:43.910: INFO: Got endpoints: latency-svc-vlczt [1.099028031s]
Jan  9 12:51:43.989: INFO: Created: latency-svc-v7tdj
Jan  9 12:51:44.088: INFO: Got endpoints: latency-svc-v7tdj [1.276958361s]
Jan  9 12:51:44.136: INFO: Created: latency-svc-ns2ns
Jan  9 12:51:44.151: INFO: Got endpoints: latency-svc-ns2ns [1.34062177s]
Jan  9 12:51:44.345: INFO: Created: latency-svc-56vl9
Jan  9 12:51:44.383: INFO: Got endpoints: latency-svc-56vl9 [1.571709711s]
Jan  9 12:51:44.557: INFO: Created: latency-svc-pdrwb
Jan  9 12:51:44.563: INFO: Got endpoints: latency-svc-pdrwb [1.752037817s]
Jan  9 12:51:44.737: INFO: Created: latency-svc-72pmn
Jan  9 12:51:44.745: INFO: Got endpoints: latency-svc-72pmn [1.93396847s]
Jan  9 12:51:44.795: INFO: Created: latency-svc-jsf77
Jan  9 12:51:44.969: INFO: Got endpoints: latency-svc-jsf77 [2.157819778s]
Jan  9 12:51:44.990: INFO: Created: latency-svc-w9bms
Jan  9 12:51:45.006: INFO: Got endpoints: latency-svc-w9bms [2.195109521s]
Jan  9 12:51:45.204: INFO: Created: latency-svc-dr246
Jan  9 12:51:45.250: INFO: Got endpoints: latency-svc-dr246 [2.438342112s]
Jan  9 12:51:45.256: INFO: Created: latency-svc-8ljft
Jan  9 12:51:45.278: INFO: Got endpoints: latency-svc-8ljft [2.466411746s]
Jan  9 12:51:45.445: INFO: Created: latency-svc-5ph6g
Jan  9 12:51:45.475: INFO: Got endpoints: latency-svc-5ph6g [2.6639973s]
Jan  9 12:51:45.640: INFO: Created: latency-svc-5wfmw
Jan  9 12:51:45.640: INFO: Got endpoints: latency-svc-5wfmw [2.484755707s]
Jan  9 12:51:45.709: INFO: Created: latency-svc-5kv2s
Jan  9 12:51:45.821: INFO: Got endpoints: latency-svc-5kv2s [2.500528412s]
Jan  9 12:51:45.845: INFO: Created: latency-svc-6vkcl
Jan  9 12:51:45.875: INFO: Got endpoints: latency-svc-6vkcl [2.357025064s]
Jan  9 12:51:46.013: INFO: Created: latency-svc-q2czh
Jan  9 12:51:46.013: INFO: Got endpoints: latency-svc-q2czh [2.328598193s]
Jan  9 12:51:46.060: INFO: Created: latency-svc-hptmp
Jan  9 12:51:46.077: INFO: Got endpoints: latency-svc-hptmp [2.166234542s]
Jan  9 12:51:46.252: INFO: Created: latency-svc-9crnk
Jan  9 12:51:46.259: INFO: Got endpoints: latency-svc-9crnk [2.171246548s]
Jan  9 12:51:46.311: INFO: Created: latency-svc-qv2mv
Jan  9 12:51:46.417: INFO: Got endpoints: latency-svc-qv2mv [2.265454714s]
Jan  9 12:51:46.432: INFO: Created: latency-svc-tf56q
Jan  9 12:51:46.467: INFO: Got endpoints: latency-svc-tf56q [2.083638635s]
Jan  9 12:51:46.681: INFO: Created: latency-svc-g9qfc
Jan  9 12:51:46.740: INFO: Created: latency-svc-dhzd8
Jan  9 12:51:46.749: INFO: Got endpoints: latency-svc-g9qfc [2.1864274s]
Jan  9 12:51:46.758: INFO: Got endpoints: latency-svc-dhzd8 [2.012695511s]
Jan  9 12:51:46.910: INFO: Created: latency-svc-blzc5
Jan  9 12:51:46.932: INFO: Got endpoints: latency-svc-blzc5 [1.962514418s]
Jan  9 12:51:47.144: INFO: Created: latency-svc-lhps7
Jan  9 12:51:47.161: INFO: Got endpoints: latency-svc-lhps7 [2.154878525s]
Jan  9 12:51:47.240: INFO: Created: latency-svc-jfptk
Jan  9 12:51:47.368: INFO: Got endpoints: latency-svc-jfptk [2.118028799s]
Jan  9 12:51:47.411: INFO: Created: latency-svc-gpkc9
Jan  9 12:51:47.418: INFO: Got endpoints: latency-svc-gpkc9 [2.140485325s]
Jan  9 12:51:47.584: INFO: Created: latency-svc-49t6t
Jan  9 12:51:47.609: INFO: Got endpoints: latency-svc-49t6t [2.133846802s]
Jan  9 12:51:47.865: INFO: Created: latency-svc-jfmz6
Jan  9 12:51:47.880: INFO: Got endpoints: latency-svc-jfmz6 [2.24044232s]
Jan  9 12:51:48.235: INFO: Created: latency-svc-vtr6k
Jan  9 12:51:48.485: INFO: Got endpoints: latency-svc-vtr6k [2.663170735s]
Jan  9 12:51:48.544: INFO: Created: latency-svc-6m6xl
Jan  9 12:51:48.701: INFO: Got endpoints: latency-svc-6m6xl [2.825573601s]
Jan  9 12:51:48.740: INFO: Created: latency-svc-9v6nt
Jan  9 12:51:48.900: INFO: Got endpoints: latency-svc-9v6nt [2.887025363s]
Jan  9 12:51:48.923: INFO: Created: latency-svc-lg8db
Jan  9 12:51:48.949: INFO: Got endpoints: latency-svc-lg8db [2.871910341s]
Jan  9 12:51:49.161: INFO: Created: latency-svc-9tsc6
Jan  9 12:51:49.181: INFO: Got endpoints: latency-svc-9tsc6 [2.921547566s]
Jan  9 12:51:49.338: INFO: Created: latency-svc-zls4m
Jan  9 12:51:49.403: INFO: Got endpoints: latency-svc-zls4m [2.986397992s]
Jan  9 12:51:49.565: INFO: Created: latency-svc-55mw6
Jan  9 12:51:49.565: INFO: Got endpoints: latency-svc-55mw6 [3.09779101s]
Jan  9 12:51:49.712: INFO: Created: latency-svc-ntk2s
Jan  9 12:51:49.745: INFO: Got endpoints: latency-svc-ntk2s [2.995354736s]
Jan  9 12:51:49.801: INFO: Created: latency-svc-jnlgc
Jan  9 12:51:49.933: INFO: Got endpoints: latency-svc-jnlgc [3.175004789s]
Jan  9 12:51:49.985: INFO: Created: latency-svc-lfbbl
Jan  9 12:51:50.011: INFO: Got endpoints: latency-svc-lfbbl [3.079320734s]
Jan  9 12:51:50.190: INFO: Created: latency-svc-hmxs9
Jan  9 12:51:50.191: INFO: Got endpoints: latency-svc-hmxs9 [3.029444997s]
Jan  9 12:51:50.277: INFO: Created: latency-svc-j2xsw
Jan  9 12:51:50.386: INFO: Got endpoints: latency-svc-j2xsw [3.017473237s]
Jan  9 12:51:50.399: INFO: Created: latency-svc-pz5qj
Jan  9 12:51:50.417: INFO: Got endpoints: latency-svc-pz5qj [2.998153516s]
Jan  9 12:51:50.498: INFO: Created: latency-svc-mgfcw
Jan  9 12:51:50.710: INFO: Got endpoints: latency-svc-mgfcw [3.101375129s]
Jan  9 12:51:50.736: INFO: Created: latency-svc-6sb5z
Jan  9 12:51:50.926: INFO: Got endpoints: latency-svc-6sb5z [3.04579313s]
Jan  9 12:51:51.018: INFO: Created: latency-svc-8tckb
Jan  9 12:51:51.321: INFO: Got endpoints: latency-svc-8tckb [2.836475752s]
Jan  9 12:51:51.359: INFO: Created: latency-svc-gw2zk
Jan  9 12:51:51.387: INFO: Got endpoints: latency-svc-gw2zk [2.685991603s]
Jan  9 12:51:51.604: INFO: Created: latency-svc-pqzr8
Jan  9 12:51:51.630: INFO: Got endpoints: latency-svc-pqzr8 [2.729133764s]
Jan  9 12:51:51.815: INFO: Created: latency-svc-jm2wk
Jan  9 12:51:51.820: INFO: Got endpoints: latency-svc-jm2wk [431.747269ms]
Jan  9 12:51:51.891: INFO: Created: latency-svc-qfkfv
Jan  9 12:51:52.126: INFO: Got endpoints: latency-svc-qfkfv [3.176894134s]
Jan  9 12:51:52.172: INFO: Created: latency-svc-f9sq2
Jan  9 12:51:52.204: INFO: Got endpoints: latency-svc-f9sq2 [3.023035328s]
Jan  9 12:51:52.334: INFO: Created: latency-svc-88wl4
Jan  9 12:51:52.347: INFO: Got endpoints: latency-svc-88wl4 [2.944059867s]
Jan  9 12:51:52.401: INFO: Created: latency-svc-wbt2x
Jan  9 12:51:52.531: INFO: Got endpoints: latency-svc-wbt2x [2.965904889s]
Jan  9 12:51:52.587: INFO: Created: latency-svc-5tzmj
Jan  9 12:51:52.610: INFO: Got endpoints: latency-svc-5tzmj [2.864661355s]
Jan  9 12:51:52.778: INFO: Created: latency-svc-xjwj9
Jan  9 12:51:52.817: INFO: Got endpoints: latency-svc-xjwj9 [2.883371417s]
Jan  9 12:51:53.083: INFO: Created: latency-svc-ldb54
Jan  9 12:51:53.109: INFO: Got endpoints: latency-svc-ldb54 [3.097453962s]
Jan  9 12:51:53.532: INFO: Created: latency-svc-frg5x
Jan  9 12:51:53.542: INFO: Got endpoints: latency-svc-frg5x [3.351388351s]
Jan  9 12:51:53.768: INFO: Created: latency-svc-bmt84
Jan  9 12:51:53.786: INFO: Got endpoints: latency-svc-bmt84 [3.400171797s]
Jan  9 12:51:53.857: INFO: Created: latency-svc-g7fs5
Jan  9 12:51:53.949: INFO: Got endpoints: latency-svc-g7fs5 [3.532154572s]
Jan  9 12:51:53.979: INFO: Created: latency-svc-bkxlh
Jan  9 12:51:54.012: INFO: Got endpoints: latency-svc-bkxlh [3.301747448s]
Jan  9 12:51:54.165: INFO: Created: latency-svc-l8nsr
Jan  9 12:51:54.203: INFO: Got endpoints: latency-svc-l8nsr [3.276840543s]
Jan  9 12:51:54.317: INFO: Created: latency-svc-2grpl
Jan  9 12:51:54.333: INFO: Got endpoints: latency-svc-2grpl [3.011496745s]
Jan  9 12:51:54.515: INFO: Created: latency-svc-xw4jc
Jan  9 12:51:54.557: INFO: Got endpoints: latency-svc-xw4jc [2.926806716s]
Jan  9 12:51:54.597: INFO: Created: latency-svc-txrx2
Jan  9 12:51:54.663: INFO: Got endpoints: latency-svc-txrx2 [2.843379771s]
Jan  9 12:51:54.674: INFO: Created: latency-svc-lqsr6
Jan  9 12:51:54.695: INFO: Got endpoints: latency-svc-lqsr6 [2.56944909s]
Jan  9 12:51:55.329: INFO: Created: latency-svc-nsr42
Jan  9 12:51:55.431: INFO: Got endpoints: latency-svc-nsr42 [3.227117514s]
Jan  9 12:51:55.846: INFO: Created: latency-svc-m48lx
Jan  9 12:51:55.851: INFO: Got endpoints: latency-svc-m48lx [3.503639541s]
Jan  9 12:51:55.998: INFO: Created: latency-svc-shpjm
Jan  9 12:51:56.013: INFO: Got endpoints: latency-svc-shpjm [3.482578032s]
Jan  9 12:51:56.048: INFO: Created: latency-svc-7kcjz
Jan  9 12:51:56.171: INFO: Got endpoints: latency-svc-7kcjz [3.560561089s]
Jan  9 12:51:56.234: INFO: Created: latency-svc-k58rb
Jan  9 12:51:56.276: INFO: Got endpoints: latency-svc-k58rb [3.458645478s]
Jan  9 12:51:56.392: INFO: Created: latency-svc-6kcss
Jan  9 12:51:56.400: INFO: Got endpoints: latency-svc-6kcss [3.291260684s]
Jan  9 12:51:56.463: INFO: Created: latency-svc-cbhlg
Jan  9 12:51:56.598: INFO: Got endpoints: latency-svc-cbhlg [3.056163556s]
Jan  9 12:51:56.655: INFO: Created: latency-svc-8dzf4
Jan  9 12:51:56.801: INFO: Got endpoints: latency-svc-8dzf4 [3.014764036s]
Jan  9 12:51:56.842: INFO: Created: latency-svc-lwmxh
Jan  9 12:51:56.977: INFO: Got endpoints: latency-svc-lwmxh [3.027459119s]
Jan  9 12:51:56.985: INFO: Created: latency-svc-xc4qn
Jan  9 12:51:57.014: INFO: Got endpoints: latency-svc-xc4qn [3.002139169s]
Jan  9 12:51:57.230: INFO: Created: latency-svc-84khr
Jan  9 12:51:57.277: INFO: Created: latency-svc-l9b9c
Jan  9 12:51:57.279: INFO: Got endpoints: latency-svc-84khr [3.075309979s]
Jan  9 12:51:57.302: INFO: Got endpoints: latency-svc-l9b9c [2.968220633s]
Jan  9 12:51:57.473: INFO: Created: latency-svc-58fl9
Jan  9 12:51:57.493: INFO: Got endpoints: latency-svc-58fl9 [2.936203046s]
Jan  9 12:51:57.667: INFO: Created: latency-svc-lks59
Jan  9 12:51:57.674: INFO: Got endpoints: latency-svc-lks59 [3.010812747s]
Jan  9 12:51:57.853: INFO: Created: latency-svc-j6qw2
Jan  9 12:51:57.870: INFO: Got endpoints: latency-svc-j6qw2 [3.174545157s]
Jan  9 12:51:57.937: INFO: Created: latency-svc-nc2tx
Jan  9 12:51:58.050: INFO: Got endpoints: latency-svc-nc2tx [2.618479895s]
Jan  9 12:51:58.058: INFO: Created: latency-svc-4vkv6
Jan  9 12:51:58.086: INFO: Got endpoints: latency-svc-4vkv6 [2.235280066s]
Jan  9 12:51:58.165: INFO: Created: latency-svc-kwmw8
Jan  9 12:51:58.248: INFO: Got endpoints: latency-svc-kwmw8 [2.234819357s]
Jan  9 12:51:58.300: INFO: Created: latency-svc-55gm2
Jan  9 12:51:58.471: INFO: Got endpoints: latency-svc-55gm2 [2.300023476s]
Jan  9 12:51:58.477: INFO: Created: latency-svc-vhbtj
Jan  9 12:51:58.540: INFO: Got endpoints: latency-svc-vhbtj [2.263931314s]
Jan  9 12:51:58.707: INFO: Created: latency-svc-z76m9
Jan  9 12:51:58.722: INFO: Got endpoints: latency-svc-z76m9 [2.321833504s]
Jan  9 12:51:58.793: INFO: Created: latency-svc-ql5j6
Jan  9 12:51:58.873: INFO: Got endpoints: latency-svc-ql5j6 [2.274291361s]
Jan  9 12:51:59.054: INFO: Created: latency-svc-bcntz
Jan  9 12:51:59.089: INFO: Got endpoints: latency-svc-bcntz [2.288445596s]
Jan  9 12:51:59.222: INFO: Created: latency-svc-h946b
Jan  9 12:51:59.229: INFO: Got endpoints: latency-svc-h946b [2.252438825s]
Jan  9 12:51:59.302: INFO: Created: latency-svc-zgf49
Jan  9 12:51:59.480: INFO: Got endpoints: latency-svc-zgf49 [2.46504237s]
Jan  9 12:51:59.501: INFO: Created: latency-svc-qbw8n
Jan  9 12:51:59.532: INFO: Got endpoints: latency-svc-qbw8n [2.252986658s]
Jan  9 12:51:59.676: INFO: Created: latency-svc-gc5qc
Jan  9 12:51:59.694: INFO: Got endpoints: latency-svc-gc5qc [2.392045514s]
Jan  9 12:51:59.769: INFO: Created: latency-svc-jzg2x
Jan  9 12:51:59.936: INFO: Got endpoints: latency-svc-jzg2x [2.442646928s]
Jan  9 12:51:59.940: INFO: Created: latency-svc-gbxzd
Jan  9 12:52:00.062: INFO: Got endpoints: latency-svc-gbxzd [2.38785219s]
Jan  9 12:52:00.101: INFO: Created: latency-svc-wzt4r
Jan  9 12:52:00.106: INFO: Got endpoints: latency-svc-wzt4r [2.236022224s]
Jan  9 12:52:00.242: INFO: Created: latency-svc-ftgzh
Jan  9 12:52:00.259: INFO: Got endpoints: latency-svc-ftgzh [2.209360896s]
Jan  9 12:52:00.321: INFO: Created: latency-svc-7rt2t
Jan  9 12:52:00.415: INFO: Got endpoints: latency-svc-7rt2t [2.328765267s]
Jan  9 12:52:00.449: INFO: Created: latency-svc-nkhsr
Jan  9 12:52:00.597: INFO: Got endpoints: latency-svc-nkhsr [2.34865998s]
Jan  9 12:52:00.630: INFO: Created: latency-svc-hzkhv
Jan  9 12:52:00.671: INFO: Got endpoints: latency-svc-hzkhv [2.199802948s]
Jan  9 12:52:00.786: INFO: Created: latency-svc-h99rr
Jan  9 12:52:00.816: INFO: Got endpoints: latency-svc-h99rr [2.276270299s]
Jan  9 12:52:00.862: INFO: Created: latency-svc-rjcvd
Jan  9 12:52:01.017: INFO: Got endpoints: latency-svc-rjcvd [2.294718144s]
Jan  9 12:52:01.044: INFO: Created: latency-svc-5kzwn
Jan  9 12:52:01.078: INFO: Got endpoints: latency-svc-5kzwn [2.204433817s]
Jan  9 12:52:01.259: INFO: Created: latency-svc-lxpnb
Jan  9 12:52:01.266: INFO: Got endpoints: latency-svc-lxpnb [2.176871404s]
Jan  9 12:52:01.317: INFO: Created: latency-svc-wns47
Jan  9 12:52:01.336: INFO: Got endpoints: latency-svc-wns47 [2.106937143s]
Jan  9 12:52:01.493: INFO: Created: latency-svc-46qcg
Jan  9 12:52:01.505: INFO: Got endpoints: latency-svc-46qcg [2.025269668s]
Jan  9 12:52:01.614: INFO: Created: latency-svc-mcv7c
Jan  9 12:52:01.629: INFO: Got endpoints: latency-svc-mcv7c [2.09626405s]
Jan  9 12:52:01.812: INFO: Created: latency-svc-kz8l6
Jan  9 12:52:01.846: INFO: Got endpoints: latency-svc-kz8l6 [2.151635032s]
Jan  9 12:52:02.080: INFO: Created: latency-svc-8x8b9
Jan  9 12:52:02.113: INFO: Got endpoints: latency-svc-8x8b9 [2.176641814s]
Jan  9 12:52:02.314: INFO: Created: latency-svc-pmln6
Jan  9 12:52:02.342: INFO: Got endpoints: latency-svc-pmln6 [2.279895069s]
Jan  9 12:52:02.479: INFO: Created: latency-svc-clgbt
Jan  9 12:52:02.550: INFO: Got endpoints: latency-svc-clgbt [2.443346586s]
Jan  9 12:52:02.565: INFO: Created: latency-svc-4p5hk
Jan  9 12:52:02.681: INFO: Got endpoints: latency-svc-4p5hk [2.420959935s]
Jan  9 12:52:02.721: INFO: Created: latency-svc-wrrw6
Jan  9 12:52:02.766: INFO: Got endpoints: latency-svc-wrrw6 [2.350910984s]
Jan  9 12:52:03.011: INFO: Created: latency-svc-dl4m9
Jan  9 12:52:03.276: INFO: Created: latency-svc-mvmnw
Jan  9 12:52:03.328: INFO: Got endpoints: latency-svc-dl4m9 [2.730413787s]
Jan  9 12:52:03.554: INFO: Created: latency-svc-wlkbf
Jan  9 12:52:03.559: INFO: Got endpoints: latency-svc-wlkbf [2.743263909s]
Jan  9 12:52:03.723: INFO: Got endpoints: latency-svc-mvmnw [3.051795121s]
Jan  9 12:52:03.726: INFO: Created: latency-svc-vbtwp
Jan  9 12:52:03.813: INFO: Got endpoints: latency-svc-vbtwp [2.795519823s]
Jan  9 12:52:03.901: INFO: Created: latency-svc-kvk2v
Jan  9 12:52:03.930: INFO: Got endpoints: latency-svc-kvk2v [2.851919845s]
Jan  9 12:52:03.985: INFO: Created: latency-svc-jdn2m
Jan  9 12:52:04.085: INFO: Got endpoints: latency-svc-jdn2m [2.818566836s]
Jan  9 12:52:04.119: INFO: Created: latency-svc-gpnfb
Jan  9 12:52:04.138: INFO: Got endpoints: latency-svc-gpnfb [2.801857044s]
Jan  9 12:52:04.305: INFO: Created: latency-svc-q6jmk
Jan  9 12:52:04.441: INFO: Got endpoints: latency-svc-q6jmk [2.935635015s]
Jan  9 12:52:04.465: INFO: Created: latency-svc-89n54
Jan  9 12:52:04.527: INFO: Created: latency-svc-dgjq7
Jan  9 12:52:04.532: INFO: Got endpoints: latency-svc-89n54 [2.903093543s]
Jan  9 12:52:04.623: INFO: Got endpoints: latency-svc-dgjq7 [2.776870437s]
Jan  9 12:52:04.670: INFO: Created: latency-svc-fzwxk
Jan  9 12:52:04.683: INFO: Got endpoints: latency-svc-fzwxk [2.569106864s]
Jan  9 12:52:04.837: INFO: Created: latency-svc-c7xgh
Jan  9 12:52:04.862: INFO: Got endpoints: latency-svc-c7xgh [2.519960966s]
Jan  9 12:52:05.008: INFO: Created: latency-svc-xxxzq
Jan  9 12:52:05.034: INFO: Got endpoints: latency-svc-xxxzq [2.4843276s]
Jan  9 12:52:05.069: INFO: Created: latency-svc-jrgbk
Jan  9 12:52:05.082: INFO: Got endpoints: latency-svc-jrgbk [2.401670452s]
Jan  9 12:52:05.222: INFO: Created: latency-svc-shrmn
Jan  9 12:52:05.222: INFO: Got endpoints: latency-svc-shrmn [2.455966142s]
Jan  9 12:52:05.277: INFO: Created: latency-svc-n62dd
Jan  9 12:52:05.345: INFO: Got endpoints: latency-svc-n62dd [2.017517293s]
Jan  9 12:52:05.363: INFO: Created: latency-svc-snmwg
Jan  9 12:52:05.367: INFO: Got endpoints: latency-svc-snmwg [1.807738934s]
Jan  9 12:52:05.417: INFO: Created: latency-svc-hhcdf
Jan  9 12:52:05.431: INFO: Got endpoints: latency-svc-hhcdf [1.708282044s]
Jan  9 12:52:05.549: INFO: Created: latency-svc-g97v5
Jan  9 12:52:05.572: INFO: Got endpoints: latency-svc-g97v5 [1.758796026s]
Jan  9 12:52:05.625: INFO: Created: latency-svc-27xrn
Jan  9 12:52:05.758: INFO: Got endpoints: latency-svc-27xrn [1.827713106s]
Jan  9 12:52:05.771: INFO: Created: latency-svc-c8rp8
Jan  9 12:52:05.856: INFO: Got endpoints: latency-svc-c8rp8 [1.770203024s]
Jan  9 12:52:06.011: INFO: Created: latency-svc-fcxrc
Jan  9 12:52:06.061: INFO: Got endpoints: latency-svc-fcxrc [1.922406751s]
Jan  9 12:52:06.065: INFO: Created: latency-svc-t66rw
Jan  9 12:52:06.079: INFO: Got endpoints: latency-svc-t66rw [1.63814486s]
Jan  9 12:52:06.239: INFO: Created: latency-svc-gqb9n
Jan  9 12:52:06.264: INFO: Got endpoints: latency-svc-gqb9n [1.731766158s]
Jan  9 12:52:06.421: INFO: Created: latency-svc-vz2hd
Jan  9 12:52:06.444: INFO: Got endpoints: latency-svc-vz2hd [1.820782552s]
Jan  9 12:52:06.597: INFO: Created: latency-svc-g8gwx
Jan  9 12:52:06.602: INFO: Got endpoints: latency-svc-g8gwx [1.919447998s]
Jan  9 12:52:06.788: INFO: Created: latency-svc-6hs67
Jan  9 12:52:06.800: INFO: Got endpoints: latency-svc-6hs67 [1.937202983s]
Jan  9 12:52:06.882: INFO: Created: latency-svc-fbbvk
Jan  9 12:52:07.000: INFO: Got endpoints: latency-svc-fbbvk [1.965774098s]
Jan  9 12:52:07.024: INFO: Created: latency-svc-c7kbz
Jan  9 12:52:07.053: INFO: Got endpoints: latency-svc-c7kbz [1.970178165s]
Jan  9 12:52:07.190: INFO: Created: latency-svc-8klbd
Jan  9 12:52:07.202: INFO: Got endpoints: latency-svc-8klbd [1.979793032s]
Jan  9 12:52:07.279: INFO: Created: latency-svc-ql6w6
Jan  9 12:52:07.346: INFO: Got endpoints: latency-svc-ql6w6 [2.000144489s]
Jan  9 12:52:07.428: INFO: Created: latency-svc-lmb48
Jan  9 12:52:07.538: INFO: Got endpoints: latency-svc-lmb48 [2.171064685s]
Jan  9 12:52:07.595: INFO: Created: latency-svc-szqkc
Jan  9 12:52:07.758: INFO: Got endpoints: latency-svc-szqkc [2.32600351s]
Jan  9 12:52:07.782: INFO: Created: latency-svc-2s9pt
Jan  9 12:52:07.805: INFO: Got endpoints: latency-svc-2s9pt [2.232412759s]
Jan  9 12:52:07.978: INFO: Created: latency-svc-4nn5d
Jan  9 12:52:07.999: INFO: Got endpoints: latency-svc-4nn5d [2.240834854s]
Jan  9 12:52:08.156: INFO: Created: latency-svc-mjjw4
Jan  9 12:52:08.187: INFO: Got endpoints: latency-svc-mjjw4 [2.330823497s]
Jan  9 12:52:08.327: INFO: Created: latency-svc-gg5kr
Jan  9 12:52:08.338: INFO: Got endpoints: latency-svc-gg5kr [2.277253767s]
Jan  9 12:52:08.531: INFO: Created: latency-svc-9gj7h
Jan  9 12:52:08.576: INFO: Got endpoints: latency-svc-9gj7h [2.496920907s]
Jan  9 12:52:08.703: INFO: Created: latency-svc-p992d
Jan  9 12:52:08.718: INFO: Got endpoints: latency-svc-p992d [2.454487752s]
Jan  9 12:52:09.714: INFO: Created: latency-svc-jmv6t
Jan  9 12:52:09.724: INFO: Got endpoints: latency-svc-jmv6t [3.280241541s]
Jan  9 12:52:09.804: INFO: Created: latency-svc-xc96z
Jan  9 12:52:09.958: INFO: Got endpoints: latency-svc-xc96z [3.355849162s]
Jan  9 12:52:10.016: INFO: Created: latency-svc-7w8td
Jan  9 12:52:10.016: INFO: Got endpoints: latency-svc-7w8td [3.216698038s]
Jan  9 12:52:10.146: INFO: Created: latency-svc-jkcvt
Jan  9 12:52:10.155: INFO: Got endpoints: latency-svc-jkcvt [3.155121824s]
Jan  9 12:52:10.310: INFO: Created: latency-svc-bl7rz
Jan  9 12:52:10.319: INFO: Got endpoints: latency-svc-bl7rz [3.266597946s]
Jan  9 12:52:10.365: INFO: Created: latency-svc-lcd2m
Jan  9 12:52:10.380: INFO: Got endpoints: latency-svc-lcd2m [3.177978998s]
Jan  9 12:52:10.492: INFO: Created: latency-svc-9l2kt
Jan  9 12:52:10.498: INFO: Got endpoints: latency-svc-9l2kt [3.151715245s]
Jan  9 12:52:10.551: INFO: Created: latency-svc-m64bw
Jan  9 12:52:10.565: INFO: Got endpoints: latency-svc-m64bw [3.026324116s]
Jan  9 12:52:10.673: INFO: Created: latency-svc-xx6v9
Jan  9 12:52:10.693: INFO: Got endpoints: latency-svc-xx6v9 [2.935110308s]
Jan  9 12:52:10.737: INFO: Created: latency-svc-7hh2k
Jan  9 12:52:10.863: INFO: Got endpoints: latency-svc-7hh2k [3.058560794s]
Jan  9 12:52:10.888: INFO: Created: latency-svc-s7r7l
Jan  9 12:52:11.155: INFO: Got endpoints: latency-svc-s7r7l [3.156088212s]
Jan  9 12:52:11.186: INFO: Created: latency-svc-pzhfp
Jan  9 12:52:11.234: INFO: Got endpoints: latency-svc-pzhfp [3.046746528s]
Jan  9 12:52:11.974: INFO: Created: latency-svc-fplpm
Jan  9 12:52:12.009: INFO: Got endpoints: latency-svc-fplpm [3.670124449s]
Jan  9 12:52:12.847: INFO: Created: latency-svc-rx747
Jan  9 12:52:12.888: INFO: Got endpoints: latency-svc-rx747 [4.3111904s]
Jan  9 12:52:13.133: INFO: Created: latency-svc-vwtpw
Jan  9 12:52:13.158: INFO: Got endpoints: latency-svc-vwtpw [4.439197559s]
Jan  9 12:52:13.216: INFO: Created: latency-svc-d56vc
Jan  9 12:52:13.302: INFO: Got endpoints: latency-svc-d56vc [3.57788024s]
Jan  9 12:52:13.336: INFO: Created: latency-svc-d8882
Jan  9 12:52:13.344: INFO: Got endpoints: latency-svc-d8882 [3.386096383s]
Jan  9 12:52:13.572: INFO: Created: latency-svc-5jv2q
Jan  9 12:52:14.290: INFO: Got endpoints: latency-svc-5jv2q [4.27314108s]
Jan  9 12:52:14.303: INFO: Created: latency-svc-4r79b
Jan  9 12:52:14.329: INFO: Got endpoints: latency-svc-4r79b [4.173141256s]
Jan  9 12:52:14.605: INFO: Created: latency-svc-nwgrs
Jan  9 12:52:14.649: INFO: Got endpoints: latency-svc-nwgrs [4.329825208s]
Jan  9 12:52:14.809: INFO: Created: latency-svc-zrzp8
Jan  9 12:52:14.882: INFO: Created: latency-svc-7h6mw
Jan  9 12:52:14.882: INFO: Got endpoints: latency-svc-zrzp8 [4.501862857s]
Jan  9 12:52:15.010: INFO: Got endpoints: latency-svc-7h6mw [4.512691159s]
Jan  9 12:52:15.016: INFO: Created: latency-svc-8rkmq
Jan  9 12:52:15.055: INFO: Got endpoints: latency-svc-8rkmq [4.489553441s]
Jan  9 12:52:15.082: INFO: Created: latency-svc-bzl2n
Jan  9 12:52:15.086: INFO: Got endpoints: latency-svc-bzl2n [4.392593325s]
Jan  9 12:52:15.175: INFO: Created: latency-svc-6hmq6
Jan  9 12:52:15.196: INFO: Got endpoints: latency-svc-6hmq6 [4.33221817s]
Jan  9 12:52:15.263: INFO: Created: latency-svc-9znrv
Jan  9 12:52:15.343: INFO: Created: latency-svc-rllt8
Jan  9 12:52:15.351: INFO: Got endpoints: latency-svc-9znrv [4.196257278s]
Jan  9 12:52:15.352: INFO: Got endpoints: latency-svc-rllt8 [4.11808453s]
Jan  9 12:52:15.415: INFO: Created: latency-svc-scmh8
Jan  9 12:52:15.484: INFO: Got endpoints: latency-svc-scmh8 [3.474818327s]
Jan  9 12:52:15.498: INFO: Created: latency-svc-zckvd
Jan  9 12:52:15.511: INFO: Got endpoints: latency-svc-zckvd [2.622996495s]
Jan  9 12:52:15.556: INFO: Created: latency-svc-r25lk
Jan  9 12:52:15.564: INFO: Got endpoints: latency-svc-r25lk [2.406373485s]
Jan  9 12:52:15.641: INFO: Created: latency-svc-4n4wt
Jan  9 12:52:15.663: INFO: Got endpoints: latency-svc-4n4wt [2.360699013s]
Jan  9 12:52:15.750: INFO: Created: latency-svc-n54qb
Jan  9 12:52:15.832: INFO: Got endpoints: latency-svc-n54qb [2.487428177s]
Jan  9 12:52:15.862: INFO: Created: latency-svc-stftr
Jan  9 12:52:15.874: INFO: Got endpoints: latency-svc-stftr [1.584442453s]
Jan  9 12:52:16.079: INFO: Created: latency-svc-47b9w
Jan  9 12:52:16.103: INFO: Got endpoints: latency-svc-47b9w [1.774154757s]
Jan  9 12:52:16.184: INFO: Created: latency-svc-rhfws
Jan  9 12:52:16.293: INFO: Got endpoints: latency-svc-rhfws [1.643651353s]
Jan  9 12:52:16.397: INFO: Created: latency-svc-4xw4x
Jan  9 12:52:16.470: INFO: Got endpoints: latency-svc-4xw4x [1.587485103s]
Jan  9 12:52:16.493: INFO: Created: latency-svc-hd6l6
Jan  9 12:52:16.522: INFO: Got endpoints: latency-svc-hd6l6 [1.511795182s]
Jan  9 12:52:16.682: INFO: Created: latency-svc-tnm6z
Jan  9 12:52:16.702: INFO: Got endpoints: latency-svc-tnm6z [1.64778322s]
Jan  9 12:52:16.754: INFO: Created: latency-svc-k4tvq
Jan  9 12:52:16.759: INFO: Got endpoints: latency-svc-k4tvq [1.673363s]
Jan  9 12:52:16.858: INFO: Created: latency-svc-hzrh8
Jan  9 12:52:16.880: INFO: Got endpoints: latency-svc-hzrh8 [1.683594543s]
Jan  9 12:52:17.049: INFO: Created: latency-svc-82bjb
Jan  9 12:52:17.067: INFO: Got endpoints: latency-svc-82bjb [1.715658301s]
Jan  9 12:52:17.156: INFO: Created: latency-svc-sxlhb
Jan  9 12:52:17.204: INFO: Got endpoints: latency-svc-sxlhb [1.852117028s]
Jan  9 12:52:17.249: INFO: Created: latency-svc-fcr5v
Jan  9 12:52:17.264: INFO: Got endpoints: latency-svc-fcr5v [1.78031543s]
Jan  9 12:52:17.418: INFO: Created: latency-svc-47db6
Jan  9 12:52:17.418: INFO: Got endpoints: latency-svc-47db6 [1.907091345s]
Jan  9 12:52:17.452: INFO: Created: latency-svc-l46nn
Jan  9 12:52:17.454: INFO: Got endpoints: latency-svc-l46nn [1.889486189s]
Jan  9 12:52:17.661: INFO: Created: latency-svc-gskcw
Jan  9 12:52:17.835: INFO: Got endpoints: latency-svc-gskcw [2.17209117s]
Jan  9 12:52:17.863: INFO: Created: latency-svc-pnmqm
Jan  9 12:52:17.888: INFO: Got endpoints: latency-svc-pnmqm [2.055831885s]
Jan  9 12:52:18.060: INFO: Created: latency-svc-7j9vs
Jan  9 12:52:18.097: INFO: Got endpoints: latency-svc-7j9vs [2.222800985s]
Jan  9 12:52:18.098: INFO: Latencies: [343.715486ms 431.747269ms 509.212758ms 707.209075ms 873.44424ms 1.099028031s 1.276958361s 1.34062177s 1.511795182s 1.571709711s 1.584442453s 1.587485103s 1.63814486s 1.643651353s 1.64778322s 1.673363s 1.683594543s 1.708282044s 1.715658301s 1.731766158s 1.752037817s 1.758796026s 1.770203024s 1.774154757s 1.78031543s 1.807738934s 1.820782552s 1.827713106s 1.852117028s 1.889486189s 1.907091345s 1.919447998s 1.922406751s 1.93396847s 1.937202983s 1.962514418s 1.965774098s 1.970178165s 1.979793032s 2.000144489s 2.012695511s 2.017517293s 2.025269668s 2.055831885s 2.083638635s 2.09626405s 2.106937143s 2.118028799s 2.133846802s 2.140485325s 2.151635032s 2.154878525s 2.157819778s 2.166234542s 2.171064685s 2.171246548s 2.17209117s 2.176641814s 2.176871404s 2.1864274s 2.195109521s 2.199802948s 2.204433817s 2.209360896s 2.222800985s 2.232412759s 2.234819357s 2.235280066s 2.236022224s 2.24044232s 2.240834854s 2.252438825s 2.252986658s 2.263931314s 2.265454714s 2.274291361s 2.276270299s 2.277253767s 2.279895069s 2.288445596s 2.294718144s 2.300023476s 2.321833504s 2.32600351s 2.328598193s 2.328765267s 2.330823497s 2.34865998s 2.350910984s 2.357025064s 2.360699013s 2.38785219s 2.392045514s 2.401670452s 2.406373485s 2.420959935s 2.438342112s 2.442646928s 2.443346586s 2.454487752s 2.455966142s 2.46504237s 2.466411746s 2.4843276s 2.484755707s 2.487428177s 2.496920907s 2.500528412s 2.519960966s 2.569106864s 2.56944909s 2.618479895s 2.622996495s 2.663170735s 2.6639973s 2.685991603s 2.729133764s 2.730413787s 2.743263909s 2.776870437s 2.795519823s 2.801857044s 2.818566836s 2.825573601s 2.836475752s 2.843379771s 2.851919845s 2.864661355s 2.871910341s 2.883371417s 2.887025363s 2.903093543s 2.921547566s 2.926806716s 2.935110308s 2.935635015s 2.936203046s 2.944059867s 2.965904889s 2.968220633s 2.986397992s 2.995354736s 2.998153516s 3.002139169s 3.010812747s 3.011496745s 3.014764036s 3.017473237s 3.023035328s 3.026324116s 3.027459119s 3.029444997s 3.04579313s 3.046746528s 3.051795121s 3.056163556s 3.058560794s 3.075309979s 3.079320734s 3.097453962s 3.09779101s 3.101375129s 3.151715245s 3.155121824s 3.156088212s 3.174545157s 3.175004789s 3.176894134s 3.177978998s 3.216698038s 3.227117514s 3.266597946s 3.276840543s 3.280241541s 3.291260684s 3.301747448s 3.351388351s 3.355849162s 3.386096383s 3.400171797s 3.458645478s 3.474818327s 3.482578032s 3.503639541s 3.532154572s 3.560561089s 3.57788024s 3.670124449s 4.11808453s 4.173141256s 4.196257278s 4.27314108s 4.3111904s 4.329825208s 4.33221817s 4.392593325s 4.439197559s 4.489553441s 4.501862857s 4.512691159s]
Jan  9 12:52:18.099: INFO: 50 %ile: 2.455966142s
Jan  9 12:52:18.099: INFO: 90 %ile: 3.458645478s
Jan  9 12:52:18.099: INFO: 99 %ile: 4.501862857s
Jan  9 12:52:18.099: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:52:18.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-4mjwr" for this suite.
Jan  9 12:53:14.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:53:14.265: INFO: namespace: e2e-tests-svc-latency-4mjwr, resource: bindings, ignored listing per whitelist
Jan  9 12:53:14.398: INFO: namespace e2e-tests-svc-latency-4mjwr deletion completed in 56.285604447s

• [SLOW TEST:100.200 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:53:14.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-0014995f-32df-11ea-ac2d-0242ac110005
STEP: Creating secret with name s-test-opt-upd-00149b3e-32df-11ea-ac2d-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0014995f-32df-11ea-ac2d-0242ac110005
STEP: Updating secret s-test-opt-upd-00149b3e-32df-11ea-ac2d-0242ac110005
STEP: Creating secret with name s-test-opt-create-00149b9e-32df-11ea-ac2d-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:53:31.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nzk2j" for this suite.
Jan  9 12:53:55.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:53:55.385: INFO: namespace: e2e-tests-projected-nzk2j, resource: bindings, ignored listing per whitelist
Jan  9 12:53:55.393: INFO: namespace e2e-tests-projected-nzk2j deletion completed in 24.185218684s

• [SLOW TEST:40.994 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:53:55.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  9 12:53:55.524: INFO: Waiting up to 5m0s for pod "pod-184f2968-32df-11ea-ac2d-0242ac110005" in namespace "e2e-tests-emptydir-4tgb7" to be "success or failure"
Jan  9 12:53:55.615: INFO: Pod "pod-184f2968-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 91.455256ms
Jan  9 12:53:57.641: INFO: Pod "pod-184f2968-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1174906s
Jan  9 12:53:59.651: INFO: Pod "pod-184f2968-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12780652s
Jan  9 12:54:01.672: INFO: Pod "pod-184f2968-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148387837s
Jan  9 12:54:03.686: INFO: Pod "pod-184f2968-32df-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.161997947s
STEP: Saw pod success
Jan  9 12:54:03.686: INFO: Pod "pod-184f2968-32df-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:54:03.695: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-184f2968-32df-11ea-ac2d-0242ac110005 container test-container: 
STEP: delete the pod
Jan  9 12:54:03.814: INFO: Waiting for pod pod-184f2968-32df-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:54:03.829: INFO: Pod pod-184f2968-32df-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:54:03.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4tgb7" for this suite.
Jan  9 12:54:10.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:54:10.398: INFO: namespace: e2e-tests-emptydir-4tgb7, resource: bindings, ignored listing per whitelist
Jan  9 12:54:10.451: INFO: namespace e2e-tests-emptydir-4tgb7 deletion completed in 6.594653185s

• [SLOW TEST:15.058 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:54:10.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  9 12:54:10.682: INFO: Waiting up to 5m0s for pod "pod-2158170a-32df-11ea-ac2d-0242ac110005" in namespace "e2e-tests-emptydir-k287c" to be "success or failure"
Jan  9 12:54:10.692: INFO: Pod "pod-2158170a-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.492604ms
Jan  9 12:54:12.712: INFO: Pod "pod-2158170a-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030500568s
Jan  9 12:54:14.739: INFO: Pod "pod-2158170a-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05722746s
Jan  9 12:54:16.971: INFO: Pod "pod-2158170a-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.289267007s
Jan  9 12:54:19.455: INFO: Pod "pod-2158170a-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.772930132s
Jan  9 12:54:21.469: INFO: Pod "pod-2158170a-32df-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.787543856s
STEP: Saw pod success
Jan  9 12:54:21.469: INFO: Pod "pod-2158170a-32df-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:54:21.475: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2158170a-32df-11ea-ac2d-0242ac110005 container test-container: 
STEP: delete the pod
Jan  9 12:54:22.054: INFO: Waiting for pod pod-2158170a-32df-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:54:22.285: INFO: Pod pod-2158170a-32df-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:54:22.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k287c" for this suite.
Jan  9 12:54:28.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:54:28.429: INFO: namespace: e2e-tests-emptydir-k287c, resource: bindings, ignored listing per whitelist
Jan  9 12:54:28.844: INFO: namespace e2e-tests-emptydir-k287c deletion completed in 6.535222327s

• [SLOW TEST:18.392 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:54:28.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  9 12:54:29.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c4d9347-32df-11ea-ac2d-0242ac110005" in namespace "e2e-tests-projected-z5jg6" to be "success or failure"
Jan  9 12:54:29.206: INFO: Pod "downwardapi-volume-2c4d9347-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 137.763164ms
Jan  9 12:54:31.337: INFO: Pod "downwardapi-volume-2c4d9347-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268703383s
Jan  9 12:54:33.353: INFO: Pod "downwardapi-volume-2c4d9347-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.284910027s
Jan  9 12:54:35.600: INFO: Pod "downwardapi-volume-2c4d9347-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.531030578s
Jan  9 12:54:37.613: INFO: Pod "downwardapi-volume-2c4d9347-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544309775s
Jan  9 12:54:39.637: INFO: Pod "downwardapi-volume-2c4d9347-32df-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.568176895s
STEP: Saw pod success
Jan  9 12:54:39.637: INFO: Pod "downwardapi-volume-2c4d9347-32df-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:54:39.647: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2c4d9347-32df-11ea-ac2d-0242ac110005 container client-container: 
STEP: delete the pod
Jan  9 12:54:39.833: INFO: Waiting for pod downwardapi-volume-2c4d9347-32df-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:54:39.851: INFO: Pod downwardapi-volume-2c4d9347-32df-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:54:39.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z5jg6" for this suite.
Jan  9 12:54:47.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:54:47.959: INFO: namespace: e2e-tests-projected-z5jg6, resource: bindings, ignored listing per whitelist
Jan  9 12:54:48.024: INFO: namespace e2e-tests-projected-z5jg6 deletion completed in 8.158668776s

• [SLOW TEST:19.179 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:54:48.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan  9 12:55:06.964: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-37e57c33-32df-11ea-ac2d-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-xbktd", SelfLink:"/api/v1/namespaces/e2e-tests-pods-xbktd/pods/pod-submit-remove-37e57c33-32df-11ea-ac2d-0242ac110005", UID:"37eebccf-32df-11ea-a994-fa163e34d433", ResourceVersion:"17704864", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714171288, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"510385827"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-wvn9f", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0025e0440), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wvn9f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0025ec2b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002688120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0025ec2f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0025ec310)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0025ec318), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0025ec31c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714171289, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714171306, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714171306, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714171288, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc00228a4e0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00228a500), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://4f2a41615aea1de9c660845b7a75174d153e0de54392f91fae331b8491446139"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:55:22.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-xbktd" for this suite.
Jan  9 12:55:28.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:55:28.809: INFO: namespace: e2e-tests-pods-xbktd, resource: bindings, ignored listing per whitelist
Jan  9 12:55:28.903: INFO: namespace e2e-tests-pods-xbktd deletion completed in 6.20653278s

• [SLOW TEST:40.878 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:55:28.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan  9 12:55:29.733: INFO: created pod pod-service-account-defaultsa
Jan  9 12:55:29.733: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan  9 12:55:29.827: INFO: created pod pod-service-account-mountsa
Jan  9 12:55:29.828: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan  9 12:55:29.924: INFO: created pod pod-service-account-nomountsa
Jan  9 12:55:29.924: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan  9 12:55:30.102: INFO: created pod pod-service-account-defaultsa-mountspec
Jan  9 12:55:30.103: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan  9 12:55:30.552: INFO: created pod pod-service-account-mountsa-mountspec
Jan  9 12:55:30.553: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan  9 12:55:30.586: INFO: created pod pod-service-account-nomountsa-mountspec
Jan  9 12:55:30.586: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan  9 12:55:31.340: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan  9 12:55:31.340: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan  9 12:55:32.735: INFO: created pod pod-service-account-mountsa-nomountspec
Jan  9 12:55:32.736: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan  9 12:55:33.482: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan  9 12:55:33.482: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:55:33.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-nrt8x" for this suite.
Jan  9 12:56:02.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:56:02.184: INFO: namespace: e2e-tests-svcaccounts-nrt8x, resource: bindings, ignored listing per whitelist
Jan  9 12:56:02.221: INFO: namespace e2e-tests-svcaccounts-nrt8x deletion completed in 27.781264218s

• [SLOW TEST:33.319 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
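The automount test above crosses three ServiceAccount settings with three pod-level settings: the pod's `spec.automountServiceAccountToken` wins when set, otherwise the ServiceAccount's `automountServiceAccountToken` applies, otherwise the token is mounted. A minimal sketch of opting out at the pod level (the pod name and image are illustrative, not taken from the test):

```yaml
# Pod-level opt-out overrides whatever the ServiceAccount specifies.
apiVersion: v1
kind: Pod
metadata:
  name: pod-no-token            # illustrative name
spec:
  serviceAccountName: default
  automountServiceAccountToken: false   # no token volume is mounted
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
```

This corresponds to the `nomountspec` pods in the log, which report `service account token volume mount: false` regardless of their ServiceAccount.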
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:56:02.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  9 12:56:02.419: INFO: Waiting up to 5m0s for pod "pod-63f0cb9c-32df-11ea-ac2d-0242ac110005" in namespace "e2e-tests-emptydir-44dnn" to be "success or failure"
Jan  9 12:56:02.448: INFO: Pod "pod-63f0cb9c-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.934527ms
Jan  9 12:56:04.641: INFO: Pod "pod-63f0cb9c-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221799865s
Jan  9 12:56:06.672: INFO: Pod "pod-63f0cb9c-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253186735s
Jan  9 12:56:09.931: INFO: Pod "pod-63f0cb9c-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.511586267s
Jan  9 12:56:11.962: INFO: Pod "pod-63f0cb9c-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.5425084s
Jan  9 12:56:13.975: INFO: Pod "pod-63f0cb9c-32df-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.556187554s
Jan  9 12:56:16.928: INFO: Pod "pod-63f0cb9c-32df-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.508547666s
STEP: Saw pod success
Jan  9 12:56:16.928: INFO: Pod "pod-63f0cb9c-32df-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 12:56:16.959: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-63f0cb9c-32df-11ea-ac2d-0242ac110005 container test-container: 
STEP: delete the pod
Jan  9 12:56:17.575: INFO: Waiting for pod pod-63f0cb9c-32df-11ea-ac2d-0242ac110005 to disappear
Jan  9 12:56:17.609: INFO: Pod pod-63f0cb9c-32df-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:56:17.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-44dnn" for this suite.
Jan  9 12:56:25.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:56:25.875: INFO: namespace: e2e-tests-emptydir-44dnn, resource: bindings, ignored listing per whitelist
Jan  9 12:56:25.998: INFO: namespace e2e-tests-emptydir-44dnn deletion completed in 8.376989649s

• [SLOW TEST:23.776 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
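The emptyDir test runs a short-lived pod that writes a file with mode 0644 into a default-medium (node disk) emptyDir as a non-root user, then verifies the result; "success or failure" means the container exits 0. A sketch of the general shape of such a pod (image, user ID, and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo           # illustrative name
spec:
  securityContext:
    runAsUser: 1001             # non-root, as in the (non-root,0644,default) variant
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}                # default medium: backed by node storage, not tmpfs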
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:56:25.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  9 12:56:26.412: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:56:53.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-lh2ft" for this suite.
Jan  9 12:57:17.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:57:17.481: INFO: namespace: e2e-tests-init-container-lh2ft, resource: bindings, ignored listing per whitelist
Jan  9 12:57:17.565: INFO: namespace e2e-tests-init-container-lh2ft deletion completed in 24.249483373s

• [SLOW TEST:51.567 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
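The init-container test relies on the kubelet running `spec.initContainers` to completion, in order, before starting the app containers, even when `restartPolicy: Always`. A sketch of the pod shape involved (names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo               # illustrative name
spec:
  restartPolicy: Always
  initContainers:               # each must exit 0, in order, before "app" starts
  - name: init-1
    image: busybox
    command: ["true"]
  - name: init-2
    image: busybox
    command: ["true"]
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
```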
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:57:17.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan  9 12:57:17.728: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  9 12:57:17.744: INFO: Waiting for terminating namespaces to be deleted...
Jan  9 12:57:17.779: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan  9 12:57:17.801: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  9 12:57:17.802: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  9 12:57:17.802: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  9 12:57:17.802: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  9 12:57:17.802: INFO: 	Container coredns ready: true, restart count 0
Jan  9 12:57:17.802: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan  9 12:57:17.802: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  9 12:57:17.802: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan  9 12:57:17.802: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan  9 12:57:17.802: INFO: 	Container weave ready: true, restart count 0
Jan  9 12:57:17.802: INFO: 	Container weave-npc ready: true, restart count 0
Jan  9 12:57:17.802: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan  9 12:57:17.802: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e83944fe9cb91b], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:57:18.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-896bh" for this suite.
Jan  9 12:57:24.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:57:25.068: INFO: namespace: e2e-tests-sched-pred-896bh, resource: bindings, ignored listing per whitelist
Jan  9 12:57:25.181: INFO: namespace e2e-tests-sched-pred-896bh deletion completed in 6.244976509s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.615 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
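The FailedScheduling event in the log is exactly what the test expects: with a non-empty `nodeSelector` that no node satisfies, the pod stays Pending and the scheduler records "0/1 nodes are available: 1 node(s) didn't match node selector." A sketch of a pod that cannot schedule on the single-node cluster above (label key/value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    disktype: does-not-exist    # no node carries this label, so scheduling fails
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
```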
SSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:57:25.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:57:40.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-c2nsc" for this suite.
Jan  9 12:58:06.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:58:06.728: INFO: namespace: e2e-tests-replication-controller-c2nsc, resource: bindings, ignored listing per whitelist
Jan  9 12:58:06.782: INFO: namespace e2e-tests-replication-controller-c2nsc deletion completed in 26.198421461s

• [SLOW TEST:41.600 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
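Adoption works because the ReplicationController's `selector` matches the pre-existing pod's labels: instead of creating a new replica, the controller sets itself as the pod's `ownerReference`. A sketch of the pair involved (images are illustrative; the `pod-adoption` name comes from the STEP text above):

```yaml
# An orphan pod carrying the label...
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
---
# ...is adopted by an RC whose selector matches it (replicas: 1 is
# already satisfied, so no new pod is created).
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
```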
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:58:06.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan  9 12:58:07.004: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan  9 12:58:07.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pdzm6'
Jan  9 12:58:09.711: INFO: stderr: ""
Jan  9 12:58:09.711: INFO: stdout: "service/redis-slave created\n"
Jan  9 12:58:09.712: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan  9 12:58:09.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pdzm6'
Jan  9 12:58:10.171: INFO: stderr: ""
Jan  9 12:58:10.171: INFO: stdout: "service/redis-master created\n"
Jan  9 12:58:10.171: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  9 12:58:10.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pdzm6'
Jan  9 12:58:10.730: INFO: stderr: ""
Jan  9 12:58:10.730: INFO: stdout: "service/frontend created\n"
Jan  9 12:58:10.731: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan  9 12:58:10.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pdzm6'
Jan  9 12:58:11.148: INFO: stderr: ""
Jan  9 12:58:11.148: INFO: stdout: "deployment.extensions/frontend created\n"
Jan  9 12:58:11.149: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  9 12:58:11.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pdzm6'
Jan  9 12:58:11.716: INFO: stderr: ""
Jan  9 12:58:11.716: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan  9 12:58:11.717: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan  9 12:58:11.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pdzm6'
Jan  9 12:58:12.212: INFO: stderr: ""
Jan  9 12:58:12.212: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan  9 12:58:12.212: INFO: Waiting for all frontend pods to be Running.
Jan  9 12:58:42.264: INFO: Waiting for frontend to serve content.
Jan  9 12:58:42.625: INFO: Trying to add a new entry to the guestbook.
Jan  9 12:58:42.721: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan  9 12:58:42.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pdzm6'
Jan  9 12:58:43.007: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  9 12:58:43.008: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  9 12:58:43.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pdzm6'
Jan  9 12:58:43.271: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  9 12:58:43.271: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  9 12:58:43.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pdzm6'
Jan  9 12:58:43.470: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  9 12:58:43.470: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  9 12:58:43.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pdzm6'
Jan  9 12:58:43.598: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  9 12:58:43.598: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  9 12:58:43.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pdzm6'
Jan  9 12:58:43.984: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  9 12:58:43.984: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  9 12:58:43.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pdzm6'
Jan  9 12:58:44.308: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  9 12:58:44.308: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 12:58:44.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pdzm6" for this suite.
Jan  9 12:59:34.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 12:59:34.509: INFO: namespace: e2e-tests-kubectl-pdzm6, resource: bindings, ignored listing per whitelist
Jan  9 12:59:34.596: INFO: namespace e2e-tests-kubectl-pdzm6 deletion completed in 50.273744083s

• [SLOW TEST:87.814 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 12:59:34.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-gpqp
STEP: Creating a pod to test atomic-volume-subpath
Jan  9 12:59:34.967: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-gpqp" in namespace "e2e-tests-subpath-dkh4w" to be "success or failure"
Jan  9 12:59:34.992: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 24.855498ms
Jan  9 12:59:37.014: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047084525s
Jan  9 12:59:39.022: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05439111s
Jan  9 12:59:41.174: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206847055s
Jan  9 12:59:43.188: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.220509046s
Jan  9 12:59:45.212: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.244687132s
Jan  9 12:59:47.226: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.258893691s
Jan  9 12:59:49.474: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.506905912s
Jan  9 12:59:51.549: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Running", Reason="", readiness=false. Elapsed: 16.5822128s
Jan  9 12:59:53.568: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Running", Reason="", readiness=false. Elapsed: 18.600364209s
Jan  9 12:59:55.593: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Running", Reason="", readiness=false. Elapsed: 20.625917878s
Jan  9 12:59:57.608: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Running", Reason="", readiness=false. Elapsed: 22.641283078s
Jan  9 12:59:59.627: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Running", Reason="", readiness=false. Elapsed: 24.659922076s
Jan  9 13:00:01.642: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Running", Reason="", readiness=false. Elapsed: 26.674580197s
Jan  9 13:00:03.657: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Running", Reason="", readiness=false. Elapsed: 28.689455513s
Jan  9 13:00:05.670: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Running", Reason="", readiness=false. Elapsed: 30.70269767s
Jan  9 13:00:07.688: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Running", Reason="", readiness=false. Elapsed: 32.720509832s
Jan  9 13:00:09.706: INFO: Pod "pod-subpath-test-secret-gpqp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.738570663s
STEP: Saw pod success
Jan  9 13:00:09.706: INFO: Pod "pod-subpath-test-secret-gpqp" satisfied condition "success or failure"
Jan  9 13:00:09.712: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-gpqp container test-container-subpath-secret-gpqp: 
STEP: delete the pod
Jan  9 13:00:09.941: INFO: Waiting for pod pod-subpath-test-secret-gpqp to disappear
Jan  9 13:00:09.956: INFO: Pod pod-subpath-test-secret-gpqp no longer exists
STEP: Deleting pod pod-subpath-test-secret-gpqp
Jan  9 13:00:09.956: INFO: Deleting pod "pod-subpath-test-secret-gpqp" in namespace "e2e-tests-subpath-dkh4w"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 13:00:09.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-dkh4w" for this suite.
Jan  9 13:00:18.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 13:00:18.514: INFO: namespace: e2e-tests-subpath-dkh4w, resource: bindings, ignored listing per whitelist
Jan  9 13:00:18.575: INFO: namespace e2e-tests-subpath-dkh4w deletion completed in 8.574411766s

• [SLOW TEST:43.978 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
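The subpath test mounts a single entry of an atomically-updated secret volume via `subPath`, rather than the whole volume directory. A sketch of the pod shape (secret name, key, and image are illustrative, not the test's actual manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-secret-demo # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /mnt/key"]
    volumeMounts:
    - name: secret-vol
      mountPath: /mnt/key
      subPath: key              # mount one key of the secret, not the directory
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret     # assumed to exist with a "key" entry
```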
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 13:00:18.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  9 13:00:18.904: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan  9 13:00:18.913: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-f5ftt/daemonsets","resourceVersion":"17705681"},"items":null}

Jan  9 13:00:18.916: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-f5ftt/pods","resourceVersion":"17705681"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 13:00:18.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-f5ftt" for this suite.
Jan  9 13:00:25.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 13:00:25.205: INFO: namespace: e2e-tests-daemonsets-f5ftt, resource: bindings, ignored listing per whitelist
Jan  9 13:00:25.280: INFO: namespace e2e-tests-daemonsets-f5ftt deletion completed in 6.240860886s

S [SKIPPING] [6.704 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan  9 13:00:18.904: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 13:00:25.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  9 13:00:57.760: INFO: Container started at 2020-01-09 13:00:37 +0000 UTC, pod became ready at 2020-01-09 13:00:55 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 13:00:57.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-dzmdx" for this suite.
Jan  9 13:01:21.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 13:01:21.974: INFO: namespace: e2e-tests-container-probe-dzmdx, resource: bindings, ignored listing per whitelist
Jan  9 13:01:21.974: INFO: namespace e2e-tests-container-probe-dzmdx deletion completed in 24.206089658s

• [SLOW TEST:56.694 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 13:01:21.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
Jan  9 13:01:37.796: INFO: 5 pods remaining
Jan  9 13:01:37.796: INFO: 5 pods has nil DeletionTimestamp
Jan  9 13:01:37.796: INFO: 
STEP: Gathering metrics
W0109 13:01:42.721978       9 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  9 13:01:42.722: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 13:01:42.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-tvhhb" for this suite.
Jan  9 13:02:14.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 13:02:14.969: INFO: namespace: e2e-tests-gc-tvhhb, resource: bindings, ignored listing per whitelist
Jan  9 13:02:15.051: INFO: namespace e2e-tests-gc-tvhhb deletion completed in 32.324765424s

• [SLOW TEST:53.077 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 13:02:15.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  9 13:02:15.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-jlcjv'
Jan  9 13:02:15.521: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  9 13:02:15.521: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan  9 13:02:15.549: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-kgbhs]
Jan  9 13:02:15.549: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-kgbhs" in namespace "e2e-tests-kubectl-jlcjv" to be "running and ready"
Jan  9 13:02:15.579: INFO: Pod "e2e-test-nginx-rc-kgbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 30.394671ms
Jan  9 13:02:17.596: INFO: Pod "e2e-test-nginx-rc-kgbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047035302s
Jan  9 13:02:19.635: INFO: Pod "e2e-test-nginx-rc-kgbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086751199s
Jan  9 13:02:22.050: INFO: Pod "e2e-test-nginx-rc-kgbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.501162768s
Jan  9 13:02:24.079: INFO: Pod "e2e-test-nginx-rc-kgbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.530012712s
Jan  9 13:02:26.106: INFO: Pod "e2e-test-nginx-rc-kgbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.556941632s
Jan  9 13:02:28.202: INFO: Pod "e2e-test-nginx-rc-kgbhs": Phase="Running", Reason="", readiness=true. Elapsed: 12.65350566s
Jan  9 13:02:28.202: INFO: Pod "e2e-test-nginx-rc-kgbhs" satisfied condition "running and ready"
Jan  9 13:02:28.202: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-kgbhs]
Jan  9 13:02:28.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jlcjv'
Jan  9 13:02:28.432: INFO: stderr: ""
Jan  9 13:02:28.433: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan  9 13:02:28.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-jlcjv'
Jan  9 13:02:28.657: INFO: stderr: ""
Jan  9 13:02:28.657: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 13:02:28.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jlcjv" for this suite.
Jan  9 13:02:52.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 13:02:52.902: INFO: namespace: e2e-tests-kubectl-jlcjv, resource: bindings, ignored listing per whitelist
Jan  9 13:02:53.005: INFO: namespace e2e-tests-kubectl-jlcjv deletion completed in 24.335879929s

• [SLOW TEST:37.954 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 13:02:53.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  9 13:02:53.272: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58d2e43c-32e0-11ea-ac2d-0242ac110005" in namespace "e2e-tests-downward-api-vsxbn" to be "success or failure"
Jan  9 13:02:53.464: INFO: Pod "downwardapi-volume-58d2e43c-32e0-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 191.803816ms
Jan  9 13:02:55.964: INFO: Pod "downwardapi-volume-58d2e43c-32e0-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.691810187s
Jan  9 13:02:57.991: INFO: Pod "downwardapi-volume-58d2e43c-32e0-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.719278559s
Jan  9 13:03:00.022: INFO: Pod "downwardapi-volume-58d2e43c-32e0-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.749810003s
Jan  9 13:03:02.355: INFO: Pod "downwardapi-volume-58d2e43c-32e0-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.082929439s
Jan  9 13:03:04.369: INFO: Pod "downwardapi-volume-58d2e43c-32e0-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.097197024s
Jan  9 13:03:06.388: INFO: Pod "downwardapi-volume-58d2e43c-32e0-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.116145315s
STEP: Saw pod success
Jan  9 13:03:06.388: INFO: Pod "downwardapi-volume-58d2e43c-32e0-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 13:03:06.394: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-58d2e43c-32e0-11ea-ac2d-0242ac110005 container client-container: 
STEP: delete the pod
Jan  9 13:03:07.404: INFO: Waiting for pod downwardapi-volume-58d2e43c-32e0-11ea-ac2d-0242ac110005 to disappear
Jan  9 13:03:07.925: INFO: Pod downwardapi-volume-58d2e43c-32e0-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 13:03:07.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vsxbn" for this suite.
Jan  9 13:03:14.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 13:03:14.225: INFO: namespace: e2e-tests-downward-api-vsxbn, resource: bindings, ignored listing per whitelist
Jan  9 13:03:14.233: INFO: namespace e2e-tests-downward-api-vsxbn deletion completed in 6.298775853s

• [SLOW TEST:21.227 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 13:03:14.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan  9 13:03:17.389: INFO: Pod name wrapped-volume-race-672de1a4-32e0-11ea-ac2d-0242ac110005: Found 0 pods out of 5
Jan  9 13:03:22.436: INFO: Pod name wrapped-volume-race-672de1a4-32e0-11ea-ac2d-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-672de1a4-32e0-11ea-ac2d-0242ac110005 in namespace e2e-tests-emptydir-wrapper-wk29x, will wait for the garbage collector to delete the pods
Jan  9 13:05:16.884: INFO: Deleting ReplicationController wrapped-volume-race-672de1a4-32e0-11ea-ac2d-0242ac110005 took: 51.268201ms
Jan  9 13:05:17.584: INFO: Terminating ReplicationController wrapped-volume-race-672de1a4-32e0-11ea-ac2d-0242ac110005 pods took: 700.524339ms
STEP: Creating RC which spawns configmap-volume pods
Jan  9 13:06:13.279: INFO: Pod name wrapped-volume-race-cfe0191c-32e0-11ea-ac2d-0242ac110005: Found 0 pods out of 5
Jan  9 13:06:18.304: INFO: Pod name wrapped-volume-race-cfe0191c-32e0-11ea-ac2d-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-cfe0191c-32e0-11ea-ac2d-0242ac110005 in namespace e2e-tests-emptydir-wrapper-wk29x, will wait for the garbage collector to delete the pods
Jan  9 13:08:10.570: INFO: Deleting ReplicationController wrapped-volume-race-cfe0191c-32e0-11ea-ac2d-0242ac110005 took: 124.980126ms
Jan  9 13:08:11.270: INFO: Terminating ReplicationController wrapped-volume-race-cfe0191c-32e0-11ea-ac2d-0242ac110005 pods took: 700.736668ms
STEP: Creating RC which spawns configmap-volume pods
Jan  9 13:09:03.139: INFO: Pod name wrapped-volume-race-3531ddce-32e1-11ea-ac2d-0242ac110005: Found 0 pods out of 5
Jan  9 13:09:08.174: INFO: Pod name wrapped-volume-race-3531ddce-32e1-11ea-ac2d-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3531ddce-32e1-11ea-ac2d-0242ac110005 in namespace e2e-tests-emptydir-wrapper-wk29x, will wait for the garbage collector to delete the pods
Jan  9 13:11:10.373: INFO: Deleting ReplicationController wrapped-volume-race-3531ddce-32e1-11ea-ac2d-0242ac110005 took: 85.779484ms
Jan  9 13:11:10.673: INFO: Terminating ReplicationController wrapped-volume-race-3531ddce-32e1-11ea-ac2d-0242ac110005 pods took: 300.553875ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 13:12:07.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-wk29x" for this suite.
Jan  9 13:12:19.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 13:12:19.336: INFO: namespace: e2e-tests-emptydir-wrapper-wk29x, resource: bindings, ignored listing per whitelist
Jan  9 13:12:19.804: INFO: namespace e2e-tests-emptydir-wrapper-wk29x deletion completed in 12.736793433s

• [SLOW TEST:545.572 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 13:12:19.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  9 13:12:20.126: INFO: Creating deployment "test-recreate-deployment"
Jan  9 13:12:20.183: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan  9 13:12:20.258: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan  9 13:12:22.828: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan  9 13:12:23.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 13:12:25.875: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 13:12:27.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 13:12:29.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 13:12:31.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 13:12:34.924: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 13:12:35.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 13:12:37.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714172340, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  9 13:12:39.854: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  9 13:12:39.894: INFO: Updating deployment test-recreate-deployment
Jan  9 13:12:39.894: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  9 13:12:40.636: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-68v5v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-68v5v/deployments/test-recreate-deployment,UID:aab67f0c-32e1-11ea-a994-fa163e34d433,ResourceVersion:17707181,Generation:2,CreationTimestamp:2020-01-09 13:12:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-09 13:12:40 +0000 UTC 2020-01-09 13:12:40 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-09 13:12:40 +0000 UTC 2020-01-09 13:12:20 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan  9 13:12:40.654: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-68v5v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-68v5v/replicasets/test-recreate-deployment-589c4bfd,UID:b6b1faa5-32e1-11ea-a994-fa163e34d433,ResourceVersion:17707179,Generation:1,CreationTimestamp:2020-01-09 13:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment aab67f0c-32e1-11ea-a994-fa163e34d433 0xc001d88a5f 0xc001d88a70}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  9 13:12:40.654: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  9 13:12:40.654: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-68v5v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-68v5v/replicasets/test-recreate-deployment-5bf7f65dc,UID:aac9ca7a-32e1-11ea-a994-fa163e34d433,ResourceVersion:17707171,Generation:2,CreationTimestamp:2020-01-09 13:12:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment aab67f0c-32e1-11ea-a994-fa163e34d433 0xc001d88b30 0xc001d88b31}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  9 13:12:40.812: INFO: Pod "test-recreate-deployment-589c4bfd-5hk7p" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-5hk7p,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-68v5v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-68v5v/pods/test-recreate-deployment-589c4bfd-5hk7p,UID:b6b9f11d-32e1-11ea-a994-fa163e34d433,ResourceVersion:17707182,Generation:0,CreationTimestamp:2020-01-09 13:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd b6b1faa5-32e1-11ea-a994-fa163e34d433 0xc001d895cf 0xc001d895e0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-tkqrg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tkqrg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tkqrg true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d89640} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d89660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 13:12:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 13:12:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-09 13:12:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-09 13:12:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-09 13:12:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 13:12:40.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-68v5v" for this suite.
Jan  9 13:12:53.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 13:12:53.189: INFO: namespace: e2e-tests-deployment-68v5v, resource: bindings, ignored listing per whitelist
Jan  9 13:12:53.195: INFO: namespace e2e-tests-deployment-68v5v deletion completed in 12.346868387s

• [SLOW TEST:33.390 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 13:12:53.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 13:14:25.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-456tq" for this suite.
Jan  9 13:14:31.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 13:14:31.936: INFO: namespace: e2e-tests-container-runtime-456tq, resource: bindings, ignored listing per whitelist
Jan  9 13:14:31.938: INFO: namespace e2e-tests-container-runtime-456tq deletion completed in 6.490804764s

• [SLOW TEST:98.743 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 13:14:31.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-f9c6773c-32e1-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  9 13:14:32.883: INFO: Waiting up to 5m0s for pod "pod-configmaps-f9d32534-32e1-11ea-ac2d-0242ac110005" in namespace "e2e-tests-configmap-h2cpc" to be "success or failure"
Jan  9 13:14:32.896: INFO: Pod "pod-configmaps-f9d32534-32e1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.371371ms
Jan  9 13:14:34.908: INFO: Pod "pod-configmaps-f9d32534-32e1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024878082s
Jan  9 13:14:36.941: INFO: Pod "pod-configmaps-f9d32534-32e1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057827369s
Jan  9 13:14:38.956: INFO: Pod "pod-configmaps-f9d32534-32e1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072918356s
Jan  9 13:14:41.015: INFO: Pod "pod-configmaps-f9d32534-32e1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131642442s
Jan  9 13:14:43.351: INFO: Pod "pod-configmaps-f9d32534-32e1-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.467548337s
Jan  9 13:14:45.604: INFO: Pod "pod-configmaps-f9d32534-32e1-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.721160315s
STEP: Saw pod success
Jan  9 13:14:45.604: INFO: Pod "pod-configmaps-f9d32534-32e1-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 13:14:45.611: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f9d32534-32e1-11ea-ac2d-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  9 13:14:45.836: INFO: Waiting for pod pod-configmaps-f9d32534-32e1-11ea-ac2d-0242ac110005 to disappear
Jan  9 13:14:51.661: INFO: Pod pod-configmaps-f9d32534-32e1-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 13:14:51.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-h2cpc" for this suite.
Jan  9 13:14:58.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 13:14:58.501: INFO: namespace: e2e-tests-configmap-h2cpc, resource: bindings, ignored listing per whitelist
Jan  9 13:14:58.797: INFO: namespace e2e-tests-configmap-h2cpc deletion completed in 6.720613062s

• [SLOW TEST:26.858 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 13:14:58.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  9 13:14:59.020: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 13:15:00.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-zpxg9" for this suite.
Jan  9 13:15:06.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 13:15:06.356: INFO: namespace: e2e-tests-custom-resource-definition-zpxg9, resource: bindings, ignored listing per whitelist
Jan  9 13:15:06.461: INFO: namespace e2e-tests-custom-resource-definition-zpxg9 deletion completed in 6.205355471s

• [SLOW TEST:7.664 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 13:15:06.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-0e189efe-32e2-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan  9 13:15:06.900: INFO: Waiting up to 5m0s for pod "pod-configmaps-0e19738a-32e2-11ea-ac2d-0242ac110005" in namespace "e2e-tests-configmap-9bxhn" to be "success or failure"
Jan  9 13:15:06.908: INFO: Pod "pod-configmaps-0e19738a-32e2-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.204253ms
Jan  9 13:15:09.316: INFO: Pod "pod-configmaps-0e19738a-32e2-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.416047121s
Jan  9 13:15:11.325: INFO: Pod "pod-configmaps-0e19738a-32e2-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.42538997s
Jan  9 13:15:14.048: INFO: Pod "pod-configmaps-0e19738a-32e2-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.148003296s
Jan  9 13:15:16.069: INFO: Pod "pod-configmaps-0e19738a-32e2-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.16876779s
Jan  9 13:15:19.430: INFO: Pod "pod-configmaps-0e19738a-32e2-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.530355365s
Jan  9 13:15:21.526: INFO: Pod "pod-configmaps-0e19738a-32e2-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.625583794s
Jan  9 13:15:26.909: INFO: Pod "pod-configmaps-0e19738a-32e2-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.00936568s
STEP: Saw pod success
Jan  9 13:15:26.910: INFO: Pod "pod-configmaps-0e19738a-32e2-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 13:15:27.306: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-0e19738a-32e2-11ea-ac2d-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan  9 13:15:27.957: INFO: Waiting for pod pod-configmaps-0e19738a-32e2-11ea-ac2d-0242ac110005 to disappear
Jan  9 13:15:27.965: INFO: Pod pod-configmaps-0e19738a-32e2-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 13:15:27.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9bxhn" for this suite.
Jan  9 13:15:34.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 13:15:34.130: INFO: namespace: e2e-tests-configmap-9bxhn, resource: bindings, ignored listing per whitelist
Jan  9 13:15:34.197: INFO: namespace e2e-tests-configmap-9bxhn deletion completed in 6.225734775s

• [SLOW TEST:27.735 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 13:15:34.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-1e899a3f-32e2-11ea-ac2d-0242ac110005
STEP: Creating a pod to test consume secrets
Jan  9 13:15:34.707: INFO: Waiting up to 5m0s for pod "pod-secrets-1eac152f-32e2-11ea-ac2d-0242ac110005" in namespace "e2e-tests-secrets-xhqlx" to be "success or failure"
Jan  9 13:15:34.912: INFO: Pod "pod-secrets-1eac152f-32e2-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 204.83247ms
Jan  9 13:15:38.895: INFO: Pod "pod-secrets-1eac152f-32e2-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1872028s
Jan  9 13:15:40.914: INFO: Pod "pod-secrets-1eac152f-32e2-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206433953s
Jan  9 13:15:42.934: INFO: Pod "pod-secrets-1eac152f-32e2-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.226174417s
Jan  9 13:15:44.999: INFO: Pod "pod-secrets-1eac152f-32e2-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.291522041s
Jan  9 13:15:47.012: INFO: Pod "pod-secrets-1eac152f-32e2-11ea-ac2d-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.304201229s
Jan  9 13:15:49.038: INFO: Pod "pod-secrets-1eac152f-32e2-11ea-ac2d-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.329983156s
STEP: Saw pod success
Jan  9 13:15:49.038: INFO: Pod "pod-secrets-1eac152f-32e2-11ea-ac2d-0242ac110005" satisfied condition "success or failure"
Jan  9 13:15:49.056: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-1eac152f-32e2-11ea-ac2d-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan  9 13:15:49.459: INFO: Waiting for pod pod-secrets-1eac152f-32e2-11ea-ac2d-0242ac110005 to disappear
Jan  9 13:15:49.471: INFO: Pod pod-secrets-1eac152f-32e2-11ea-ac2d-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 13:15:49.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xhqlx" for this suite.
Jan  9 13:15:57.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 13:15:57.950: INFO: namespace: e2e-tests-secrets-xhqlx, resource: bindings, ignored listing per whitelist
Jan  9 13:15:57.963: INFO: namespace e2e-tests-secrets-xhqlx deletion completed in 8.475540172s

• [SLOW TEST:23.766 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  9 13:15:57.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-hld8
STEP: Creating a pod to test atomic-volume-subpath
Jan  9 13:15:58.256: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hld8" in namespace "e2e-tests-subpath-x87vr" to be "success or failure"
Jan  9 13:15:58.265: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.943843ms
Jan  9 13:16:01.148: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.892035031s
Jan  9 13:16:03.160: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.90417462s
Jan  9 13:16:05.194: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.938725619s
Jan  9 13:16:07.414: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.158552258s
Jan  9 13:16:09.425: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.169864467s
Jan  9 13:16:11.663: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.407164301s
Jan  9 13:16:13.673: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.417047795s
Jan  9 13:16:15.692: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.436256929s
Jan  9 13:16:17.703: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.447162963s
Jan  9 13:16:19.730: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.474772063s
Jan  9 13:16:21.752: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Running", Reason="", readiness=false. Elapsed: 23.496031586s
Jan  9 13:16:23.764: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Running", Reason="", readiness=false. Elapsed: 25.507984364s
Jan  9 13:16:25.781: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Running", Reason="", readiness=false. Elapsed: 27.525475563s
Jan  9 13:16:27.801: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Running", Reason="", readiness=false. Elapsed: 29.545126774s
Jan  9 13:16:29.810: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Running", Reason="", readiness=false. Elapsed: 31.554829697s
Jan  9 13:16:32.228: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Running", Reason="", readiness=false. Elapsed: 33.972541043s
Jan  9 13:16:34.241: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Running", Reason="", readiness=false. Elapsed: 35.98515396s
Jan  9 13:16:36.251: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Running", Reason="", readiness=false. Elapsed: 37.995607623s
Jan  9 13:16:38.455: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Running", Reason="", readiness=false. Elapsed: 40.199407236s
Jan  9 13:16:40.488: INFO: Pod "pod-subpath-test-configmap-hld8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 42.232351158s
STEP: Saw pod success
Jan  9 13:16:40.488: INFO: Pod "pod-subpath-test-configmap-hld8" satisfied condition "success or failure"
Jan  9 13:16:40.504: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-hld8 container test-container-subpath-configmap-hld8: 
STEP: delete the pod
Jan  9 13:16:41.167: INFO: Waiting for pod pod-subpath-test-configmap-hld8 to disappear
Jan  9 13:16:41.182: INFO: Pod pod-subpath-test-configmap-hld8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-hld8
Jan  9 13:16:41.182: INFO: Deleting pod "pod-subpath-test-configmap-hld8" in namespace "e2e-tests-subpath-x87vr"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  9 13:16:41.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-x87vr" for this suite.
Jan  9 13:16:49.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  9 13:16:49.547: INFO: namespace: e2e-tests-subpath-x87vr, resource: bindings, ignored listing per whitelist
Jan  9 13:16:49.576: INFO: namespace e2e-tests-subpath-x87vr deletion completed in 8.360791371s

• [SLOW TEST:51.612 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
Jan  9 13:16:49.576: INFO: Running AfterSuite actions on all nodes
Jan  9 13:16:49.576: INFO: Running AfterSuite actions on node 1
Jan  9 13:16:49.576: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8984.712 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS