I0105 10:47:06.630694 8 e2e.go:224] Starting e2e run "b6af4420-2fa8-11ea-910c-0242ac110004" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1578221225 - Will randomize all specs Will run 201 of 2164 specs Jan 5 10:47:07.470: INFO: >>> kubeConfig: /root/.kube/config Jan 5 10:47:07.476: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jan 5 10:47:07.511: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jan 5 10:47:07.615: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jan 5 10:47:07.615: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Jan 5 10:47:07.615: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jan 5 10:47:07.649: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jan 5 10:47:07.649: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed) Jan 5 10:47:07.649: INFO: e2e test version: v1.13.12 Jan 5 10:47:07.652: INFO: kube-apiserver version: v1.13.8 SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:47:07.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir Jan 5 10:47:07.899: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Jan 5 10:47:07.916: INFO: Waiting up to 5m0s for pod "pod-b82a7357-2fa8-11ea-910c-0242ac110004" in namespace "e2e-tests-emptydir-jq75p" to be "success or failure" Jan 5 10:47:07.929: INFO: Pod "pod-b82a7357-2fa8-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.571729ms Jan 5 10:47:09.946: INFO: Pod "pod-b82a7357-2fa8-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029643504s Jan 5 10:47:11.964: INFO: Pod "pod-b82a7357-2fa8-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04746016s Jan 5 10:47:14.027: INFO: Pod "pod-b82a7357-2fa8-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110640904s Jan 5 10:47:16.048: INFO: Pod "pod-b82a7357-2fa8-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131485725s Jan 5 10:47:18.080: INFO: Pod "pod-b82a7357-2fa8-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.163824274s Jan 5 10:47:20.103: INFO: Pod "pod-b82a7357-2fa8-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.186528244s STEP: Saw pod success Jan 5 10:47:20.103: INFO: Pod "pod-b82a7357-2fa8-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 10:47:20.109: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b82a7357-2fa8-11ea-910c-0242ac110004 container test-container: STEP: delete the pod Jan 5 10:47:20.325: INFO: Waiting for pod pod-b82a7357-2fa8-11ea-910c-0242ac110004 to disappear Jan 5 10:47:20.338: INFO: Pod pod-b82a7357-2fa8-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:47:20.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jq75p" for this suite. Jan 5 10:47:26.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:47:26.720: INFO: namespace: e2e-tests-emptydir-jq75p, resource: bindings, ignored listing per whitelist Jan 5 10:47:26.737: INFO: namespace e2e-tests-emptydir-jq75p deletion completed in 6.389253253s • [SLOW TEST:19.084 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:47:26.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 5 10:47:26.995: INFO: Creating ReplicaSet my-hostname-basic-c38bea58-2fa8-11ea-910c-0242ac110004 Jan 5 10:47:27.136: INFO: Pod name my-hostname-basic-c38bea58-2fa8-11ea-910c-0242ac110004: Found 1 pods out of 1 Jan 5 10:47:27.136: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c38bea58-2fa8-11ea-910c-0242ac110004" is running Jan 5 10:47:37.183: INFO: Pod "my-hostname-basic-c38bea58-2fa8-11ea-910c-0242ac110004-2vp6c" is running (conditions: []) Jan 5 10:47:37.184: INFO: Trying to dial the pod Jan 5 10:47:42.255: INFO: Controller my-hostname-basic-c38bea58-2fa8-11ea-910c-0242ac110004: Got expected result from replica 1 [my-hostname-basic-c38bea58-2fa8-11ea-910c-0242ac110004-2vp6c]: "my-hostname-basic-c38bea58-2fa8-11ea-910c-0242ac110004-2vp6c", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:47:42.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-n7l44" for this suite. 
Jan 5 10:47:50.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:47:50.470: INFO: namespace: e2e-tests-replicaset-n7l44, resource: bindings, ignored listing per whitelist Jan 5 10:47:50.583: INFO: namespace e2e-tests-replicaset-n7l44 deletion completed in 8.320036332s • [SLOW TEST:23.847 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:47:50.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Jan 5 10:48:02.827: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-d2e68265-2fa8-11ea-910c-0242ac110004", GenerateName:"", Namespace:"e2e-tests-pods-blpgb", SelfLink:"/api/v1/namespaces/e2e-tests-pods-blpgb/pods/pod-submit-remove-d2e68265-2fa8-11ea-910c-0242ac110004", UID:"d2e8ca30-2fa8-11ea-a994-fa163e34d433", ResourceVersion:"17239555", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713818072, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"755479239"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-lkdrh", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000e7d5c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lkdrh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000f94de8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00117b980), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000f94e20)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000f94e40)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000f94e48), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000f94e4c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818073, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818082, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818082, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818072, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", 
NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0012c3360), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0012c3380), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://88a616f392eff511376d7111fc687d889aae42e7a2ec185913b9b49fd4027c2a"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:48:12.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-blpgb" for this suite. Jan 5 10:48:20.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:48:20.869: INFO: namespace: e2e-tests-pods-blpgb, resource: bindings, ignored listing per whitelist Jan 5 10:48:20.963: INFO: namespace e2e-tests-pods-blpgb deletion completed in 8.296403995s • [SLOW TEST:30.379 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:48:20.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-fnm8t I0105 10:48:21.126310 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-fnm8t, replica count: 1 I0105 10:48:22.177502 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0105 10:48:23.177953 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0105 10:48:24.178427 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0105 10:48:25.179125 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady I0105 10:48:26.180467 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0105 10:48:27.181226 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0105 10:48:28.181854 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0105 10:48:29.182634 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0105 10:48:30.183166 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 5 10:48:30.338: INFO: Created: latency-svc-gq8th Jan 5 10:48:30.372: INFO: Got endpoints: latency-svc-gq8th [88.948059ms] Jan 5 10:48:30.535: INFO: Created: latency-svc-jrngv Jan 5 10:48:30.540: INFO: Got endpoints: latency-svc-jrngv [167.179683ms] Jan 5 10:48:30.682: INFO: Created: latency-svc-b4cst Jan 5 10:48:30.741: INFO: Got endpoints: latency-svc-b4cst [367.79113ms] Jan 5 10:48:30.777: INFO: Created: latency-svc-5xpwv Jan 5 10:48:31.046: INFO: Got endpoints: latency-svc-5xpwv [672.413685ms] Jan 5 10:48:31.076: INFO: Created: latency-svc-c2wcr Jan 5 10:48:31.089: INFO: Got endpoints: latency-svc-c2wcr [714.490582ms] Jan 5 10:48:31.146: INFO: Created: latency-svc-z5m2b Jan 5 10:48:31.296: INFO: Got endpoints: latency-svc-z5m2b [921.30825ms] Jan 5 10:48:31.373: INFO: Created: latency-svc-2w79d Jan 5 10:48:31.504: INFO: Got endpoints: latency-svc-2w79d [1.129109157s] Jan 5 10:48:31.522: INFO: Created: latency-svc-jvh8k Jan 5 10:48:31.534: INFO: Got endpoints: latency-svc-jvh8k [1.160263354s] Jan 5 10:48:31.588: INFO: Created: latency-svc-vx59h Jan 5 10:48:31.752: INFO: Got endpoints: latency-svc-vx59h [1.37751142s] Jan 5 10:48:31.768: INFO: Created: latency-svc-dxdlt Jan 5 10:48:31.800: INFO: Got endpoints: latency-svc-dxdlt [1.425180937s] Jan 5 10:48:31.949: INFO: Created: latency-svc-wg2lp Jan 5 10:48:31.969: INFO: Got endpoints: latency-svc-wg2lp [1.595182204s] Jan 5 10:48:32.021: INFO: Created: latency-svc-hrjbw Jan 5 10:48:32.222: INFO: Got endpoints: latency-svc-hrjbw [1.848227349s] Jan 5 10:48:32.274: INFO: Created: latency-svc-92fsr Jan 5 10:48:32.444: INFO: Created: latency-svc-9kxlr Jan 5 10:48:32.447: INFO: Got endpoints: latency-svc-92fsr [2.07258046s] Jan 5 10:48:32.473: INFO: Got endpoints: latency-svc-9kxlr [2.098652212s] Jan 5 10:48:32.668: INFO: Created: latency-svc-bhl9r Jan 5 10:48:32.709: INFO: Got endpoints: latency-svc-bhl9r [2.336495768s] Jan 5 10:48:32.724: INFO: Created: latency-svc-87bvx Jan 5 10:48:32.728: INFO: Got endpoints: latency-svc-87bvx [2.353619897s] Jan 5 10:48:32.897: INFO: Created: latency-svc-xlq7l Jan 5 10:48:32.911: INFO: Got endpoints: latency-svc-xlq7l [2.37069854s] Jan 5 10:48:33.111: INFO: Created: latency-svc-thvgf Jan 5 10:48:33.120: INFO: Got endpoints: latency-svc-thvgf [2.378528276s] Jan 5 10:48:33.173: INFO: Created: latency-svc-tg2rj Jan 5 10:48:33.184: INFO: Got endpoints: latency-svc-tg2rj [2.137155026s] Jan 5 10:48:33.400: INFO: Created: latency-svc-j7wnx Jan 5 10:48:33.420: INFO: Got endpoints: latency-svc-j7wnx [2.331672099s] Jan 5 10:48:33.645: INFO: Created: latency-svc-hzf4m Jan 5 10:48:33.678: INFO: Got endpoints: latency-svc-hzf4m 
[2.381701454s] Jan 5 10:48:33.942: INFO: Created: latency-svc-rrdp5 Jan 5 10:48:34.114: INFO: Got endpoints: latency-svc-rrdp5 [2.609241372s] Jan 5 10:48:34.162: INFO: Created: latency-svc-8kknh Jan 5 10:48:34.168: INFO: Got endpoints: latency-svc-8kknh [2.633852355s] Jan 5 10:48:34.553: INFO: Created: latency-svc-m8sx4 Jan 5 10:48:34.584: INFO: Got endpoints: latency-svc-m8sx4 [2.831878253s] Jan 5 10:48:34.870: INFO: Created: latency-svc-b96c2 Jan 5 10:48:34.882: INFO: Got endpoints: latency-svc-b96c2 [3.082549856s] Jan 5 10:48:35.278: INFO: Created: latency-svc-nh8t2 Jan 5 10:48:35.359: INFO: Got endpoints: latency-svc-nh8t2 [3.389838138s] Jan 5 10:48:35.377: INFO: Created: latency-svc-s4g5g Jan 5 10:48:35.391: INFO: Got endpoints: latency-svc-s4g5g [3.168000989s] Jan 5 10:48:35.452: INFO: Created: latency-svc-phnj8 Jan 5 10:48:35.587: INFO: Created: latency-svc-mbfnl Jan 5 10:48:35.593: INFO: Got endpoints: latency-svc-phnj8 [3.14582004s] Jan 5 10:48:35.607: INFO: Got endpoints: latency-svc-mbfnl [3.132976958s] Jan 5 10:48:35.645: INFO: Created: latency-svc-w4lt5 Jan 5 10:48:35.838: INFO: Got endpoints: latency-svc-w4lt5 [3.128196896s] Jan 5 10:48:35.895: INFO: Created: latency-svc-8qwj4 Jan 5 10:48:35.901: INFO: Got endpoints: latency-svc-8qwj4 [3.1730424s] Jan 5 10:48:36.087: INFO: Created: latency-svc-krcd5 Jan 5 10:48:36.126: INFO: Got endpoints: latency-svc-krcd5 [3.215029529s] Jan 5 10:48:36.400: INFO: Created: latency-svc-cmvhc Jan 5 10:48:36.411: INFO: Got endpoints: latency-svc-cmvhc [3.291259021s] Jan 5 10:48:36.455: INFO: Created: latency-svc-dcsdz Jan 5 10:48:36.670: INFO: Got endpoints: latency-svc-dcsdz [3.486605851s] Jan 5 10:48:36.758: INFO: Created: latency-svc-qk6vl Jan 5 10:48:36.776: INFO: Got endpoints: latency-svc-qk6vl [3.35565666s] Jan 5 10:48:36.900: INFO: Created: latency-svc-rvzqk Jan 5 10:48:36.918: INFO: Got endpoints: latency-svc-rvzqk [3.239985382s] Jan 5 10:48:37.157: INFO: Created: latency-svc-plkzh Jan 5 10:48:37.175: INFO: Got endpoints: latency-svc-plkzh [3.061474502s] Jan 5 10:48:37.253: INFO: Created: latency-svc-q4ml2 Jan 5 10:48:37.398: INFO: Got endpoints: latency-svc-q4ml2 [3.229655969s] Jan 5 10:48:37.426: INFO: Created: latency-svc-xbrf4 Jan 5 10:48:37.458: INFO: Got endpoints: latency-svc-xbrf4 [2.873410703s] Jan 5 10:48:37.478: INFO: Created: latency-svc-h7mbv Jan 5 10:48:37.601: INFO: Got endpoints: latency-svc-h7mbv [2.717906521s] Jan 5 10:48:37.672: INFO: Created: latency-svc-x64fg Jan 5 10:48:37.681: INFO: Got endpoints: latency-svc-x64fg [2.322267223s] Jan 5 10:48:37.820: INFO: Created: latency-svc-98xbs Jan 5 10:48:37.830: INFO: Got endpoints: latency-svc-98xbs [2.438973199s] Jan 5 10:48:37.874: INFO: Created: latency-svc-b2pmk Jan 5 10:48:37.903: INFO: Got endpoints: latency-svc-b2pmk [2.310202321s] Jan 5 10:48:38.057: INFO: Created: latency-svc-tz5d7 Jan 5 10:48:38.073: INFO: Got endpoints: latency-svc-tz5d7 [2.466567361s] Jan 5 10:48:38.135: INFO: Created: latency-svc-h9xsk Jan 5 10:48:38.229: INFO: Got endpoints: latency-svc-h9xsk [2.390737215s] Jan 5 10:48:38.247: INFO: Created: latency-svc-l4dtw Jan 5 10:48:38.252: INFO: Got endpoints: latency-svc-l4dtw [2.350759347s] Jan 5 10:48:38.338: INFO: Created: latency-svc-dzmst Jan 5 10:48:38.459: INFO: Got endpoints: latency-svc-dzmst [2.33267609s] Jan 5 10:48:38.482: INFO: Created: latency-svc-vgssw Jan 5 10:48:38.500: INFO: Got endpoints: latency-svc-vgssw [2.088768007s] Jan 5 10:48:38.683: INFO: Created: latency-svc-ct7nf Jan 5 10:48:38.721: INFO: Got endpoints: latency-svc-ct7nf 
[2.050471415s] Jan 5 10:48:38.766: INFO: Created: latency-svc-5dzbs Jan 5 10:48:39.013: INFO: Got endpoints: latency-svc-5dzbs [2.236658941s] Jan 5 10:48:39.045: INFO: Created: latency-svc-5czvl Jan 5 10:48:39.058: INFO: Got endpoints: latency-svc-5czvl [2.139796192s] Jan 5 10:48:39.194: INFO: Created: latency-svc-5q7fs Jan 5 10:48:39.261: INFO: Got endpoints: latency-svc-5q7fs [2.085287907s] Jan 5 10:48:39.382: INFO: Created: latency-svc-qjgh8 Jan 5 10:48:39.403: INFO: Got endpoints: latency-svc-qjgh8 [2.004833537s] Jan 5 10:48:39.461: INFO: Created: latency-svc-hc6xq Jan 5 10:48:39.462: INFO: Got endpoints: latency-svc-hc6xq [2.0040381s] Jan 5 10:48:39.656: INFO: Created: latency-svc-bp7xh Jan 5 10:48:39.868: INFO: Got endpoints: latency-svc-bp7xh [2.267085134s] Jan 5 10:48:39.880: INFO: Created: latency-svc-wpwlr Jan 5 10:48:39.903: INFO: Got endpoints: latency-svc-wpwlr [2.221552696s] Jan 5 10:48:40.063: INFO: Created: latency-svc-g7tvr Jan 5 10:48:40.065: INFO: Got endpoints: latency-svc-g7tvr [2.234400178s] Jan 5 10:48:40.170: INFO: Created: latency-svc-r5sdh Jan 5 10:48:40.266: INFO: Got endpoints: latency-svc-r5sdh [2.36255158s] Jan 5 10:48:40.321: INFO: Created: latency-svc-p568s Jan 5 10:48:40.331: INFO: Got endpoints: latency-svc-p568s [2.258159771s] Jan 5 10:48:40.459: INFO: Created: latency-svc-7q669 Jan 5 10:48:40.524: INFO: Got endpoints: latency-svc-7q669 [2.295332045s] Jan 5 10:48:40.697: INFO: Created: latency-svc-94mtv Jan 5 10:48:40.715: INFO: Got endpoints: latency-svc-94mtv [2.462827726s] Jan 5 10:48:40.867: INFO: Created: latency-svc-v9wkd Jan 5 10:48:41.094: INFO: Got endpoints: latency-svc-v9wkd [2.635482976s] Jan 5 10:48:41.110: INFO: Created: latency-svc-7djqh Jan 5 10:48:41.127: INFO: Got endpoints: latency-svc-7djqh [2.626593117s] Jan 5 10:48:41.189: INFO: Created: latency-svc-f5nr7 Jan 5 10:48:41.291: INFO: Got endpoints: latency-svc-f5nr7 [2.569513284s] Jan 5 10:48:41.360: INFO: Created: latency-svc-4jsjt Jan 5 10:48:41.533: INFO: Got endpoints: latency-svc-4jsjt [2.519642482s] Jan 5 10:48:41.566: INFO: Created: latency-svc-5psms Jan 5 10:48:41.571: INFO: Got endpoints: latency-svc-5psms [2.513116051s] Jan 5 10:48:41.798: INFO: Created: latency-svc-ht4dc Jan 5 10:48:41.832: INFO: Got endpoints: latency-svc-ht4dc [2.570866783s] Jan 5 10:48:42.047: INFO: Created: latency-svc-blpdh Jan 5 10:48:42.073: INFO: Got endpoints: latency-svc-blpdh [2.67003331s] Jan 5 10:48:42.597: INFO: Created: latency-svc-g8tq2 Jan 5 10:48:42.621: INFO: Got endpoints: latency-svc-g8tq2 [3.158967813s] Jan 5 10:48:42.818: INFO: Created: latency-svc-k9fc2 Jan 5 10:48:42.827: INFO: Got endpoints: latency-svc-k9fc2 [2.958838223s] Jan 5 10:48:43.035: INFO: Created: latency-svc-458mr Jan 5 10:48:43.045: INFO: Got endpoints: latency-svc-458mr [3.142222537s] Jan 5 10:48:43.118: INFO: Created: latency-svc-8xt55 Jan 5 10:48:43.219: INFO: Got endpoints: latency-svc-8xt55 [3.153708194s] Jan 5 10:48:43.243: INFO: Created: latency-svc-btltm Jan 5 10:48:43.253: INFO: Got endpoints: latency-svc-btltm [2.986704283s] Jan 5 10:48:43.394: INFO: Created: latency-svc-k7lh7 Jan 5 10:48:43.409: INFO: Got endpoints: latency-svc-k7lh7 [3.077224759s] Jan 5 10:48:43.457: INFO: Created: latency-svc-2n94n Jan 5 10:48:43.481: INFO: Got endpoints: latency-svc-2n94n [2.956166172s] Jan 5 10:48:43.671: INFO: Created: latency-svc-cmzrd Jan 5 10:48:43.715: INFO: Got endpoints: latency-svc-cmzrd [3.000589407s] Jan 5 10:48:43.871: INFO: Created: latency-svc-lz6zs Jan 5 10:48:43.877: INFO: Got endpoints: latency-svc-lz6zs 
[2.782620165s] Jan 5 10:48:44.076: INFO: Created: latency-svc-dpt8z Jan 5 10:48:44.124: INFO: Got endpoints: latency-svc-dpt8z [2.997198854s] Jan 5 10:48:44.279: INFO: Created: latency-svc-v2q6w Jan 5 10:48:44.289: INFO: Got endpoints: latency-svc-v2q6w [2.997842153s] Jan 5 10:48:44.437: INFO: Created: latency-svc-dsvr9 Jan 5 10:48:44.456: INFO: Got endpoints: latency-svc-dsvr9 [2.922823308s] Jan 5 10:48:44.512: INFO: Created: latency-svc-z7x57 Jan 5 10:48:44.522: INFO: Got endpoints: latency-svc-z7x57 [2.950194486s] Jan 5 10:48:44.657: INFO: Created: latency-svc-gvznn Jan 5 10:48:44.690: INFO: Got endpoints: latency-svc-gvznn [233.674684ms] Jan 5 10:48:44.861: INFO: Created: latency-svc-2fltk Jan 5 10:48:44.889: INFO: Got endpoints: latency-svc-2fltk [3.056998817s] Jan 5 10:48:45.094: INFO: Created: latency-svc-nxbz7 Jan 5 10:48:45.111: INFO: Got endpoints: latency-svc-nxbz7 [3.038109226s] Jan 5 10:48:45.293: INFO: Created: latency-svc-qj8cf Jan 5 10:48:45.345: INFO: Created: latency-svc-t5rb7 Jan 5 10:48:45.356: INFO: Got endpoints: latency-svc-qj8cf [2.734334203s] Jan 5 10:48:45.370: INFO: Got endpoints: latency-svc-t5rb7 [2.542434316s] Jan 5 10:48:45.503: INFO: Created: latency-svc-dh9pm Jan 5 10:48:45.509: INFO: Got endpoints: latency-svc-dh9pm [2.463212204s] Jan 5 10:48:45.559: INFO: Created: latency-svc-7t5n4 Jan 5 10:48:45.671: INFO: Got endpoints: latency-svc-7t5n4 [2.452172706s] Jan 5 10:48:45.693: INFO: Created: latency-svc-n4w9q Jan 5 10:48:45.727: INFO: Got endpoints: latency-svc-n4w9q [2.473763549s] Jan 5 10:48:45.757: INFO: Created: latency-svc-mxgl7 Jan 5 10:48:45.891: INFO: Got endpoints: latency-svc-mxgl7 [2.481675544s] Jan 5 10:48:45.933: INFO: Created: latency-svc-mkddm Jan 5 10:48:45.965: INFO: Got endpoints: latency-svc-mkddm [2.483573085s] Jan 5 10:48:46.240: INFO: Created: latency-svc-jtbrz Jan 5 10:48:46.267: INFO: Got endpoints: latency-svc-jtbrz [2.551023467s] Jan 5 10:48:46.396: INFO: Created: latency-svc-m8wfz Jan 5 10:48:46.427: INFO: Got endpoints: latency-svc-m8wfz [2.549793127s] Jan 5 10:48:46.515: INFO: Created: latency-svc-pvcl4 Jan 5 10:48:46.579: INFO: Got endpoints: latency-svc-pvcl4 [2.454727198s] Jan 5 10:48:46.611: INFO: Created: latency-svc-gcrp9 Jan 5 10:48:46.644: INFO: Got endpoints: latency-svc-gcrp9 [2.354255132s] Jan 5 10:48:46.688: INFO: Created: latency-svc-lcz8s Jan 5 10:48:46.773: INFO: Got endpoints: latency-svc-lcz8s [2.251082729s] Jan 5 10:48:46.881: INFO: Created: latency-svc-t2pmk Jan 5 10:48:47.029: INFO: Got endpoints: latency-svc-t2pmk [2.338866014s] Jan 5 10:48:47.082: INFO: Created: latency-svc-flsfc Jan 5 10:48:47.887: INFO: Got endpoints: latency-svc-flsfc [2.9977409s] Jan 5 10:48:47.902: INFO: Created: latency-svc-j4xjn Jan 5 10:48:47.983: INFO: Got endpoints: latency-svc-j4xjn [2.870961356s] Jan 5 10:48:48.098: INFO: Created: latency-svc-tzfdc Jan 5 10:48:48.119: INFO: Got endpoints: latency-svc-tzfdc [2.762981659s] Jan 5 10:48:48.173: INFO: Created: latency-svc-fvh4k Jan 5 10:48:48.285: INFO: Created: latency-svc-hnpwn Jan 5 10:48:48.293: INFO: Got endpoints: latency-svc-fvh4k [2.923123582s] Jan 5 10:48:48.300: INFO: Got endpoints: latency-svc-hnpwn [2.79137905s] Jan 5 10:48:48.358: INFO: Created: latency-svc-2krc8 Jan 5 10:48:48.455: INFO: Got endpoints: latency-svc-2krc8 [2.783707134s] Jan 5 10:48:48.498: INFO: Created: latency-svc-gwhwb Jan 5 10:48:48.544: INFO: Got endpoints: latency-svc-gwhwb [2.816873665s] Jan 5 10:48:48.552: INFO: Created: latency-svc-bgkrm Jan 5 10:48:48.662: INFO: Got endpoints: latency-svc-bgkrm 
[2.771485568s] Jan 5 10:48:48.694: INFO: Created: latency-svc-2zsz4 Jan 5 10:48:48.704: INFO: Got endpoints: latency-svc-2zsz4 [2.738791314s] Jan 5 10:48:48.977: INFO: Created: latency-svc-cw6p9 Jan 5 10:48:49.007: INFO: Got endpoints: latency-svc-cw6p9 [2.740259031s] Jan 5 10:48:49.193: INFO: Created: latency-svc-k689p Jan 5 10:48:49.202: INFO: Got endpoints: latency-svc-k689p [2.77393449s] Jan 5 10:48:49.347: INFO: Created: latency-svc-thxlm Jan 5 10:48:49.378: INFO: Got endpoints: latency-svc-thxlm [2.798689152s] Jan 5 10:48:49.431: INFO: Created: latency-svc-5tg28 Jan 5 10:48:49.588: INFO: Got endpoints: latency-svc-5tg28 [2.943906496s] Jan 5 10:48:49.602: INFO: Created: latency-svc-rgppz Jan 5 10:48:49.618: INFO: Got endpoints: latency-svc-rgppz [2.845244719s] Jan 5 10:48:49.687: INFO: Created: latency-svc-p654q Jan 5 10:48:49.784: INFO: Got endpoints: latency-svc-p654q [2.754719331s] Jan 5 10:48:49.834: INFO: Created: latency-svc-br25m Jan 5 10:48:50.060: INFO: Got endpoints: latency-svc-br25m [2.173010695s] Jan 5 10:48:50.093: INFO: Created: latency-svc-4jhr6 Jan 5 10:48:50.221: INFO: Got endpoints: latency-svc-4jhr6 [2.237923609s] Jan 5 10:48:50.249: INFO: Created: latency-svc-fwjtb Jan 5 10:48:50.268: INFO: Got endpoints: latency-svc-fwjtb [2.149145469s] Jan 5 10:48:50.408: INFO: Created: latency-svc-64l2c Jan 5 10:48:50.426: INFO: Got endpoints: latency-svc-64l2c [2.133036173s] Jan 5 10:48:50.494: INFO: Created: latency-svc-dz4z6 Jan 5 10:48:50.579: INFO: Got endpoints: latency-svc-dz4z6 [2.278863633s] Jan 5 10:48:50.639: INFO: Created: latency-svc-pvgnt Jan 5 10:48:50.787: INFO: Created: latency-svc-2bd47 Jan 5 10:48:50.787: INFO: Got endpoints: latency-svc-pvgnt [2.330961761s] Jan 5 10:48:50.790: INFO: Got endpoints: latency-svc-2bd47 [2.245582458s] Jan 5 10:48:50.862: INFO: Created: latency-svc-4bk22 Jan 5 10:48:51.083: INFO: Got endpoints: latency-svc-4bk22 [2.420247484s] Jan 5 10:48:51.120: INFO: Created: latency-svc-dqwhz Jan 5 10:48:51.127: INFO: Got endpoints: latency-svc-dqwhz [2.423338602s] Jan 5 10:48:51.324: INFO: Created: latency-svc-9lp2w Jan 5 10:48:51.364: INFO: Got endpoints: latency-svc-9lp2w [2.356722402s] Jan 5 10:48:51.582: INFO: Created: latency-svc-jxb5z Jan 5 10:48:51.638: INFO: Got endpoints: latency-svc-jxb5z [2.435796058s] Jan 5 10:48:51.831: INFO: Created: latency-svc-xsqc4 Jan 5 10:48:51.831: INFO: Got endpoints: latency-svc-xsqc4 [2.452490644s] Jan 5 10:48:51.987: INFO: Created: latency-svc-nz5vx Jan 5 10:48:52.021: INFO: Got endpoints: latency-svc-nz5vx [2.432779825s] Jan 5 10:48:52.057: INFO: Created: latency-svc-qw56t Jan 5 10:48:52.221: INFO: Got endpoints: latency-svc-qw56t [2.602317671s] Jan 5 10:48:52.271: INFO: Created: latency-svc-gzn57 Jan 5 10:48:52.272: INFO: Got endpoints: latency-svc-gzn57 [2.487116034s] Jan 5 10:48:52.428: INFO: Created: latency-svc-mqh9c Jan 5 10:48:52.461: INFO: Got endpoints: latency-svc-mqh9c [2.399924335s] Jan 5 10:48:52.516: INFO: Created: latency-svc-6rmsm Jan 5 10:48:52.627: INFO: Got endpoints: latency-svc-6rmsm [2.406094881s] Jan 5 10:48:52.760: INFO: Created: latency-svc-5r4hf Jan 5 10:48:52.851: INFO: Got endpoints: latency-svc-5r4hf [2.582390306s] Jan 5 10:48:52.884: INFO: Created: latency-svc-rxq8k Jan 5 10:48:52.912: INFO: Got endpoints: latency-svc-rxq8k [2.485670106s] Jan 5 10:48:53.085: INFO: Created: latency-svc-grfnh Jan 5 10:48:53.102: INFO: Got endpoints: latency-svc-grfnh [2.522804013s] Jan 5 10:48:53.145: INFO: Created: latency-svc-5pgch Jan 5 10:48:53.393: INFO: Got endpoints: latency-svc-5pgch 
[2.606635911s] Jan 5 10:48:53.423: INFO: Created: latency-svc-bwfnv Jan 5 10:48:53.441: INFO: Got endpoints: latency-svc-bwfnv [2.651323339s] Jan 5 10:48:53.689: INFO: Created: latency-svc-vttbn Jan 5 10:48:53.704: INFO: Got endpoints: latency-svc-vttbn [2.620917115s] Jan 5 10:48:53.818: INFO: Created: latency-svc-zsgxr Jan 5 10:48:54.164: INFO: Got endpoints: latency-svc-zsgxr [3.036436948s] Jan 5 10:48:54.179: INFO: Created: latency-svc-q4q89 Jan 5 10:48:54.185: INFO: Got endpoints: latency-svc-q4q89 [2.820040689s] Jan 5 10:48:54.414: INFO: Created: latency-svc-b5c8b Jan 5 10:48:54.464: INFO: Got endpoints: latency-svc-b5c8b [2.826694363s] Jan 5 10:48:54.605: INFO: Created: latency-svc-ztxlf Jan 5 10:48:54.618: INFO: Got endpoints: latency-svc-ztxlf [2.787601708s] Jan 5 10:48:54.771: INFO: Created: latency-svc-528sb Jan 5 10:48:54.792: INFO: Got endpoints: latency-svc-528sb [2.770150238s] Jan 5 10:48:54.843: INFO: Created: latency-svc-zhg2m Jan 5 10:48:54.971: INFO: Got endpoints: latency-svc-zhg2m [2.75010608s] Jan 5 10:48:55.019: INFO: Created: latency-svc-9xkd6 Jan 5 10:48:55.031: INFO: Got endpoints: latency-svc-9xkd6 [2.759084572s] Jan 5 10:48:55.060: INFO: Created: latency-svc-mzn4v Jan 5 10:48:55.206: INFO: Got endpoints: latency-svc-mzn4v [2.744960789s] Jan 5 10:48:55.219: INFO: Created: latency-svc-dhsdh Jan 5 10:48:55.225: INFO: Got endpoints: latency-svc-dhsdh [2.598052082s] Jan 5 10:48:55.273: INFO: Created: latency-svc-n2vdt Jan 5 10:48:55.299: INFO: Got endpoints: latency-svc-n2vdt [2.447580686s] Jan 5 10:48:55.421: INFO: Created: latency-svc-cvt7f Jan 5 10:48:55.441: INFO: Got endpoints: latency-svc-cvt7f [2.528707497s] Jan 5 10:48:55.485: INFO: Created: latency-svc-g9tnl Jan 5 10:48:55.622: INFO: Got endpoints: latency-svc-g9tnl [2.519103521s] Jan 5 10:48:55.633: INFO: Created: latency-svc-8pbg4 Jan 5 10:48:55.670: INFO: Got endpoints: latency-svc-8pbg4 [2.276429066s] Jan 5 10:48:55.700: INFO: Created: latency-svc-h4gx8 Jan 5 10:48:55.713: INFO: Got endpoints: latency-svc-h4gx8 [2.271716899s] Jan 5 10:48:55.832: INFO: Created: latency-svc-lt2jj Jan 5 10:48:55.848: INFO: Got endpoints: latency-svc-lt2jj [2.143472267s] Jan 5 10:48:55.904: INFO: Created: latency-svc-ccd8d Jan 5 10:48:55.909: INFO: Got endpoints: latency-svc-ccd8d [1.745227063s] Jan 5 10:48:56.028: INFO: Created: latency-svc-ssw5k Jan 5 10:48:56.029: INFO: Got endpoints: latency-svc-ssw5k [1.843924613s] Jan 5 10:48:56.075: INFO: Created: latency-svc-pfcc4 Jan 5 10:48:56.076: INFO: Got endpoints: latency-svc-pfcc4 [1.610795381s] Jan 5 10:48:56.272: INFO: Created: latency-svc-sl5xj Jan 5 10:48:56.317: INFO: Got endpoints: latency-svc-sl5xj [1.698465976s] Jan 5 10:48:56.332: INFO: Created: latency-svc-7962q Jan 5 10:48:56.341: INFO: Got endpoints: latency-svc-7962q [1.548817665s] Jan 5 10:48:56.473: INFO: Created: latency-svc-sj6g6 Jan 5 10:48:56.482: INFO: Got endpoints: latency-svc-sj6g6 [1.511050829s] Jan 5 10:48:56.647: INFO: Created: latency-svc-9lqrc Jan 5 10:48:56.687: INFO: Got endpoints: latency-svc-9lqrc [1.65598913s] Jan 5 10:48:56.827: INFO: Created: latency-svc-fc6lt Jan 5 10:48:56.854: INFO: Got endpoints: latency-svc-fc6lt [1.647603136s] Jan 5 10:48:57.064: INFO: Created: latency-svc-pmf6g Jan 5 10:48:57.077: INFO: Got endpoints: latency-svc-pmf6g [1.851831654s] Jan 5 10:48:57.131: INFO: Created: latency-svc-6rnvg Jan 5 10:48:57.226: INFO: Got endpoints: latency-svc-6rnvg [1.926266405s] Jan 5 10:48:57.294: INFO: Created: latency-svc-78qzg Jan 5 10:48:57.301: INFO: Got endpoints: latency-svc-78qzg 
[1.859499211s] Jan 5 10:48:57.530: INFO: Created: latency-svc-45nbq Jan 5 10:48:57.548: INFO: Got endpoints: latency-svc-45nbq [1.926143841s] Jan 5 10:48:57.748: INFO: Created: latency-svc-9z89f Jan 5 10:48:57.762: INFO: Got endpoints: latency-svc-9z89f [2.091911203s] Jan 5 10:48:57.962: INFO: Created: latency-svc-c2htb Jan 5 10:48:57.989: INFO: Got endpoints: latency-svc-c2htb [2.276134248s] Jan 5 10:48:58.047: INFO: Created: latency-svc-svmh4 Jan 5 10:48:58.161: INFO: Got endpoints: latency-svc-svmh4 [2.313570344s] Jan 5 10:48:58.228: INFO: Created: latency-svc-hnm78 Jan 5 10:48:58.248: INFO: Got endpoints: latency-svc-hnm78 [2.338890032s] Jan 5 10:48:58.343: INFO: Created: latency-svc-5vfzk Jan 5 10:48:58.359: INFO: Got endpoints: latency-svc-5vfzk [2.330714364s] Jan 5 10:48:58.514: INFO: Created: latency-svc-zgg78 Jan 5 10:48:58.539: INFO: Got endpoints: latency-svc-zgg78 [2.462805337s] Jan 5 10:48:58.705: INFO: Created: latency-svc-d5wm2 Jan 5 10:48:58.724: INFO: Created: latency-svc-kjdcn Jan 5 10:48:58.737: INFO: Got endpoints: latency-svc-d5wm2 [2.419778163s] Jan 5 10:48:58.755: INFO: Got endpoints: latency-svc-kjdcn [2.414084653s] Jan 5 10:48:58.873: INFO: Created: latency-svc-hsz98 Jan 5 10:48:58.928: INFO: Got endpoints: latency-svc-hsz98 [2.445651811s] Jan 5 10:48:59.556: INFO: Created: latency-svc-bnfzt Jan 5 10:49:00.160: INFO: Got endpoints: latency-svc-bnfzt [3.472591903s] Jan 5 10:49:00.298: INFO: Created: latency-svc-h8vbr Jan 5 10:49:00.344: INFO: Got endpoints: latency-svc-h8vbr [3.490057363s] Jan 5 10:49:00.610: INFO: Created: latency-svc-6bbkl Jan 5 10:49:00.741: INFO: Got endpoints: latency-svc-6bbkl [3.663554853s] Jan 5 10:49:00.810: INFO: Created: latency-svc-sc7lq Jan 5 10:49:01.068: INFO: Created: latency-svc-p5x44 Jan 5 10:49:01.226: INFO: Got endpoints: latency-svc-sc7lq [3.999814471s] Jan 5 10:49:01.272: INFO: Created: latency-svc-hv2r7 Jan 5 10:49:01.288: INFO: Got endpoints: latency-svc-hv2r7 [3.739969401s] Jan 5 10:49:01.392: INFO: Got endpoints: latency-svc-p5x44 [4.091341642s] Jan 5 10:49:01.409: INFO: Created: latency-svc-d8qbz Jan 5 10:49:01.436: INFO: Got endpoints: latency-svc-d8qbz [3.673178959s] Jan 5 10:49:01.494: INFO: Created: latency-svc-qhz4c Jan 5 10:49:01.753: INFO: Created: latency-svc-pr4fl Jan 5 10:49:01.769: INFO: Got endpoints: latency-svc-qhz4c [3.779808859s] Jan 5 10:49:01.888: INFO: Got endpoints: latency-svc-pr4fl [3.726381649s] Jan 5 10:49:01.911: INFO: Created: latency-svc-9cbzp Jan 5 10:49:01.950: INFO: Got endpoints: latency-svc-9cbzp [3.702002846s] Jan 5 10:49:02.111: INFO: Created: latency-svc-z5bvd Jan 5 10:49:02.149: INFO: Got endpoints: latency-svc-z5bvd [3.789563455s] Jan 5 10:49:02.292: INFO: Created: latency-svc-5hmc9 Jan 5 10:49:02.326: INFO: Got endpoints: latency-svc-5hmc9 [3.78725728s] Jan 5 10:49:02.523: INFO: Created: latency-svc-6qhnz Jan 5 10:49:02.562: INFO: Got endpoints: latency-svc-6qhnz [3.825222296s] Jan 5 10:49:02.686: INFO: Created: latency-svc-r8626 Jan 5 10:49:02.697: INFO: Got endpoints: latency-svc-r8626 [3.941856121s] Jan 5 10:49:02.848: INFO: Created: latency-svc-hwr58 Jan 5 10:49:02.887: INFO: Got endpoints: latency-svc-hwr58 [3.95773428s] Jan 5 10:49:03.033: INFO: Created: latency-svc-s8cwl Jan 5 10:49:03.082: INFO: Got endpoints: latency-svc-s8cwl [2.921327557s] Jan 5 10:49:03.216: INFO: Created: latency-svc-fcqmf Jan 5 10:49:03.229: INFO: Got endpoints: latency-svc-fcqmf [2.884214105s] Jan 5 10:49:03.316: INFO: Created: latency-svc-lmkmk Jan 5 10:49:03.377: INFO: Got endpoints: latency-svc-lmkmk 
[2.635106986s] Jan 5 10:49:03.426: INFO: Created: latency-svc-vq7vm Jan 5 10:49:03.471: INFO: Got endpoints: latency-svc-vq7vm [2.24456081s] Jan 5 10:49:03.710: INFO: Created: latency-svc-sx725 Jan 5 10:49:03.720: INFO: Got endpoints: latency-svc-sx725 [2.431650658s] Jan 5 10:49:03.793: INFO: Created: latency-svc-qq49p Jan 5 10:49:03.968: INFO: Got endpoints: latency-svc-qq49p [2.5754618s] Jan 5 10:49:04.026: INFO: Created: latency-svc-4vl5t Jan 5 10:49:04.041: INFO: Got endpoints: latency-svc-4vl5t [2.605162823s] Jan 5 10:49:04.383: INFO: Created: latency-svc-gmrpz Jan 5 10:49:04.401: INFO: Got endpoints: latency-svc-gmrpz [2.631400766s] Jan 5 10:49:04.521: INFO: Created: latency-svc-2z9vf Jan 5 10:49:04.549: INFO: Got endpoints: latency-svc-2z9vf [2.660806745s] Jan 5 10:49:04.703: INFO: Created: latency-svc-9m8r5 Jan 5 10:49:04.706: INFO: Got endpoints: latency-svc-9m8r5 [2.755262243s] Jan 5 10:49:04.853: INFO: Created: latency-svc-pt6rc Jan 5 10:49:04.880: INFO: Got endpoints: latency-svc-pt6rc [2.731064497s] Jan 5 10:49:04.953: INFO: Created: latency-svc-cx5zc Jan 5 10:49:05.104: INFO: Got endpoints: latency-svc-cx5zc [2.777433778s] Jan 5 10:49:05.111: INFO: Created: latency-svc-8wkfw Jan 5 10:49:05.122: INFO: Got endpoints: latency-svc-8wkfw [2.558967031s] Jan 5 10:49:05.179: INFO: Created: latency-svc-jbrz4 Jan 5 10:49:05.271: INFO: Got endpoints: latency-svc-jbrz4 [2.574220617s] Jan 5 10:49:05.293: INFO: Created: latency-svc-njl25 Jan 5 10:49:05.322: INFO: Got endpoints: latency-svc-njl25 [2.435516188s] Jan 5 10:49:05.323: INFO: Latencies: [167.179683ms 233.674684ms 367.79113ms 672.413685ms 714.490582ms 921.30825ms 1.129109157s 1.160263354s 1.37751142s 1.425180937s 1.511050829s 1.548817665s 1.595182204s 1.610795381s 1.647603136s 1.65598913s 1.698465976s 1.745227063s 1.843924613s 1.848227349s 1.851831654s 1.859499211s 1.926143841s 1.926266405s 2.0040381s 2.004833537s 2.050471415s 2.07258046s 2.085287907s 2.088768007s 2.091911203s 2.098652212s 2.133036173s 2.137155026s 2.139796192s 2.143472267s 2.149145469s 2.173010695s 2.221552696s 2.234400178s 2.236658941s 2.237923609s 2.24456081s 2.245582458s 2.251082729s 2.258159771s 2.267085134s 2.271716899s 2.276134248s 2.276429066s 2.278863633s 2.295332045s 2.310202321s 2.313570344s 2.322267223s 2.330714364s 2.330961761s 2.331672099s 2.33267609s 2.336495768s 2.338866014s 2.338890032s 2.350759347s 2.353619897s 2.354255132s 2.356722402s 2.36255158s 2.37069854s 2.378528276s 2.381701454s 2.390737215s 2.399924335s 2.406094881s 2.414084653s 2.419778163s 2.420247484s 2.423338602s 2.431650658s 2.432779825s 2.435516188s 2.435796058s 2.438973199s 2.445651811s 2.447580686s 2.452172706s 2.452490644s 2.454727198s 2.462805337s 2.462827726s 2.463212204s 2.466567361s 2.473763549s 2.481675544s 2.483573085s 2.485670106s 2.487116034s 2.513116051s 2.519103521s 2.519642482s 2.522804013s 2.528707497s 2.542434316s 2.549793127s 2.551023467s 2.558967031s 2.569513284s 2.570866783s 2.574220617s 2.5754618s 2.582390306s 2.598052082s 2.602317671s 2.605162823s 2.606635911s 2.609241372s 2.620917115s 2.626593117s 2.631400766s 2.633852355s 2.635106986s 2.635482976s 2.651323339s 2.660806745s 2.67003331s 2.717906521s 2.731064497s 2.734334203s 2.738791314s 2.740259031s 2.744960789s 2.75010608s 2.754719331s 2.755262243s 2.759084572s 2.762981659s 2.770150238s 2.771485568s 2.77393449s 2.777433778s 2.782620165s 2.783707134s 2.787601708s 2.79137905s 2.798689152s 2.816873665s 2.820040689s 2.826694363s 2.831878253s 2.845244719s 2.870961356s 2.873410703s 2.884214105s 2.921327557s 
2.922823308s 2.923123582s 2.943906496s 2.950194486s 2.956166172s 2.958838223s 2.986704283s 2.997198854s 2.9977409s 2.997842153s 3.000589407s 3.036436948s 3.038109226s 3.056998817s 3.061474502s 3.077224759s 3.082549856s 3.128196896s 3.132976958s 3.142222537s 3.14582004s 3.153708194s 3.158967813s 3.168000989s 3.1730424s 3.215029529s 3.229655969s 3.239985382s 3.291259021s 3.35565666s 3.389838138s 3.472591903s 3.486605851s 3.490057363s 3.663554853s 3.673178959s 3.702002846s 3.726381649s 3.739969401s 3.779808859s 3.78725728s 3.789563455s 3.825222296s 3.941856121s 3.95773428s 3.999814471s 4.091341642s] Jan 5 10:49:05.323: INFO: 50 %ile: 2.528707497s Jan 5 10:49:05.324: INFO: 90 %ile: 3.239985382s Jan 5 10:49:05.324: INFO: 99 %ile: 3.999814471s Jan 5 10:49:05.324: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:49:05.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-fnm8t" for this suite. Jan 5 10:49:57.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:49:57.576: INFO: namespace: e2e-tests-svc-latency-fnm8t, resource: bindings, ignored listing per whitelist Jan 5 10:49:57.634: INFO: namespace e2e-tests-svc-latency-fnm8t deletion completed in 52.282913026s • [SLOW TEST:96.670 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:49:57.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:49:57.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-f4nf6" for this suite. 
Jan 5 10:50:22.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:50:22.167: INFO: namespace: e2e-tests-pods-f4nf6, resource: bindings, ignored listing per whitelist Jan 5 10:50:22.230: INFO: namespace e2e-tests-pods-f4nf6 deletion completed in 24.225343973s • [SLOW TEST:24.596 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:50:22.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-mp56q.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-mp56q.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-mp56q.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-mp56q.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-mp56q.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-mp56q.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 5 10:50:36.637: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.668: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.706: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.733: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.741: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.752: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod 
e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.783: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-mp56q.svc.cluster.local from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.806: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.813: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.821: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.826: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.828: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.831: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.836: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.841: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.844: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.848: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-mp56q.svc.cluster.local from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.854: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.859: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods 
dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.864: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004) Jan 5 10:50:36.864: INFO: Lookups using e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-mp56q.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-mp56q.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Jan 5 10:50:42.066: INFO: DNS probes using e2e-tests-dns-mp56q/dns-test-2c1ccc9a-2fa9-11ea-910c-0242ac110004 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:50:42.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-mp56q" for this suite. Jan 5 10:50:50.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:50:50.755: INFO: namespace: e2e-tests-dns-mp56q, resource: bindings, ignored listing per whitelist Jan 5 10:50:50.762: INFO: namespace e2e-tests-dns-mp56q deletion completed in 8.483827727s • [SLOW TEST:28.531 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:50:50.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0105 10:51:32.126434 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
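The orphaning behaviour this garbage-collector case exercises (delete the replication controller but keep the pods it created) can be reproduced by hand with kubectl. The sketch below is illustrative only; the controller name, label, and image are placeholders and are not taken from this run, which drives the delete through the API rather than the CLI.

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: busybox
        image: docker.io/library/busybox:1.29
        command: ["sleep", "3600"]
EOF

# Delete only the controller; --cascade=false tells kubectl not to delete the
# dependent pods, so they are orphaned instead of garbage-collected.
kubectl delete rc gc-demo --cascade=false

# The pods should still be listed after the controller is gone.
kubectl get pods -l app=gc-demo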
Jan 5 10:51:32.126: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:51:32.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-qsbb4" for this suite. Jan 5 10:51:43.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:51:44.421: INFO: namespace: e2e-tests-gc-qsbb4, resource: bindings, ignored listing per whitelist Jan 5 10:51:44.504: INFO: namespace e2e-tests-gc-qsbb4 deletion completed in 12.369079833s • [SLOW TEST:53.742 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:51:44.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-5d4bfba1-2fa9-11ea-910c-0242ac110004 STEP: Creating a pod to test consume configMaps Jan 5 10:51:45.102: INFO: Waiting up to 5m0s for pod "pod-configmaps-5d60496f-2fa9-11ea-910c-0242ac110004" in namespace "e2e-tests-configmap-9rbn7" to be "success or failure" Jan 5 10:51:45.155: INFO: Pod "pod-configmaps-5d60496f-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 53.117939ms Jan 5 10:51:47.265: INFO: Pod "pod-configmaps-5d60496f-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.163439283s Jan 5 10:51:49.277: INFO: Pod "pod-configmaps-5d60496f-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175675391s Jan 5 10:51:51.299: INFO: Pod "pod-configmaps-5d60496f-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.197471932s Jan 5 10:51:53.317: INFO: Pod "pod-configmaps-5d60496f-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21490336s Jan 5 10:51:55.733: INFO: Pod "pod-configmaps-5d60496f-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.631065385s Jan 5 10:51:57.752: INFO: Pod "pod-configmaps-5d60496f-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.650223269s Jan 5 10:51:59.774: INFO: Pod "pod-configmaps-5d60496f-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.672583639s Jan 5 10:52:01.817: INFO: Pod "pod-configmaps-5d60496f-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.715141528s Jan 5 10:52:03.845: INFO: Pod "pod-configmaps-5d60496f-2fa9-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.743285961s STEP: Saw pod success Jan 5 10:52:03.845: INFO: Pod "pod-configmaps-5d60496f-2fa9-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 10:52:03.883: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5d60496f-2fa9-11ea-910c-0242ac110004 container configmap-volume-test: STEP: delete the pod Jan 5 10:52:04.144: INFO: Waiting for pod pod-configmaps-5d60496f-2fa9-11ea-910c-0242ac110004 to disappear Jan 5 10:52:04.158: INFO: Pod pod-configmaps-5d60496f-2fa9-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:52:04.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-9rbn7" for this suite. 
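The "multiple volumes in the same pod" ConfigMap case above mounts one ConfigMap through two separate volumes of a single pod. A rough hand-built equivalent could look like the following sketch; the ConfigMap name, key, pod name, and mount paths are illustrative placeholders, not values from this run.

kubectl create configmap demo-configmap --from-literal=data-1=value-1

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: demo-configmap
  - name: configmap-volume-2
    configMap:
      name: demo-configmap
EOF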
Jan 5 10:52:10.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:52:10.411: INFO: namespace: e2e-tests-configmap-9rbn7, resource: bindings, ignored listing per whitelist Jan 5 10:52:10.578: INFO: namespace e2e-tests-configmap-9rbn7 deletion completed in 6.411465294s • [SLOW TEST:26.072 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:52:10.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-6cb23ec9-2fa9-11ea-910c-0242ac110004 STEP: Creating a pod to test consume configMaps Jan 5 10:52:10.807: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6cb39d2b-2fa9-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-97tsw" to be "success or failure" Jan 5 10:52:10.818: INFO: Pod "pod-projected-configmaps-6cb39d2b-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.610803ms Jan 5 10:52:12.832: INFO: Pod "pod-projected-configmaps-6cb39d2b-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02499814s Jan 5 10:52:14.866: INFO: Pod "pod-projected-configmaps-6cb39d2b-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058906142s Jan 5 10:52:16.886: INFO: Pod "pod-projected-configmaps-6cb39d2b-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078754867s Jan 5 10:52:18.906: INFO: Pod "pod-projected-configmaps-6cb39d2b-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09866124s Jan 5 10:52:20.922: INFO: Pod "pod-projected-configmaps-6cb39d2b-2fa9-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.115378436s STEP: Saw pod success Jan 5 10:52:20.922: INFO: Pod "pod-projected-configmaps-6cb39d2b-2fa9-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 10:52:20.929: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-6cb39d2b-2fa9-11ea-910c-0242ac110004 container projected-configmap-volume-test: STEP: delete the pod Jan 5 10:52:21.314: INFO: Waiting for pod pod-projected-configmaps-6cb39d2b-2fa9-11ea-910c-0242ac110004 to disappear Jan 5 10:52:21.662: INFO: Pod pod-projected-configmaps-6cb39d2b-2fa9-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:52:21.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-97tsw" for this suite. Jan 5 10:52:27.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:52:28.021: INFO: namespace: e2e-tests-projected-97tsw, resource: bindings, ignored listing per whitelist Jan 5 10:52:28.065: INFO: namespace e2e-tests-projected-97tsw deletion completed in 6.38741051s • [SLOW TEST:17.487 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:52:28.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Jan 5 10:52:28.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jdv88' Jan 5 10:52:30.383: INFO: stderr: "" Jan 5 10:52:30.383: INFO: stdout: "pod/pause created\n" Jan 5 10:52:30.383: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 5 10:52:30.384: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-jdv88" to be "running and ready" Jan 5 10:52:30.429: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 45.480993ms Jan 5 10:52:32.448: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06408466s Jan 5 10:52:34.471: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087129119s Jan 5 10:52:36.506: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122792412s Jan 5 10:52:38.541: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.157135024s Jan 5 10:52:40.563: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.179759997s Jan 5 10:52:40.564: INFO: Pod "pause" satisfied condition "running and ready" Jan 5 10:52:40.564: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Jan 5 10:52:40.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-jdv88' Jan 5 10:52:40.722: INFO: stderr: "" Jan 5 10:52:40.722: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 5 10:52:40.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-jdv88' Jan 5 10:52:40.842: INFO: stderr: "" Jan 5 10:52:40.842: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 5 10:52:40.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-jdv88' Jan 5 10:52:40.958: INFO: stderr: "" Jan 5 10:52:40.958: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 5 10:52:40.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-jdv88' Jan 5 10:52:41.137: INFO: stderr: "" Jan 5 10:52:41.138: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Jan 5 10:52:41.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jdv88' Jan 5 10:52:41.306: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 5 10:52:41.307: INFO: stdout: "pod \"pause\" force deleted\n" Jan 5 10:52:41.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-jdv88' Jan 5 10:52:41.431: INFO: stderr: "No resources found.\n" Jan 5 10:52:41.431: INFO: stdout: "" Jan 5 10:52:41.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-jdv88 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 5 10:52:41.535: INFO: stderr: "" Jan 5 10:52:41.535: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:52:41.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jdv88" for this suite. 
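Stripped of the test harness flags (--kubeconfig and --namespace), the label manipulation exercised above boils down to three kubectl invocations against the same "pause" pod the log shows; the -L column flag and the trailing-dash removal syntax are exactly what the test runs.

# Add the label to the pod.
kubectl label pods pause testing-label=testing-label-value

# Show the pod with the label value as an extra column.
kubectl get pod pause -L testing-label

# Remove the label again; the trailing dash means "delete this key".
kubectl label pods pause testing-label-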
Jan 5 10:52:47.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:52:47.729: INFO: namespace: e2e-tests-kubectl-jdv88, resource: bindings, ignored listing per whitelist Jan 5 10:52:47.750: INFO: namespace e2e-tests-kubectl-jdv88 deletion completed in 6.202376332s • [SLOW TEST:19.685 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:52:47.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Jan 5 10:52:47.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-m2jtb run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 5 10:52:58.332: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0105 10:52:56.713281 218 log.go:172] (0xc00014c6e0) (0xc0009763c0) Create stream\nI0105 10:52:56.713888 218 log.go:172] (0xc00014c6e0) (0xc0009763c0) Stream added, broadcasting: 1\nI0105 10:52:56.723671 218 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0105 10:52:56.723706 218 log.go:172] (0xc00014c6e0) (0xc000976460) Create stream\nI0105 10:52:56.723712 218 log.go:172] (0xc00014c6e0) (0xc000976460) Stream added, broadcasting: 3\nI0105 10:52:56.726867 218 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0105 10:52:56.726931 218 log.go:172] (0xc00014c6e0) (0xc000976500) Create stream\nI0105 10:52:56.726943 218 log.go:172] (0xc00014c6e0) (0xc000976500) Stream added, broadcasting: 5\nI0105 10:52:56.728341 218 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0105 10:52:56.728393 218 log.go:172] (0xc00014c6e0) (0xc00088a000) Create stream\nI0105 10:52:56.728432 218 log.go:172] (0xc00014c6e0) (0xc00088a000) Stream added, broadcasting: 7\nI0105 10:52:56.729986 218 log.go:172] (0xc00014c6e0) Reply frame received for 7\nI0105 10:52:56.730523 218 log.go:172] (0xc000976460) (3) Writing data frame\nI0105 10:52:56.730758 218 log.go:172] (0xc000976460) (3) Writing data frame\nI0105 10:52:56.783049 218 log.go:172] (0xc00014c6e0) Data frame received for 5\nI0105 10:52:56.783087 218 log.go:172] (0xc000976500) (5) Data frame handling\nI0105 10:52:56.783112 218 log.go:172] (0xc000976500) (5) Data frame sent\nI0105 10:52:56.783124 218 log.go:172] (0xc00014c6e0) Data frame received for 5\nI0105 10:52:56.783141 218 log.go:172] (0xc000976500) (5) Data frame handling\nI0105 10:52:56.783200 218 log.go:172] (0xc000976500) (5) Data frame sent\nI0105 10:52:58.273630 218 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0105 10:52:58.273992 218 log.go:172] (0xc00014c6e0) (0xc000976460) Stream removed, broadcasting: 3\nI0105 10:52:58.274088 218 log.go:172] (0xc0009763c0) (1) Data frame handling\nI0105 10:52:58.274114 218 log.go:172] (0xc0009763c0) (1) Data frame sent\nI0105 10:52:58.274149 218 log.go:172] (0xc00014c6e0) (0xc000976500) Stream removed, broadcasting: 5\nI0105 10:52:58.274225 218 log.go:172] (0xc00014c6e0) (0xc00088a000) Stream removed, broadcasting: 7\nI0105 10:52:58.274275 218 log.go:172] (0xc00014c6e0) (0xc0009763c0) Stream removed, broadcasting: 1\nI0105 10:52:58.274301 218 log.go:172] (0xc00014c6e0) Go away received\nI0105 10:52:58.274386 218 log.go:172] (0xc00014c6e0) (0xc0009763c0) Stream removed, broadcasting: 1\nI0105 10:52:58.274409 218 log.go:172] (0xc00014c6e0) (0xc000976460) Stream removed, broadcasting: 3\nI0105 10:52:58.274425 218 log.go:172] (0xc00014c6e0) (0xc000976500) Stream removed, broadcasting: 5\nI0105 10:52:58.274441 218 log.go:172] (0xc00014c6e0) (0xc00088a000) Stream removed, broadcasting: 7\n" Jan 5 10:52:58.332: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:53:00.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-m2jtb" for this suite. 
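The command under test above creates a Job, attaches to its pod, feeds it stdin, and deletes the Job when the command exits. Reproducing it by hand looks roughly like the line below; the flags are the ones visible in the log, and printf supplies the same "abcd1234" input the framework wrote (inferred from the captured stdout). As the deprecation warning notes, newer clients should prefer --generator=run-pod/v1 or kubectl create job.

printf 'abcd1234' | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin -- sh -c 'cat && echo stdin closed'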
Jan 5 10:53:06.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:53:06.776: INFO: namespace: e2e-tests-kubectl-m2jtb, resource: bindings, ignored listing per whitelist Jan 5 10:53:06.844: INFO: namespace e2e-tests-kubectl-m2jtb deletion completed in 6.475591615s • [SLOW TEST:19.092 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:53:06.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-8e513cd7-2fa9-11ea-910c-0242ac110004 STEP: Creating a pod to test consume configMaps Jan 5 10:53:07.221: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8e529881-2fa9-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-6c4rx" to be "success or failure" Jan 5 10:53:07.305: INFO: Pod "pod-projected-configmaps-8e529881-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 83.872593ms Jan 5 10:53:09.319: INFO: Pod "pod-projected-configmaps-8e529881-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097521288s Jan 5 10:53:11.340: INFO: Pod "pod-projected-configmaps-8e529881-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118882493s Jan 5 10:53:13.480: INFO: Pod "pod-projected-configmaps-8e529881-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.259129439s Jan 5 10:53:15.621: INFO: Pod "pod-projected-configmaps-8e529881-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.400217589s Jan 5 10:53:17.972: INFO: Pod "pod-projected-configmaps-8e529881-2fa9-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.750493823s STEP: Saw pod success Jan 5 10:53:17.972: INFO: Pod "pod-projected-configmaps-8e529881-2fa9-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 10:53:17.997: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-8e529881-2fa9-11ea-910c-0242ac110004 container projected-configmap-volume-test: STEP: delete the pod Jan 5 10:53:18.375: INFO: Waiting for pod pod-projected-configmaps-8e529881-2fa9-11ea-910c-0242ac110004 to disappear Jan 5 10:53:18.418: INFO: Pod pod-projected-configmaps-8e529881-2fa9-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:53:18.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6c4rx" for this suite. Jan 5 10:53:24.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:53:24.558: INFO: namespace: e2e-tests-projected-6c4rx, resource: bindings, ignored listing per whitelist Jan 5 10:53:24.720: INFO: namespace e2e-tests-projected-6c4rx deletion completed in 6.287344466s • [SLOW TEST:17.875 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:53:24.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-98dead65-2fa9-11ea-910c-0242ac110004 STEP: Creating a pod to test consume secrets Jan 5 10:53:24.905: INFO: Waiting up to 5m0s for pod "pod-secrets-98df3593-2fa9-11ea-910c-0242ac110004" in namespace "e2e-tests-secrets-82j5v" to be "success or failure" Jan 5 10:53:24.913: INFO: Pod "pod-secrets-98df3593-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.777076ms Jan 5 10:53:27.235: INFO: Pod "pod-secrets-98df3593-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329761975s Jan 5 10:53:29.252: INFO: Pod "pod-secrets-98df3593-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346580251s Jan 5 10:53:31.274: INFO: Pod "pod-secrets-98df3593-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.368502224s Jan 5 10:53:33.288: INFO: Pod "pod-secrets-98df3593-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.382329875s Jan 5 10:53:35.452: INFO: Pod "pod-secrets-98df3593-2fa9-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.546526519s STEP: Saw pod success Jan 5 10:53:35.452: INFO: Pod "pod-secrets-98df3593-2fa9-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 10:53:35.461: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-98df3593-2fa9-11ea-910c-0242ac110004 container secret-volume-test: STEP: delete the pod Jan 5 10:53:35.861: INFO: Waiting for pod pod-secrets-98df3593-2fa9-11ea-910c-0242ac110004 to disappear Jan 5 10:53:35.928: INFO: Pod pod-secrets-98df3593-2fa9-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:53:35.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-82j5v" for this suite. Jan 5 10:53:42.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:53:42.224: INFO: namespace: e2e-tests-secrets-82j5v, resource: bindings, ignored listing per whitelist Jan 5 10:53:42.271: INFO: namespace e2e-tests-secrets-82j5v deletion completed in 6.271619221s • [SLOW TEST:17.550 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:53:42.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-bx48c Jan 5 10:53:52.702: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-bx48c STEP: checking the pod's current state and verifying that restartCount is present Jan 5 10:53:52.707: INFO: Initial restart count of pod liveness-http is 0 Jan 5 10:54:19.580: INFO: Restart count of pod e2e-tests-container-probe-bx48c/liveness-http is now 1 (26.872997193s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:54:19.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-bx48c" for this suite. 
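The probing case above creates a pod whose container eventually fails its /healthz endpoint and waits for the kubelet to restart it (the restart count goes from 0 to 1 in the log). A hand-written pod with the same shape of probe might look like the sketch below; only the /healthz path comes from the test name, while the image, port, and timing values are assumptions borrowed from the upstream liveness example rather than from this run.

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    # Image and args follow the upstream liveness example, not this log: the
    # server listens on 8080 and starts failing /healthz after a short while.
    image: k8s.gcr.io/liveness
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1
EOF

# Watch the RESTARTS column climb once the probe starts failing, as in the log.
kubectl get pod liveness-http -w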
Jan 5 10:54:25.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:54:25.912: INFO: namespace: e2e-tests-container-probe-bx48c, resource: bindings, ignored listing per whitelist Jan 5 10:54:26.123: INFO: namespace e2e-tests-container-probe-bx48c deletion completed in 6.44674771s • [SLOW TEST:43.852 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:54:26.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jan 5 10:54:26.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:26.889: INFO: stderr: "" Jan 5 10:54:26.889: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 5 10:54:26.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:27.106: INFO: stderr: "" Jan 5 10:54:27.106: INFO: stdout: "update-demo-nautilus-clzmk update-demo-nautilus-nmlsz " Jan 5 10:54:27.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-clzmk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:27.287: INFO: stderr: "" Jan 5 10:54:27.287: INFO: stdout: "" Jan 5 10:54:27.287: INFO: update-demo-nautilus-clzmk is created but not running Jan 5 10:54:32.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:32.549: INFO: stderr: "" Jan 5 10:54:32.549: INFO: stdout: "update-demo-nautilus-clzmk update-demo-nautilus-nmlsz " Jan 5 10:54:32.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-clzmk -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:32.660: INFO: stderr: "" Jan 5 10:54:32.660: INFO: stdout: "" Jan 5 10:54:32.660: INFO: update-demo-nautilus-clzmk is created but not running Jan 5 10:54:37.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:37.826: INFO: stderr: "" Jan 5 10:54:37.827: INFO: stdout: "update-demo-nautilus-clzmk update-demo-nautilus-nmlsz " Jan 5 10:54:37.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-clzmk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:38.028: INFO: stderr: "" Jan 5 10:54:38.028: INFO: stdout: "true" Jan 5 10:54:38.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-clzmk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:38.197: INFO: stderr: "" Jan 5 10:54:38.197: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 5 10:54:38.197: INFO: validating pod update-demo-nautilus-clzmk Jan 5 10:54:38.236: INFO: got data: { "image": "nautilus.jpg" } Jan 5 10:54:38.237: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 5 10:54:38.237: INFO: update-demo-nautilus-clzmk is verified up and running Jan 5 10:54:38.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nmlsz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:38.341: INFO: stderr: "" Jan 5 10:54:38.341: INFO: stdout: "" Jan 5 10:54:38.342: INFO: update-demo-nautilus-nmlsz is created but not running Jan 5 10:54:43.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:43.526: INFO: stderr: "" Jan 5 10:54:43.526: INFO: stdout: "update-demo-nautilus-clzmk update-demo-nautilus-nmlsz " Jan 5 10:54:43.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-clzmk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:43.744: INFO: stderr: "" Jan 5 10:54:43.744: INFO: stdout: "true" Jan 5 10:54:43.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-clzmk -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:43.849: INFO: stderr: "" Jan 5 10:54:43.849: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 5 10:54:43.849: INFO: validating pod update-demo-nautilus-clzmk Jan 5 10:54:43.900: INFO: got data: { "image": "nautilus.jpg" } Jan 5 10:54:43.900: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 5 10:54:43.900: INFO: update-demo-nautilus-clzmk is verified up and running Jan 5 10:54:43.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nmlsz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:44.014: INFO: stderr: "" Jan 5 10:54:44.014: INFO: stdout: "true" Jan 5 10:54:44.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nmlsz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:44.119: INFO: stderr: "" Jan 5 10:54:44.119: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 5 10:54:44.119: INFO: validating pod update-demo-nautilus-nmlsz Jan 5 10:54:44.136: INFO: got data: { "image": "nautilus.jpg" } Jan 5 10:54:44.136: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 5 10:54:44.136: INFO: update-demo-nautilus-nmlsz is verified up and running STEP: using delete to clean up resources Jan 5 10:54:44.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:44.268: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 5 10:54:44.268: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 5 10:54:44.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-4pvgr' Jan 5 10:54:44.416: INFO: stderr: "No resources found.\n" Jan 5 10:54:44.416: INFO: stdout: "" Jan 5 10:54:44.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-4pvgr -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 5 10:54:44.591: INFO: stderr: "" Jan 5 10:54:44.591: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:54:44.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4pvgr" for this suite. 
Jan 5 10:55:06.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:55:06.932: INFO: namespace: e2e-tests-kubectl-4pvgr, resource: bindings, ignored listing per whitelist Jan 5 10:55:06.996: INFO: namespace e2e-tests-kubectl-4pvgr deletion completed in 22.362154151s • [SLOW TEST:40.873 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:55:06.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-d5cb4621-2fa9-11ea-910c-0242ac110004 STEP: Creating a pod to test consume configMaps Jan 5 10:55:07.123: INFO: Waiting up to 5m0s for pod "pod-configmaps-d5cc17f8-2fa9-11ea-910c-0242ac110004" in namespace "e2e-tests-configmap-b4shf" to be "success or failure" Jan 5 10:55:07.194: INFO: Pod "pod-configmaps-d5cc17f8-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 71.623704ms Jan 5 10:55:09.209: INFO: Pod "pod-configmaps-d5cc17f8-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086314098s Jan 5 10:55:11.223: INFO: Pod "pod-configmaps-d5cc17f8-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100380572s Jan 5 10:55:13.236: INFO: Pod "pod-configmaps-d5cc17f8-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113494701s Jan 5 10:55:15.302: INFO: Pod "pod-configmaps-d5cc17f8-2fa9-11ea-910c-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 8.179445516s Jan 5 10:55:17.333: INFO: Pod "pod-configmaps-d5cc17f8-2fa9-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.210307447s STEP: Saw pod success Jan 5 10:55:17.333: INFO: Pod "pod-configmaps-d5cc17f8-2fa9-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 10:55:17.341: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d5cc17f8-2fa9-11ea-910c-0242ac110004 container configmap-volume-test: STEP: delete the pod Jan 5 10:55:17.569: INFO: Waiting for pod pod-configmaps-d5cc17f8-2fa9-11ea-910c-0242ac110004 to disappear Jan 5 10:55:17.579: INFO: Pod pod-configmaps-d5cc17f8-2fa9-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:55:17.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-b4shf" for this suite. Jan 5 10:55:23.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:55:23.808: INFO: namespace: e2e-tests-configmap-b4shf, resource: bindings, ignored listing per whitelist Jan 5 10:55:23.856: INFO: namespace e2e-tests-configmap-b4shf deletion completed in 6.265861648s • [SLOW TEST:16.860 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:55:23.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 5 10:55:24.227: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dff8d27a-2fa9-11ea-910c-0242ac110004" in namespace "e2e-tests-downward-api-kn4zq" to be "success or failure" Jan 5 10:55:24.251: INFO: Pod "downwardapi-volume-dff8d27a-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 23.670117ms Jan 5 10:55:26.375: INFO: Pod "downwardapi-volume-dff8d27a-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147414698s Jan 5 10:55:28.402: INFO: Pod "downwardapi-volume-dff8d27a-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175012839s Jan 5 10:55:30.797: INFO: Pod "downwardapi-volume-dff8d27a-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.570046045s Jan 5 10:55:32.820: INFO: Pod "downwardapi-volume-dff8d27a-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.592681335s Jan 5 10:55:34.849: INFO: Pod "downwardapi-volume-dff8d27a-2fa9-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.622082495s STEP: Saw pod success Jan 5 10:55:34.849: INFO: Pod "downwardapi-volume-dff8d27a-2fa9-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 10:55:34.862: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-dff8d27a-2fa9-11ea-910c-0242ac110004 container client-container: STEP: delete the pod Jan 5 10:55:35.038: INFO: Waiting for pod downwardapi-volume-dff8d27a-2fa9-11ea-910c-0242ac110004 to disappear Jan 5 10:55:35.059: INFO: Pod downwardapi-volume-dff8d27a-2fa9-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:55:35.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-kn4zq" for this suite. Jan 5 10:55:41.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:55:41.286: INFO: namespace: e2e-tests-downward-api-kn4zq, resource: bindings, ignored listing per whitelist Jan 5 10:55:41.299: INFO: namespace e2e-tests-downward-api-kn4zq deletion completed in 6.182496724s • [SLOW TEST:17.442 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:55:41.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 5 10:55:41.432: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 5 10:55:41.468: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 5 10:55:46.855: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 5 10:55:50.903: INFO: Creating deployment "test-rolling-update-deployment" Jan 5 10:55:50.930: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 5 10:55:51.070: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 5 10:55:53.098: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 5 10:55:53.315: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 5 10:55:55.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 5 10:55:57.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 5 10:55:59.320: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713818551, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 5 10:56:02.245: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 5 10:56:02.288: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-cxpkd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cxpkd/deployments/test-rolling-update-deployment,UID:efe6a33e-2fa9-11ea-a994-fa163e34d433,ResourceVersion:17242109,Generation:1,CreationTimestamp:2020-01-05 10:55:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-05 10:55:51 +0000 UTC 2020-01-05 10:55:51 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-05 10:56:00 +0000 UTC 2020-01-05 10:55:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 5 10:56:02.295: INFO: New ReplicaSet 
"test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-cxpkd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cxpkd/replicasets/test-rolling-update-deployment-75db98fb4c,UID:f0061678-2fa9-11ea-a994-fa163e34d433,ResourceVersion:17242100,Generation:1,CreationTimestamp:2020-01-05 10:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment efe6a33e-2fa9-11ea-a994-fa163e34d433 0xc00117c4e7 0xc00117c4e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 5 10:56:02.295: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 5 10:56:02.296: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-cxpkd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cxpkd/replicasets/test-rolling-update-controller,UID:ea41b7d9-2fa9-11ea-a994-fa163e34d433,ResourceVersion:17242108,Generation:2,CreationTimestamp:2020-01-05 10:55:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 
1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment efe6a33e-2fa9-11ea-a994-fa163e34d433 0xc00117c427 0xc00117c428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 5 10:56:02.300: INFO: Pod "test-rolling-update-deployment-75db98fb4c-dmjjs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-dmjjs,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-cxpkd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cxpkd/pods/test-rolling-update-deployment-75db98fb4c-dmjjs,UID:f008c9bc-2fa9-11ea-a994-fa163e34d433,ResourceVersion:17242099,Generation:0,CreationTimestamp:2020-01-05 10:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c f0061678-2fa9-11ea-a994-fa163e34d433 0xc00207d3f7 0xc00207d3f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pjxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pjxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-pjxpc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00207d460} {node.kubernetes.io/unreachable Exists NoExecute 0xc00207d480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 10:55:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 10:55:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 10:55:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 10:55:51 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-05 10:55:51 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-05 10:55:58 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://24b43f1a85b1c02f3012b8c249dfa9c5ad17e7b23c474b8fbc05c917c4661045}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:56:02.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-cxpkd" for this suite. 
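
The block above exercises two things at once: a Deployment adopting a pre-existing ReplicaSet ("test-rolling-update-controller") whose pods match its selector, and a rolling update that replaces those adopted pods with pods from the new template. The manifests the framework generates are not printed in this log; the following is only a rough kubectl sketch of the same flow, with the names and images copied from the log and every other field assumed.

# Sketch, not the test's actual manifests: a bare ReplicaSet, then a Deployment
# with the same selector but a different pod template, forcing a rolling replacement.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: test-rolling-update-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# The adopted ReplicaSet should end up scaled to 0 and the new one to 1,
# matching the two ReplicaSet dumps above.
kubectl rollout status deployment/test-rolling-update-deployment
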
Jan 5 10:56:10.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:56:10.385: INFO: namespace: e2e-tests-deployment-cxpkd, resource: bindings, ignored listing per whitelist Jan 5 10:56:11.541: INFO: namespace e2e-tests-deployment-cxpkd deletion completed in 9.235362134s • [SLOW TEST:30.241 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:56:11.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-fc5f4bd5-2fa9-11ea-910c-0242ac110004 STEP: Creating a pod to test consume secrets Jan 5 10:56:12.048: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fc6f5b4f-2fa9-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-ts65w" to be "success or failure" Jan 5 10:56:12.166: INFO: Pod "pod-projected-secrets-fc6f5b4f-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 117.283778ms Jan 5 10:56:14.186: INFO: Pod "pod-projected-secrets-fc6f5b4f-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137918654s Jan 5 10:56:16.199: INFO: Pod "pod-projected-secrets-fc6f5b4f-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15027935s Jan 5 10:56:18.835: INFO: Pod "pod-projected-secrets-fc6f5b4f-2fa9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.787162806s Jan 5 10:56:20.852: INFO: Pod "pod-projected-secrets-fc6f5b4f-2fa9-11ea-910c-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 8.803537777s Jan 5 10:56:22.910: INFO: Pod "pod-projected-secrets-fc6f5b4f-2fa9-11ea-910c-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 10.861578364s Jan 5 10:56:24.932: INFO: Pod "pod-projected-secrets-fc6f5b4f-2fa9-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.883293211s STEP: Saw pod success Jan 5 10:56:24.932: INFO: Pod "pod-projected-secrets-fc6f5b4f-2fa9-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 10:56:24.955: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-fc6f5b4f-2fa9-11ea-910c-0242ac110004 container secret-volume-test: STEP: delete the pod Jan 5 10:56:25.146: INFO: Waiting for pod pod-projected-secrets-fc6f5b4f-2fa9-11ea-910c-0242ac110004 to disappear Jan 5 10:56:25.157: INFO: Pod pod-projected-secrets-fc6f5b4f-2fa9-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:56:25.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ts65w" for this suite. Jan 5 10:56:31.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:56:31.417: INFO: namespace: e2e-tests-projected-ts65w, resource: bindings, ignored listing per whitelist Jan 5 10:56:31.452: INFO: namespace e2e-tests-projected-ts65w deletion completed in 6.283428139s • [SLOW TEST:19.911 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:56:31.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jan 5 10:56:31.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sdbx4' Jan 5 10:56:32.274: INFO: stderr: "" Jan 5 10:56:32.274: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Jan 5 10:56:33.423: INFO: Selector matched 1 pods for map[app:redis] Jan 5 10:56:33.423: INFO: Found 0 / 1 Jan 5 10:56:34.316: INFO: Selector matched 1 pods for map[app:redis] Jan 5 10:56:34.316: INFO: Found 0 / 1 Jan 5 10:56:35.291: INFO: Selector matched 1 pods for map[app:redis] Jan 5 10:56:35.291: INFO: Found 0 / 1 Jan 5 10:56:36.305: INFO: Selector matched 1 pods for map[app:redis] Jan 5 10:56:36.305: INFO: Found 0 / 1 Jan 5 10:56:37.628: INFO: Selector matched 1 pods for map[app:redis] Jan 5 10:56:37.628: INFO: Found 0 / 1 Jan 5 10:56:38.291: INFO: Selector matched 1 pods for map[app:redis] Jan 5 10:56:38.292: INFO: Found 0 / 1 Jan 5 10:56:39.435: INFO: Selector matched 1 pods for map[app:redis] Jan 5 10:56:39.435: INFO: Found 0 / 1 Jan 5 10:56:40.312: INFO: Selector matched 1 pods for map[app:redis] Jan 5 10:56:40.312: INFO: Found 0 / 1 Jan 5 10:56:41.306: INFO: Selector matched 1 pods for map[app:redis] Jan 5 10:56:41.306: INFO: Found 0 / 1 Jan 5 10:56:42.289: INFO: Selector matched 1 pods for map[app:redis] Jan 5 10:56:42.289: INFO: Found 1 / 1 Jan 5 10:56:42.289: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 5 10:56:42.299: INFO: Selector matched 1 pods for map[app:redis] Jan 5 10:56:42.300: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 5 10:56:42.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-4sxcw --namespace=e2e-tests-kubectl-sdbx4 -p {"metadata":{"annotations":{"x":"y"}}}' Jan 5 10:56:42.517: INFO: stderr: "" Jan 5 10:56:42.518: INFO: stdout: "pod/redis-master-4sxcw patched\n" STEP: checking annotations Jan 5 10:56:42.562: INFO: Selector matched 1 pods for map[app:redis] Jan 5 10:56:42.562: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:56:42.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sdbx4" for this suite. 
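
The patch issued above can be reproduced by hand; the JSON payload is exactly the one from the log, while the pod and namespace names are whatever your replication controller produced. The verification command is an assumption, since the log's "checking annotations" step does not show what it runs.

kubectl patch pod redis-master-4sxcw --namespace=e2e-tests-kubectl-sdbx4 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
# Confirm the annotation was merged into the existing metadata:
kubectl get pod redis-master-4sxcw --namespace=e2e-tests-kubectl-sdbx4 \
  -o jsonpath='{.metadata.annotations.x}'   # prints: y
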
Jan 5 10:57:06.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:57:06.890: INFO: namespace: e2e-tests-kubectl-sdbx4, resource: bindings, ignored listing per whitelist Jan 5 10:57:06.936: INFO: namespace e2e-tests-kubectl-sdbx4 deletion completed in 24.355287819s • [SLOW TEST:35.483 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:57:06.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 5 10:57:07.290: INFO: Waiting up to 5m0s for pod "pod-1d69a935-2faa-11ea-910c-0242ac110004" in namespace "e2e-tests-emptydir-tmpl4" to be "success or failure" Jan 5 10:57:07.325: INFO: Pod "pod-1d69a935-2faa-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 34.848239ms Jan 5 10:57:09.508: INFO: Pod "pod-1d69a935-2faa-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217425772s Jan 5 10:57:11.526: INFO: Pod "pod-1d69a935-2faa-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.235713977s Jan 5 10:57:13.549: INFO: Pod "pod-1d69a935-2faa-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258366751s Jan 5 10:57:15.952: INFO: Pod "pod-1d69a935-2faa-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.661312457s Jan 5 10:57:17.966: INFO: Pod "pod-1d69a935-2faa-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.675900491s STEP: Saw pod success Jan 5 10:57:17.966: INFO: Pod "pod-1d69a935-2faa-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 10:57:17.971: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1d69a935-2faa-11ea-910c-0242ac110004 container test-container: STEP: delete the pod Jan 5 10:57:18.062: INFO: Waiting for pod pod-1d69a935-2faa-11ea-910c-0242ac110004 to disappear Jan 5 10:57:18.412: INFO: Pod pod-1d69a935-2faa-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:57:18.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-tmpl4" for this suite. 
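
The test name encodes the scenario: a non-root user writes into an emptyDir volume on the default medium and the resulting file must carry mode 0666. The generated pod spec is not shown in this log; a hand-written approximation (image, command, and UID are assumptions) looks like this:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo            # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                   # the "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # default medium, i.e. node-local disk rather than Memory
EOF
kubectl logs emptydir-mode-demo       # expect an -rw-rw-rw- entry in the listing
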
Jan 5 10:57:24.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:57:24.800: INFO: namespace: e2e-tests-emptydir-tmpl4, resource: bindings, ignored listing per whitelist Jan 5 10:57:24.821: INFO: namespace e2e-tests-emptydir-tmpl4 deletion completed in 6.395842825s • [SLOW TEST:17.884 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:57:24.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 5 10:57:25.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-4gscw' Jan 5 10:57:25.226: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 5 10:57:25.226: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Jan 5 10:57:25.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-4gscw' Jan 5 10:57:25.413: INFO: stderr: "" Jan 5 10:57:25.413: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:57:25.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4gscw" for this suite. 
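
As the stderr above notes, `kubectl run --generator=job/v1` was already deprecated when this suite ran against kubectl v1.13. On newer kubectl versions the same Job is created explicitly; the commands below are the usual modern equivalent, not something taken from this log.

kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
kubectl get job e2e-test-nginx-job    # the test only verifies that the Job object exists
kubectl delete job e2e-test-nginx-job
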
Jan 5 10:57:47.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:57:47.796: INFO: namespace: e2e-tests-kubectl-4gscw, resource: bindings, ignored listing per whitelist Jan 5 10:57:47.834: INFO: namespace e2e-tests-kubectl-4gscw deletion completed in 22.360193848s • [SLOW TEST:23.013 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:57:47.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 5 10:57:48.155: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35c61a9f-2faa-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-xrtxp" to be "success or failure" Jan 5 10:57:48.248: INFO: Pod "downwardapi-volume-35c61a9f-2faa-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 92.495401ms Jan 5 10:57:50.268: INFO: Pod "downwardapi-volume-35c61a9f-2faa-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112856615s Jan 5 10:57:52.299: INFO: Pod "downwardapi-volume-35c61a9f-2faa-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143495168s Jan 5 10:57:54.517: INFO: Pod "downwardapi-volume-35c61a9f-2faa-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.36178651s Jan 5 10:57:56.550: INFO: Pod "downwardapi-volume-35c61a9f-2faa-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.394685463s Jan 5 10:57:58.578: INFO: Pod "downwardapi-volume-35c61a9f-2faa-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.422202689s STEP: Saw pod success Jan 5 10:57:58.578: INFO: Pod "downwardapi-volume-35c61a9f-2faa-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 10:57:58.584: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-35c61a9f-2faa-11ea-910c-0242ac110004 container client-container: STEP: delete the pod Jan 5 10:57:58.701: INFO: Waiting for pod downwardapi-volume-35c61a9f-2faa-11ea-910c-0242ac110004 to disappear Jan 5 10:57:59.305: INFO: Pod downwardapi-volume-35c61a9f-2faa-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:57:59.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xrtxp" for this suite. Jan 5 10:58:07.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:58:07.882: INFO: namespace: e2e-tests-projected-xrtxp, resource: bindings, ignored listing per whitelist Jan 5 10:58:07.967: INFO: namespace e2e-tests-projected-xrtxp deletion completed in 8.370498549s • [SLOW TEST:20.132 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:58:07.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Jan 5 10:58:08.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-d27mh' Jan 5 10:58:08.595: INFO: stderr: "" Jan 5 10:58:08.595: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 5 10:58:08.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d27mh' Jan 5 10:58:08.716: INFO: stderr: "" Jan 5 10:58:08.716: INFO: stdout: "update-demo-nautilus-f7bcg " STEP: Replicas for name=update-demo: expected=2 actual=1 Jan 5 10:58:13.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d27mh' Jan 5 10:58:13.907: INFO: stderr: "" Jan 5 10:58:13.907: INFO: stdout: "update-demo-nautilus-8fcb6 update-demo-nautilus-f7bcg " Jan 5 10:58:13.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8fcb6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d27mh' Jan 5 10:58:14.104: INFO: stderr: "" Jan 5 10:58:14.104: INFO: stdout: "" Jan 5 10:58:14.104: INFO: update-demo-nautilus-8fcb6 is created but not running Jan 5 10:58:19.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d27mh' Jan 5 10:58:19.229: INFO: stderr: "" Jan 5 10:58:19.229: INFO: stdout: "update-demo-nautilus-8fcb6 update-demo-nautilus-f7bcg " Jan 5 10:58:19.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8fcb6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d27mh' Jan 5 10:58:19.374: INFO: stderr: "" Jan 5 10:58:19.374: INFO: stdout: "true" Jan 5 10:58:19.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8fcb6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d27mh' Jan 5 10:58:19.470: INFO: stderr: "" Jan 5 10:58:19.470: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 5 10:58:19.470: INFO: validating pod update-demo-nautilus-8fcb6 Jan 5 10:58:19.512: INFO: got data: { "image": "nautilus.jpg" } Jan 5 10:58:19.512: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 5 10:58:19.512: INFO: update-demo-nautilus-8fcb6 is verified up and running Jan 5 10:58:19.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7bcg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d27mh' Jan 5 10:58:19.657: INFO: stderr: "" Jan 5 10:58:19.658: INFO: stdout: "true" Jan 5 10:58:19.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7bcg -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d27mh' Jan 5 10:58:19.761: INFO: stderr: "" Jan 5 10:58:19.761: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 5 10:58:19.761: INFO: validating pod update-demo-nautilus-f7bcg Jan 5 10:58:19.773: INFO: got data: { "image": "nautilus.jpg" } Jan 5 10:58:19.774: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 5 10:58:19.774: INFO: update-demo-nautilus-f7bcg is verified up and running STEP: rolling-update to new replication controller Jan 5 10:58:19.779: INFO: scanned /root for discovery docs: Jan 5 10:58:19.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-d27mh' Jan 5 10:58:52.789: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 5 10:58:52.789: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 5 10:58:52.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-d27mh' Jan 5 10:58:52.921: INFO: stderr: "" Jan 5 10:58:52.921: INFO: stdout: "update-demo-kitten-6s4vj update-demo-kitten-c48q5 " Jan 5 10:58:52.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6s4vj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d27mh' Jan 5 10:58:53.030: INFO: stderr: "" Jan 5 10:58:53.030: INFO: stdout: "true" Jan 5 10:58:53.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6s4vj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d27mh' Jan 5 10:58:53.124: INFO: stderr: "" Jan 5 10:58:53.125: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 5 10:58:53.125: INFO: validating pod update-demo-kitten-6s4vj Jan 5 10:58:53.143: INFO: got data: { "image": "kitten.jpg" } Jan 5 10:58:53.143: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 5 10:58:53.143: INFO: update-demo-kitten-6s4vj is verified up and running Jan 5 10:58:53.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-c48q5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d27mh' Jan 5 10:58:53.276: INFO: stderr: "" Jan 5 10:58:53.276: INFO: stdout: "true" Jan 5 10:58:53.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-c48q5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-d27mh' Jan 5 10:58:53.429: INFO: stderr: "" Jan 5 10:58:53.429: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 5 10:58:53.429: INFO: validating pod update-demo-kitten-c48q5 Jan 5 10:58:53.450: INFO: got data: { "image": "kitten.jpg" } Jan 5 10:58:53.450: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 5 10:58:53.450: INFO: update-demo-kitten-c48q5 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:58:53.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-d27mh" for this suite. Jan 5 10:59:19.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:59:19.763: INFO: namespace: e2e-tests-kubectl-d27mh, resource: bindings, ignored listing per whitelist Jan 5 10:59:19.840: INFO: namespace e2e-tests-kubectl-d27mh deletion completed in 26.381361117s • [SLOW TEST:71.873 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:59:19.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jan 5 10:59:27.778: INFO: 7 pods remaining Jan 5 10:59:27.778: INFO: 0 pods has nil DeletionTimestamp Jan 5 10:59:27.778: INFO: Jan 5 10:59:28.861: INFO: 0 pods remaining Jan 5 10:59:28.861: INFO: 0 pods has nil DeletionTimestamp Jan 5 10:59:28.861: INFO: Jan 5 10:59:29.602: INFO: 0 pods remaining Jan 5 10:59:29.602: INFO: 0 pods has nil DeletionTimestamp Jan 5 10:59:29.602: INFO: STEP: Gathering metrics W0105 10:59:30.357037 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 5 10:59:30.357: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 10:59:30.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-46q44" for this suite. Jan 5 10:59:44.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 10:59:44.601: INFO: namespace: e2e-tests-gc-46q44, resource: bindings, ignored listing per whitelist Jan 5 10:59:44.608: INFO: namespace e2e-tests-gc-46q44 deletion completed in 14.240513643s • [SLOW TEST:24.768 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 10:59:44.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-7b5a4e1b-2faa-11ea-910c-0242ac110004 STEP: Creating configMap with name cm-test-opt-upd-7b5a4f07-2faa-11ea-910c-0242ac110004 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-7b5a4e1b-2faa-11ea-910c-0242ac110004 STEP: Updating configmap cm-test-opt-upd-7b5a4f07-2faa-11ea-910c-0242ac110004 STEP: Creating configMap with name cm-test-opt-create-7b5a4f65-2faa-11ea-910c-0242ac110004 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:01:09.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kw7r2" for this suite. Jan 5 11:01:33.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:01:33.366: INFO: namespace: e2e-tests-projected-kw7r2, resource: bindings, ignored listing per whitelist Jan 5 11:01:33.456: INFO: namespace e2e-tests-projected-kw7r2 deletion completed in 24.234284646s • [SLOW TEST:108.848 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:01:33.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 5 11:01:52.134: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 5 11:01:52.174: INFO: Pod pod-with-prestop-exec-hook still exists Jan 5 11:01:54.175: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 5 11:01:54.199: INFO: Pod pod-with-prestop-exec-hook still exists Jan 5 11:01:56.175: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 5 11:01:56.196: INFO: Pod pod-with-prestop-exec-hook still exists Jan 5 11:01:58.175: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 5 11:01:58.190: INFO: Pod pod-with-prestop-exec-hook still exists Jan 5 11:02:00.175: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 5 11:02:00.198: INFO: Pod pod-with-prestop-exec-hook still exists Jan 5 11:02:02.176: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 5 11:02:02.199: INFO: Pod pod-with-prestop-exec-hook still exists Jan 5 11:02:04.175: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 5 11:02:04.204: INFO: Pod pod-with-prestop-exec-hook still exists Jan 5 11:02:06.175: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 5 11:02:06.195: INFO: Pod pod-with-prestop-exec-hook still exists Jan 5 11:02:08.175: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 5 11:02:08.197: INFO: Pod pod-with-prestop-exec-hook still exists Jan 5 11:02:10.175: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 5 11:02:10.365: INFO: Pod pod-with-prestop-exec-hook still exists Jan 5 11:02:12.175: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 5 11:02:12.448: INFO: Pod pod-with-prestop-exec-hook still exists Jan 5 11:02:14.176: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 5 11:02:14.195: INFO: Pod pod-with-prestop-exec-hook still exists Jan 5 11:02:16.175: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 5 11:02:16.204: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:02:16.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-qvxcf" for this suite. 
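
The lifecycle-hook tests above follow a fixed pattern: a helper pod is started to receive hook traffic (the "container to handle the HTTPGet hook request"), a second pod is created with the hook under test, and deleting that pod must fire the preStop handler before the container exits. Neither spec is printed in this log, so the sketch below is purely illustrative; in particular the handler URL is a placeholder, not the address the framework actually uses.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook    # name taken from the log; the spec is guessed
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "wget -q -O- http://hook-handler:8080/echo?msg=prestop-exec-hook"]
EOF
# Deleting the pod runs the preStop command before the container is stopped;
# the handler pod's output (the log's "check prestop hook" step) confirms it fired.
kubectl delete pod pod-with-prestop-exec-hook
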
Jan 5 11:02:40.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:02:40.548: INFO: namespace: e2e-tests-container-lifecycle-hook-qvxcf, resource: bindings, ignored listing per whitelist Jan 5 11:02:40.608: INFO: namespace e2e-tests-container-lifecycle-hook-qvxcf deletion completed in 24.351345945s • [SLOW TEST:67.152 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:02:40.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-wbmg STEP: Creating a pod to test atomic-volume-subpath Jan 5 11:02:40.814: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wbmg" in namespace "e2e-tests-subpath-r26z5" to be "success or failure" Jan 5 11:02:40.884: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Pending", Reason="", readiness=false. Elapsed: 70.376706ms Jan 5 11:02:43.667: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.852925589s Jan 5 11:02:45.685: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.871615619s Jan 5 11:02:47.710: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.896125794s Jan 5 11:02:49.726: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.912692046s Jan 5 11:02:51.746: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.931809307s Jan 5 11:02:53.759: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.945036002s Jan 5 11:02:55.791: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Pending", Reason="", readiness=false. Elapsed: 14.976913212s Jan 5 11:02:57.808: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Running", Reason="", readiness=true. Elapsed: 16.993977897s Jan 5 11:02:59.827: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Running", Reason="", readiness=false. Elapsed: 19.01356151s Jan 5 11:03:01.870: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Running", Reason="", readiness=false. 
Elapsed: 21.05585045s Jan 5 11:03:03.901: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Running", Reason="", readiness=false. Elapsed: 23.087248411s Jan 5 11:03:05.921: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Running", Reason="", readiness=false. Elapsed: 25.106950756s Jan 5 11:03:07.939: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Running", Reason="", readiness=false. Elapsed: 27.125656969s Jan 5 11:03:09.958: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Running", Reason="", readiness=false. Elapsed: 29.144447172s Jan 5 11:03:11.981: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Running", Reason="", readiness=false. Elapsed: 31.167003835s Jan 5 11:03:14.004: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Running", Reason="", readiness=false. Elapsed: 33.190688517s Jan 5 11:03:16.017: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Running", Reason="", readiness=false. Elapsed: 35.203527205s Jan 5 11:03:18.029: INFO: Pod "pod-subpath-test-configmap-wbmg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.215104039s STEP: Saw pod success Jan 5 11:03:18.029: INFO: Pod "pod-subpath-test-configmap-wbmg" satisfied condition "success or failure" Jan 5 11:03:18.033: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-wbmg container test-container-subpath-configmap-wbmg: STEP: delete the pod Jan 5 11:03:18.939: INFO: Waiting for pod pod-subpath-test-configmap-wbmg to disappear Jan 5 11:03:18.961: INFO: Pod pod-subpath-test-configmap-wbmg no longer exists STEP: Deleting pod pod-subpath-test-configmap-wbmg Jan 5 11:03:18.962: INFO: Deleting pod "pod-subpath-test-configmap-wbmg" in namespace "e2e-tests-subpath-r26z5" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:03:18.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-r26z5" for this suite. 
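
The subpath test above mounts a single ConfigMap key at a subPath inside the container and verifies the content can be read back. The generated objects are not shown in the log; a minimal hand-rolled version of the same idea (names, image, and command are assumptions) is:

kubectl create configmap subpath-demo --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /mnt/key"]
    volumeMounts:
    - name: cm
      mountPath: /mnt/key
      subPath: key                    # mount just this key of the ConfigMap volume
  volumes:
  - name: cm
    configMap:
      name: subpath-demo
EOF
kubectl logs pod-subpath-demo         # expect: value
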
Jan 5 11:03:27.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:03:27.310: INFO: namespace: e2e-tests-subpath-r26z5, resource: bindings, ignored listing per whitelist Jan 5 11:03:27.357: INFO: namespace e2e-tests-subpath-r26z5 deletion completed in 8.360026303s • [SLOW TEST:46.749 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:03:27.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 5 11:06:29.597: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:29.670: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:06:31.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:31.704: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:06:33.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:33.681: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:06:35.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:35.686: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:06:37.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:37.691: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:06:39.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:39.704: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:06:41.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:41.688: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:06:43.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:43.687: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:06:45.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:45.687: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:06:47.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:47.687: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:06:49.670: INFO: 
Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:49.716: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:06:51.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:51.683: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:06:53.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:53.685: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:06:55.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:55.685: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:06:57.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:57.688: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:06:59.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:06:59.736: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:01.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:01.687: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:03.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:03.685: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:05.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:05.687: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:07.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:07.684: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:09.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:09.695: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:11.671: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:11.723: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:13.671: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:13.700: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:15.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:15.694: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:17.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:17.687: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:19.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:19.693: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:21.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:21.686: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:23.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:23.690: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:25.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:25.690: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:27.671: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:27.981: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:29.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:29.685: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:31.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:31.688: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:33.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:33.686: INFO: Pod pod-with-poststart-exec-hook still 
exists Jan 5 11:07:35.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:36.011: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:37.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:37.686: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:39.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:39.679: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:41.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:41.700: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:43.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:43.691: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:45.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:45.690: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:47.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:47.691: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:49.671: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:49.687: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:51.671: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:51.688: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:53.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:53.700: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:55.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:55.683: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:57.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:57.685: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:07:59.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:07:59.717: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:08:01.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:08:01.682: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:08:03.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:08:03.685: INFO: Pod pod-with-poststart-exec-hook still exists Jan 5 11:08:05.670: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 5 11:08:05.680: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:08:05.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-lnj44" for this suite. 
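The lifecycle-hook test above creates a pod whose container declares a postStart exec hook, verifies (via the helper pod created in BeforeEach) that the hook ran, and then deletes the pod; the long "still exists" sequence is just the deletion poll. A pod with an equivalent hook, sketched with an illustrative image and command rather than the test's own:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook-example   # illustrative
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs right after the container is created; the container is not
          # reported Running until this command completes.
          command: ["/bin/sh", "-c", "echo poststart > /tmp/poststart"]
EOF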
Jan 5 11:08:29.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:08:29.930: INFO: namespace: e2e-tests-container-lifecycle-hook-lnj44, resource: bindings, ignored listing per whitelist Jan 5 11:08:29.945: INFO: namespace e2e-tests-container-lifecycle-hook-lnj44 deletion completed in 24.257769138s • [SLOW TEST:302.587 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:08:29.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 5 11:08:30.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-djsjc' Jan 5 11:08:31.965: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 5 11:08:31.965: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Jan 5 11:08:36.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-djsjc' Jan 5 11:08:36.483: INFO: stderr: "" Jan 5 11:08:36.483: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:08:36.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-djsjc" for this suite. 
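The kubectl test above exercises the deprecated generator path verbatim: `kubectl run --generator=deployment/v1beta1` prints the deprecation warning captured on stderr and creates a deployment.extensions resource, which the AfterEach then deletes. Outside the framework the same commands (namespace flag omitted) are:

# Deprecated form driven by the test (warns on stderr, still works on this v1.13-era cluster):
kubectl run e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --generator=deployment/v1beta1

# Replacement suggested by the deprecation warning:
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine

# Cleanup mirrors the test's AfterEach:
kubectl delete deployment e2e-test-nginx-deployment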
Jan 5 11:08:42.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:08:42.705: INFO: namespace: e2e-tests-kubectl-djsjc, resource: bindings, ignored listing per whitelist Jan 5 11:08:42.765: INFO: namespace e2e-tests-kubectl-djsjc deletion completed in 6.242784561s • [SLOW TEST:12.820 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:08:42.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-t5fwx Jan 5 11:08:51.142: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-t5fwx STEP: checking the pod's current state and verifying that restartCount is present Jan 5 11:08:51.155: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:12:51.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-t5fwx" for this suite. 
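The probe test above starts a pod named liveness-exec and then simply watches it for four minutes, asserting that the restart count stays at 0 because the `cat /tmp/health` probe keeps succeeding. A sketch of a pod with that kind of probe (image, command, and timings are illustrative, not the test's):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-example    # illustrative
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    # The probed file is created up front and never removed, so the probe keeps
    # passing and the kubelet never restarts the container.
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
EOF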
Jan 5 11:12:57.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:12:57.597: INFO: namespace: e2e-tests-container-probe-t5fwx, resource: bindings, ignored listing per whitelist Jan 5 11:12:57.756: INFO: namespace e2e-tests-container-probe-t5fwx deletion completed in 6.304756048s • [SLOW TEST:254.991 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:12:57.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jan 5 11:12:58.025: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:13:14.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-cjccn" for this suite. 
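The init-container test above builds a restartPolicy=Never pod whose init container fails, and expects the pod to end up Failed without the app container ever starting. A hedged reproduction of that shape, with illustrative names and commands:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-fails-restart-never-example   # illustrative
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fail
    image: busybox:1.29
    command: ["/bin/false"]          # init container exits non-zero
  containers:
  - name: app
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo should never run"]
EOF

# Expected end state: pod Phase=Failed; the app container is never started.
kubectl get pod init-fails-restart-never-example -o jsonpath='{.status.phase}{"\n"}'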
Jan 5 11:13:20.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:13:20.706: INFO: namespace: e2e-tests-init-container-cjccn, resource: bindings, ignored listing per whitelist Jan 5 11:13:20.765: INFO: namespace e2e-tests-init-container-cjccn deletion completed in 6.492946419s • [SLOW TEST:23.009 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:13:20.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 5 11:13:29.547: INFO: Successfully updated pod "pod-update-activedeadlineseconds-61c2cf98-2fac-11ea-910c-0242ac110004" Jan 5 11:13:29.547: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-61c2cf98-2fac-11ea-910c-0242ac110004" in namespace "e2e-tests-pods-9jzdp" to be "terminated due to deadline exceeded" Jan 5 11:13:29.677: INFO: Pod "pod-update-activedeadlineseconds-61c2cf98-2fac-11ea-910c-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 128.925659ms Jan 5 11:13:31.700: INFO: Pod "pod-update-activedeadlineseconds-61c2cf98-2fac-11ea-910c-0242ac110004": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.152452984s Jan 5 11:13:31.700: INFO: Pod "pod-update-activedeadlineseconds-61c2cf98-2fac-11ea-910c-0242ac110004" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:13:31.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-9jzdp" for this suite. 
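The Pods test above updates activeDeadlineSeconds on an already-running pod and then watches it terminate with Phase=Failed, Reason=DeadlineExceeded about two seconds later. activeDeadlineSeconds is one of the few pod-spec fields that can be changed on a live pod (it may be set or lowered, not raised); with an illustrative pod name the update looks like:

# Give a running pod a short remaining lifetime; the kubelet then kills it
# and the pod ends up Phase=Failed, Reason=DeadlineExceeded.
kubectl patch pod pod-update-activedeadlineseconds-example \
  --type=merge -p '{"spec":{"activeDeadlineSeconds": 5}}'

kubectl get pod pod-update-activedeadlineseconds-example \
  -o jsonpath='{.status.phase} {.status.reason}{"\n"}'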
Jan 5 11:13:38.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:13:38.972: INFO: namespace: e2e-tests-pods-9jzdp, resource: bindings, ignored listing per whitelist Jan 5 11:13:39.073: INFO: namespace e2e-tests-pods-9jzdp deletion completed in 7.36517279s • [SLOW TEST:18.308 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:13:39.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-6cb963ac-2fac-11ea-910c-0242ac110004 STEP: Creating a pod to test consume secrets Jan 5 11:13:39.479: INFO: Waiting up to 5m0s for pod "pod-secrets-6cbab557-2fac-11ea-910c-0242ac110004" in namespace "e2e-tests-secrets-dwd4m" to be "success or failure" Jan 5 11:13:39.525: INFO: Pod "pod-secrets-6cbab557-2fac-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 45.761585ms Jan 5 11:13:41.964: INFO: Pod "pod-secrets-6cbab557-2fac-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.484921044s Jan 5 11:13:44.015: INFO: Pod "pod-secrets-6cbab557-2fac-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.535732646s Jan 5 11:13:46.210: INFO: Pod "pod-secrets-6cbab557-2fac-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.730657583s Jan 5 11:13:48.223: INFO: Pod "pod-secrets-6cbab557-2fac-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.74344867s Jan 5 11:13:50.243: INFO: Pod "pod-secrets-6cbab557-2fac-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.763612948s STEP: Saw pod success Jan 5 11:13:50.243: INFO: Pod "pod-secrets-6cbab557-2fac-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 11:13:50.279: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6cbab557-2fac-11ea-910c-0242ac110004 container secret-volume-test: STEP: delete the pod Jan 5 11:13:51.334: INFO: Waiting for pod pod-secrets-6cbab557-2fac-11ea-910c-0242ac110004 to disappear Jan 5 11:13:51.351: INFO: Pod pod-secrets-6cbab557-2fac-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:13:51.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-dwd4m" for this suite. 
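The Secrets test above mounts a secret volume with an explicit defaultMode and has the test container check the projected file before exiting. An equivalent manifest with illustrative names and mode 0400:

kubectl create secret generic example-secret --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-defaultmode-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret
      defaultMode: 0400          # permission bits applied to the projected files (owner read-only)
EOF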
Jan 5 11:13:57.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:13:57.660: INFO: namespace: e2e-tests-secrets-dwd4m, resource: bindings, ignored listing per whitelist Jan 5 11:13:57.668: INFO: namespace e2e-tests-secrets-dwd4m deletion completed in 6.292911212s • [SLOW TEST:18.594 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:13:57.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:14:07.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-jl62h" for this suite. 
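The Kubelet test above schedules a busybox container with a read-only root filesystem and verifies that writes to it fail. The relevant knob is securityContext.readOnlyRootFilesystem; a minimal sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    # The write is expected to fail: the container's root filesystem is mounted read-only.
    command: ["/bin/sh", "-c", "echo test > /file || echo 'write refused, as expected'"]
    securityContext:
      readOnlyRootFilesystem: true
EOF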
Jan 5 11:14:54.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:14:54.496: INFO: namespace: e2e-tests-kubelet-test-jl62h, resource: bindings, ignored listing per whitelist Jan 5 11:14:54.612: INFO: namespace e2e-tests-kubelet-test-jl62h deletion completed in 46.585588696s • [SLOW TEST:56.943 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:14:54.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 5 11:15:05.651: INFO: Successfully updated pod "labelsupdate99c79dfa-2fac-11ea-910c-0242ac110004" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:15:07.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tw7hg" for this suite. 
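The Downward API volume test above projects the pod's labels into a file, mutates a label on the live pod (the "Successfully updated pod" line), and waits for the kubelet to refresh the projected file. A sketch of the same projection plus the label change, with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example     # illustrative
  labels:
    stage: before
spec:
  containers:
  - name: client
    image: busybox:1.29
    # Keep printing the projected labels so the refresh is visible in the logs.
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF

# Mutate a label; the downwardAPI volume is refreshed by the kubelet shortly afterwards.
kubectl label pod labelsupdate-example stage=after --overwrite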
Jan 5 11:15:31.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:15:31.977: INFO: namespace: e2e-tests-downward-api-tw7hg, resource: bindings, ignored listing per whitelist Jan 5 11:15:32.019: INFO: namespace e2e-tests-downward-api-tw7hg deletion completed in 24.272568561s • [SLOW TEST:37.406 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:15:32.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-hbjc STEP: Creating a pod to test atomic-volume-subpath Jan 5 11:15:32.311: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hbjc" in namespace "e2e-tests-subpath-fdxfp" to be "success or failure" Jan 5 11:15:32.363: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Pending", Reason="", readiness=false. Elapsed: 52.002247ms Jan 5 11:15:34.434: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123372143s Jan 5 11:15:36.459: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14858837s Jan 5 11:15:38.593: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.282449193s Jan 5 11:15:40.641: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.330424055s Jan 5 11:15:42.662: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.350737756s Jan 5 11:15:44.674: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.36354189s Jan 5 11:15:46.686: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.374993179s Jan 5 11:15:48.725: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Running", Reason="", readiness=false. Elapsed: 16.414011649s Jan 5 11:15:50.744: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Running", Reason="", readiness=false. Elapsed: 18.432970142s Jan 5 11:15:52.760: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Running", Reason="", readiness=false. Elapsed: 20.448938044s Jan 5 11:15:54.780: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Running", Reason="", readiness=false. Elapsed: 22.469250659s Jan 5 11:15:56.800: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.489453429s Jan 5 11:15:58.816: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Running", Reason="", readiness=false. Elapsed: 26.505274864s Jan 5 11:16:00.835: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Running", Reason="", readiness=false. Elapsed: 28.524074346s Jan 5 11:16:02.861: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Running", Reason="", readiness=false. Elapsed: 30.549772393s Jan 5 11:16:04.886: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Running", Reason="", readiness=false. Elapsed: 32.574805237s Jan 5 11:16:06.912: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Running", Reason="", readiness=false. Elapsed: 34.600963488s Jan 5 11:16:09.251: INFO: Pod "pod-subpath-test-secret-hbjc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.940231092s STEP: Saw pod success Jan 5 11:16:09.251: INFO: Pod "pod-subpath-test-secret-hbjc" satisfied condition "success or failure" Jan 5 11:16:09.450: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-hbjc container test-container-subpath-secret-hbjc: STEP: delete the pod Jan 5 11:16:09.633: INFO: Waiting for pod pod-subpath-test-secret-hbjc to disappear Jan 5 11:16:09.696: INFO: Pod pod-subpath-test-secret-hbjc no longer exists STEP: Deleting pod pod-subpath-test-secret-hbjc Jan 5 11:16:09.697: INFO: Deleting pod "pod-subpath-test-secret-hbjc" in namespace "e2e-tests-subpath-fdxfp" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:16:09.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-fdxfp" for this suite. Jan 5 11:16:17.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:16:17.844: INFO: namespace: e2e-tests-subpath-fdxfp, resource: bindings, ignored listing per whitelist Jan 5 11:16:17.912: INFO: namespace e2e-tests-subpath-fdxfp deletion completed in 8.203939675s • [SLOW TEST:45.892 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:16:17.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-j7wgw STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace e2e-tests-prestop-j7wgw STEP: Deleting pre-stop pod Jan 5 11:16:41.550: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:16:41.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-j7wgw" for this suite. Jan 5 11:17:23.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:17:23.935: INFO: namespace: e2e-tests-prestop-j7wgw, resource: bindings, ignored listing per whitelist Jan 5 11:17:23.942: INFO: namespace e2e-tests-prestop-j7wgw deletion completed in 42.29133342s • [SLOW TEST:66.031 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:17:23.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jan 5 11:17:24.185: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 5 11:17:24.301: INFO: Waiting for terminating namespaces to be deleted... 
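The PreStop test summarized above deletes a pod carrying a preStop hook and confirms, via the server pod's report ("prestop": 1), that the hook fired during termination. Declaring such a hook is the mirror image of postStart; an illustrative sketch, not the test's own manifest:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook-example   # illustrative
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # Runs when the pod is deleted, before SIGTERM reaches the main process.
          command: ["/bin/sh", "-c", "echo prestop > /tmp/prestop"]
EOF

# Deleting the pod triggers the hook during graceful termination.
kubectl delete pod pod-with-prestop-exec-hook-example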
Jan 5 11:17:24.306: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Jan 5 11:17:24.327: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Jan 5 11:17:24.327: INFO: Container weave ready: true, restart count 0 Jan 5 11:17:24.327: INFO: Container weave-npc ready: true, restart count 0 Jan 5 11:17:24.327: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Jan 5 11:17:24.327: INFO: Container coredns ready: true, restart count 0 Jan 5 11:17:24.327: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 5 11:17:24.327: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 5 11:17:24.327: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 5 11:17:24.327: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Jan 5 11:17:24.327: INFO: Container coredns ready: true, restart count 0 Jan 5 11:17:24.327: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Jan 5 11:17:24.327: INFO: Container kube-proxy ready: true, restart count 0 Jan 5 11:17:24.327: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-server-hu5at5svl7ps Jan 5 11:17:24.386: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Jan 5 11:17:24.386: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Jan 5 11:17:24.386: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Jan 5 11:17:24.386: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps Jan 5 11:17:24.386: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps Jan 5 11:17:24.386: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Jan 5 11:17:24.386: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Jan 5 11:17:24.386: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
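The "another pod" in the step above is one whose CPU request can no longer fit on the node once the filler pods have claimed most of the allocatable CPU, so the scheduler emits the FailedScheduling event shown next. A pod that over-requests CPU (the request value is illustrative) looks like:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod-example   # illustrative
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1000"              # far more CPU than the node offers, so scheduling fails
EOF

# Expected event: FailedScheduling, "0/1 nodes are available: 1 Insufficient cpu."
kubectl describe pod additional-pod-example | grep -A2 Events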
STEP: Considering event: Type = [Normal], Name = [filler-pod-f2dfed60-2fac-11ea-910c-0242ac110004.15e6f97f448ab033], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-jf8ff/filler-pod-f2dfed60-2fac-11ea-910c-0242ac110004 to hunter-server-hu5at5svl7ps] STEP: Considering event: Type = [Normal], Name = [filler-pod-f2dfed60-2fac-11ea-910c-0242ac110004.15e6f9807af95f2f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f2dfed60-2fac-11ea-910c-0242ac110004.15e6f980f9bafcf4], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-f2dfed60-2fac-11ea-910c-0242ac110004.15e6f981257f6a12], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15e6f9819b837b2e], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.] STEP: removing the label node off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:17:35.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-jf8ff" for this suite. Jan 5 11:17:44.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:17:44.220: INFO: namespace: e2e-tests-sched-pred-jf8ff, resource: bindings, ignored listing per whitelist Jan 5 11:17:44.338: INFO: namespace e2e-tests-sched-pred-jf8ff deletion completed in 8.573652274s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:20.395 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:17:44.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 5 11:17:44.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 
--image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9m2pl' Jan 5 11:17:44.901: INFO: stderr: "" Jan 5 11:17:44.901: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jan 5 11:17:54.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9m2pl -o json' Jan 5 11:17:55.072: INFO: stderr: "" Jan 5 11:17:55.072: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-01-05T11:17:44Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-9m2pl\",\n \"resourceVersion\": \"17244529\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-9m2pl/pods/e2e-test-nginx-pod\",\n \"uid\": \"ff15f699-2fac-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-vrtmz\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-vrtmz\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-vrtmz\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-05T11:17:45Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-05T11:17:52Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-05T11:17:52Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-05T11:17:44Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://1e42b349b2af9281ec6cf71c899c1d41ea668b359dfa1eb9a37933456365e31b\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-05T11:17:51Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-05T11:17:45Z\"\n }\n}\n" STEP: replace the image in the pod Jan 5 
11:17:55.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-9m2pl' Jan 5 11:17:55.434: INFO: stderr: "" Jan 5 11:17:55.434: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Jan 5 11:17:55.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9m2pl' Jan 5 11:18:03.173: INFO: stderr: "" Jan 5 11:18:03.174: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:18:03.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9m2pl" for this suite. Jan 5 11:18:09.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:18:09.450: INFO: namespace: e2e-tests-kubectl-9m2pl, resource: bindings, ignored listing per whitelist Jan 5 11:18:09.484: INFO: namespace e2e-tests-kubectl-9m2pl deletion completed in 6.228228161s • [SLOW TEST:25.146 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:18:09.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-sbsxk STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 5 11:18:10.382: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 5 11:18:42.712: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-sbsxk PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 5 11:18:42.712: INFO: >>> kubeConfig: /root/.kube/config I0105 11:18:42.807294 8 log.go:172] (0xc0000fefd0) (0xc00114bea0) Create stream I0105 11:18:42.807344 8 log.go:172] (0xc0000fefd0) (0xc00114bea0) Stream added, broadcasting: 1 I0105 11:18:42.812300 8 log.go:172] (0xc0000fefd0) Reply frame received for 1 I0105 
11:18:42.812421 8 log.go:172] (0xc0000fefd0) (0xc0015f45a0) Create stream I0105 11:18:42.812436 8 log.go:172] (0xc0000fefd0) (0xc0015f45a0) Stream added, broadcasting: 3 I0105 11:18:42.813591 8 log.go:172] (0xc0000fefd0) Reply frame received for 3 I0105 11:18:42.813617 8 log.go:172] (0xc0000fefd0) (0xc00169b540) Create stream I0105 11:18:42.813625 8 log.go:172] (0xc0000fefd0) (0xc00169b540) Stream added, broadcasting: 5 I0105 11:18:42.814539 8 log.go:172] (0xc0000fefd0) Reply frame received for 5 I0105 11:18:43.017992 8 log.go:172] (0xc0000fefd0) Data frame received for 3 I0105 11:18:43.018075 8 log.go:172] (0xc0015f45a0) (3) Data frame handling I0105 11:18:43.018123 8 log.go:172] (0xc0015f45a0) (3) Data frame sent I0105 11:18:43.198495 8 log.go:172] (0xc0000fefd0) Data frame received for 1 I0105 11:18:43.198628 8 log.go:172] (0xc0000fefd0) (0xc0015f45a0) Stream removed, broadcasting: 3 I0105 11:18:43.198716 8 log.go:172] (0xc00114bea0) (1) Data frame handling I0105 11:18:43.198762 8 log.go:172] (0xc00114bea0) (1) Data frame sent I0105 11:18:43.198774 8 log.go:172] (0xc0000fefd0) (0xc00169b540) Stream removed, broadcasting: 5 I0105 11:18:43.198814 8 log.go:172] (0xc0000fefd0) (0xc00114bea0) Stream removed, broadcasting: 1 I0105 11:18:43.198840 8 log.go:172] (0xc0000fefd0) Go away received I0105 11:18:43.199424 8 log.go:172] (0xc0000fefd0) (0xc00114bea0) Stream removed, broadcasting: 1 I0105 11:18:43.199449 8 log.go:172] (0xc0000fefd0) (0xc0015f45a0) Stream removed, broadcasting: 3 I0105 11:18:43.199463 8 log.go:172] (0xc0000fefd0) (0xc00169b540) Stream removed, broadcasting: 5 Jan 5 11:18:43.199: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:18:43.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-sbsxk" for this suite. 
Jan 5 11:19:07.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:19:07.548: INFO: namespace: e2e-tests-pod-network-test-sbsxk, resource: bindings, ignored listing per whitelist Jan 5 11:19:07.548: INFO: namespace e2e-tests-pod-network-test-sbsxk deletion completed in 24.3325362s • [SLOW TEST:58.063 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:19:07.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-qks7h Jan 5 11:19:15.773: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-qks7h STEP: checking the pod's current state and verifying that restartCount is present Jan 5 11:19:15.778: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:23:17.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-qks7h" for this suite. 
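The probe test above is the HTTP variant of the earlier exec-probe case: pod liveness-http is probed over HTTP for four minutes and must keep restartCount at 0. The probe declaration, sketched with an illustrative image and path (the e2e test probes /healthz on its own serving image):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-example    # illustrative
spec:
  containers:
  - name: liveness
    image: nginx:1.14-alpine     # anything that answers HTTP 200 on the probed path
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /                  # the e2e test uses /healthz on its own test image
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
EOF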
Jan 5 11:23:23.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:23:24.080: INFO: namespace: e2e-tests-container-probe-qks7h, resource: bindings, ignored listing per whitelist Jan 5 11:23:24.183: INFO: namespace e2e-tests-container-probe-qks7h deletion completed in 6.30598262s • [SLOW TEST:256.635 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:23:24.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jan 5 11:23:24.318: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 5 11:23:24.351: INFO: Waiting for terminating namespaces to be deleted... Jan 5 11:23:24.425: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Jan 5 11:23:24.481: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 5 11:23:24.481: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Jan 5 11:23:24.481: INFO: Container weave ready: true, restart count 0 Jan 5 11:23:24.481: INFO: Container weave-npc ready: true, restart count 0 Jan 5 11:23:24.481: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Jan 5 11:23:24.481: INFO: Container coredns ready: true, restart count 0 Jan 5 11:23:24.481: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 5 11:23:24.481: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 5 11:23:24.481: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 5 11:23:24.481: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Jan 5 11:23:24.481: INFO: Container coredns ready: true, restart count 0 Jan 5 11:23:24.481: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Jan 5 11:23:24.481: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
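As the labeling steps that follow show, the NodeSelector test then puts a unique label on the chosen node and relaunches the pod with a matching nodeSelector. Outside the framework the same flow is (label key and value are illustrative; the node name is the one under test):

# Label the node (the test generates a random kubernetes.io/e2e-* key instead).
kubectl label node hunter-server-hu5at5svl7ps example.io/e2e-test=42

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-match-example   # illustrative
spec:
  nodeSelector:
    example.io/e2e-test: "42"        # pod only schedules onto nodes carrying this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

# Cleanup mirrors the test: remove the label again.
kubectl label node hunter-server-hu5at5svl7ps example.io/e2e-test-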
STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-cf94429e-2fad-11ea-910c-0242ac110004 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-cf94429e-2fad-11ea-910c-0242ac110004 off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label kubernetes.io/e2e-cf94429e-2fad-11ea-910c-0242ac110004 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:23:43.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-s4vfc" for this suite. Jan 5 11:23:57.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:23:57.549: INFO: namespace: e2e-tests-sched-pred-s4vfc, resource: bindings, ignored listing per whitelist Jan 5 11:23:57.592: INFO: namespace e2e-tests-sched-pred-s4vfc deletion completed in 14.472819707s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:33.409 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:23:57.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 5 11:23:57.744: INFO: Waiting up to 5m0s for pod "pod-dd52ece9-2fad-11ea-910c-0242ac110004" in namespace "e2e-tests-emptydir-8hw9k" to be "success or failure" Jan 5 11:23:57.835: INFO: Pod "pod-dd52ece9-2fad-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 91.571436ms Jan 5 11:23:59.852: INFO: Pod "pod-dd52ece9-2fad-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107896645s Jan 5 11:24:01.868: INFO: Pod "pod-dd52ece9-2fad-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124090048s Jan 5 11:24:04.660: INFO: Pod "pod-dd52ece9-2fad-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.915931683s Jan 5 11:24:06.743: INFO: Pod "pod-dd52ece9-2fad-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.999260999s STEP: Saw pod success Jan 5 11:24:06.743: INFO: Pod "pod-dd52ece9-2fad-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 11:24:06.761: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-dd52ece9-2fad-11ea-910c-0242ac110004 container test-container: STEP: delete the pod Jan 5 11:24:06.827: INFO: Waiting for pod pod-dd52ece9-2fad-11ea-910c-0242ac110004 to disappear Jan 5 11:24:06.883: INFO: Pod pod-dd52ece9-2fad-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:24:06.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8hw9k" for this suite. Jan 5 11:24:13.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:24:13.413: INFO: namespace: e2e-tests-emptydir-8hw9k, resource: bindings, ignored listing per whitelist Jan 5 11:24:13.798: INFO: namespace e2e-tests-emptydir-8hw9k deletion completed in 6.891526087s • [SLOW TEST:16.205 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:24:13.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-trx2g STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 5 11:24:14.154: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 5 11:24:54.536: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-trx2g PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 5 11:24:54.536: INFO: >>> kubeConfig: /root/.kube/config I0105 11:24:54.718874 8 log.go:172] (0xc0000ff1e0) (0xc000eda460) Create stream I0105 11:24:54.718985 8 log.go:172] (0xc0000ff1e0) (0xc000eda460) Stream added, broadcasting: 1 I0105 11:24:54.729520 8 log.go:172] (0xc0000ff1e0) Reply frame received for 1 I0105 11:24:54.729656 8 log.go:172] (0xc0000ff1e0) (0xc000d62e60) Create stream I0105 11:24:54.729711 8 log.go:172] (0xc0000ff1e0) (0xc000d62e60) Stream added, broadcasting: 3 I0105 11:24:54.731669 8 log.go:172] (0xc0000ff1e0) Reply frame received for 3 I0105 11:24:54.731713 8 log.go:172] (0xc0000ff1e0) (0xc000eda500) Create stream I0105 11:24:54.731730 8 log.go:172] 
(0xc0000ff1e0) (0xc000eda500) Stream added, broadcasting: 5 I0105 11:24:54.733743 8 log.go:172] (0xc0000ff1e0) Reply frame received for 5 I0105 11:24:56.039784 8 log.go:172] (0xc0000ff1e0) Data frame received for 3 I0105 11:24:56.039893 8 log.go:172] (0xc000d62e60) (3) Data frame handling I0105 11:24:56.039920 8 log.go:172] (0xc000d62e60) (3) Data frame sent I0105 11:24:56.205443 8 log.go:172] (0xc0000ff1e0) (0xc000d62e60) Stream removed, broadcasting: 3 I0105 11:24:56.205643 8 log.go:172] (0xc0000ff1e0) Data frame received for 1 I0105 11:24:56.205710 8 log.go:172] (0xc000eda460) (1) Data frame handling I0105 11:24:56.205775 8 log.go:172] (0xc000eda460) (1) Data frame sent I0105 11:24:56.205808 8 log.go:172] (0xc0000ff1e0) (0xc000eda500) Stream removed, broadcasting: 5 I0105 11:24:56.206351 8 log.go:172] (0xc0000ff1e0) (0xc000eda460) Stream removed, broadcasting: 1 I0105 11:24:56.206511 8 log.go:172] (0xc0000ff1e0) Go away received I0105 11:24:56.207274 8 log.go:172] (0xc0000ff1e0) (0xc000eda460) Stream removed, broadcasting: 1 I0105 11:24:56.207307 8 log.go:172] (0xc0000ff1e0) (0xc000d62e60) Stream removed, broadcasting: 3 I0105 11:24:56.207329 8 log.go:172] (0xc0000ff1e0) (0xc000eda500) Stream removed, broadcasting: 5 Jan 5 11:24:56.207: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:24:56.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-trx2g" for this suite. Jan 5 11:25:20.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:25:20.357: INFO: namespace: e2e-tests-pod-network-test-trx2g, resource: bindings, ignored listing per whitelist Jan 5 11:25:20.700: INFO: namespace e2e-tests-pod-network-test-trx2g deletion completed in 24.461191368s • [SLOW TEST:66.902 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:25:20.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 5 11:25:21.002: INFO: Waiting up to 5m0s for pod "pod-0eedde5d-2fae-11ea-910c-0242ac110004" in namespace "e2e-tests-emptydir-lq97n" to be "success or failure" Jan 5 11:25:21.015: INFO: Pod "pod-0eedde5d-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.429141ms Jan 5 11:25:23.125: INFO: Pod "pod-0eedde5d-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123577048s Jan 5 11:25:25.141: INFO: Pod "pod-0eedde5d-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139185751s Jan 5 11:25:27.551: INFO: Pod "pod-0eedde5d-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.549653907s Jan 5 11:25:29.577: INFO: Pod "pod-0eedde5d-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.57560832s Jan 5 11:25:31.593: INFO: Pod "pod-0eedde5d-2fae-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.591268943s STEP: Saw pod success Jan 5 11:25:31.593: INFO: Pod "pod-0eedde5d-2fae-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 11:25:31.599: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0eedde5d-2fae-11ea-910c-0242ac110004 container test-container: STEP: delete the pod Jan 5 11:25:31.667: INFO: Waiting for pod pod-0eedde5d-2fae-11ea-910c-0242ac110004 to disappear Jan 5 11:25:31.748: INFO: Pod pod-0eedde5d-2fae-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:25:31.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lq97n" for this suite. Jan 5 11:25:37.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:25:37.850: INFO: namespace: e2e-tests-emptydir-lq97n, resource: bindings, ignored listing per whitelist Jan 5 11:25:38.023: INFO: namespace e2e-tests-emptydir-lq97n deletion completed in 6.260085026s • [SLOW TEST:17.322 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:25:38.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 5 11:25:38.484: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-wckms,SelfLink:/api/v1/namespaces/e2e-tests-watch-wckms/configmaps/e2e-watch-test-resource-version,UID:194063ed-2fae-11ea-a994-fa163e34d433,ResourceVersion:17245322,Generation:0,CreationTimestamp:2020-01-05 11:25:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 5 11:25:38.485: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-wckms,SelfLink:/api/v1/namespaces/e2e-tests-watch-wckms/configmaps/e2e-watch-test-resource-version,UID:194063ed-2fae-11ea-a994-fa163e34d433,ResourceVersion:17245323,Generation:0,CreationTimestamp:2020-01-05 11:25:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:25:38.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-wckms" for this suite. Jan 5 11:25:44.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:25:44.677: INFO: namespace: e2e-tests-watch-wckms, resource: bindings, ignored listing per whitelist Jan 5 11:25:44.757: INFO: namespace e2e-tests-watch-wckms deletion completed in 6.256472921s • [SLOW TEST:6.733 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:25:44.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-1d3d760d-2fae-11ea-910c-0242ac110004 STEP: Creating a pod to test consume secrets Jan 5 11:25:44.981: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1d3ec53a-2fae-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-2hh5x" to be "success or failure" Jan 5 11:25:45.002: INFO: Pod "pod-projected-secrets-1d3ec53a-2fae-11ea-910c-0242ac110004": 
Phase="Pending", Reason="", readiness=false. Elapsed: 20.251495ms Jan 5 11:25:47.028: INFO: Pod "pod-projected-secrets-1d3ec53a-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046360498s Jan 5 11:25:49.051: INFO: Pod "pod-projected-secrets-1d3ec53a-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069222469s Jan 5 11:25:51.300: INFO: Pod "pod-projected-secrets-1d3ec53a-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.318797823s Jan 5 11:25:53.327: INFO: Pod "pod-projected-secrets-1d3ec53a-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.345706935s Jan 5 11:25:55.410: INFO: Pod "pod-projected-secrets-1d3ec53a-2fae-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.428716675s STEP: Saw pod success Jan 5 11:25:55.410: INFO: Pod "pod-projected-secrets-1d3ec53a-2fae-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 11:25:55.418: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-1d3ec53a-2fae-11ea-910c-0242ac110004 container projected-secret-volume-test: STEP: delete the pod Jan 5 11:25:56.564: INFO: Waiting for pod pod-projected-secrets-1d3ec53a-2fae-11ea-910c-0242ac110004 to disappear Jan 5 11:25:56.721: INFO: Pod pod-projected-secrets-1d3ec53a-2fae-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:25:56.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2hh5x" for this suite. Jan 5 11:26:02.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:26:02.988: INFO: namespace: e2e-tests-projected-2hh5x, resource: bindings, ignored listing per whitelist Jan 5 11:26:02.998: INFO: namespace e2e-tests-projected-2hh5x deletion completed in 6.255619295s • [SLOW TEST:18.240 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:26:02.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-28145e13-2fae-11ea-910c-0242ac110004 STEP: Creating a pod to test consume secrets Jan 5 11:26:03.181: INFO: Waiting up to 5m0s for pod 
"pod-projected-secrets-2815aee5-2fae-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-w22m2" to be "success or failure" Jan 5 11:26:03.186: INFO: Pod "pod-projected-secrets-2815aee5-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.121282ms Jan 5 11:26:05.325: INFO: Pod "pod-projected-secrets-2815aee5-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144167949s Jan 5 11:26:07.345: INFO: Pod "pod-projected-secrets-2815aee5-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163772335s Jan 5 11:26:09.551: INFO: Pod "pod-projected-secrets-2815aee5-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.369965954s Jan 5 11:26:11.578: INFO: Pod "pod-projected-secrets-2815aee5-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.396899932s Jan 5 11:26:13.614: INFO: Pod "pod-projected-secrets-2815aee5-2fae-11ea-910c-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 10.433209065s Jan 5 11:26:16.236: INFO: Pod "pod-projected-secrets-2815aee5-2fae-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.054631396s STEP: Saw pod success Jan 5 11:26:16.236: INFO: Pod "pod-projected-secrets-2815aee5-2fae-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 11:26:16.245: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2815aee5-2fae-11ea-910c-0242ac110004 container projected-secret-volume-test: STEP: delete the pod Jan 5 11:26:16.920: INFO: Waiting for pod pod-projected-secrets-2815aee5-2fae-11ea-910c-0242ac110004 to disappear Jan 5 11:26:17.104: INFO: Pod pod-projected-secrets-2815aee5-2fae-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:26:17.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-w22m2" for this suite. 
Jan 5 11:26:23.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:26:23.231: INFO: namespace: e2e-tests-projected-w22m2, resource: bindings, ignored listing per whitelist Jan 5 11:26:23.379: INFO: namespace e2e-tests-projected-w22m2 deletion completed in 6.265273126s • [SLOW TEST:20.381 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:26:23.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 5 11:26:23.657: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3444b93f-2fae-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-vnx6p" to be "success or failure" Jan 5 11:26:23.677: INFO: Pod "downwardapi-volume-3444b93f-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 19.377487ms Jan 5 11:26:25.691: INFO: Pod "downwardapi-volume-3444b93f-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033609764s Jan 5 11:26:27.708: INFO: Pod "downwardapi-volume-3444b93f-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051265438s Jan 5 11:26:29.723: INFO: Pod "downwardapi-volume-3444b93f-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065504778s Jan 5 11:26:31.741: INFO: Pod "downwardapi-volume-3444b93f-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084071245s Jan 5 11:26:33.945: INFO: Pod "downwardapi-volume-3444b93f-2fae-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.288225135s STEP: Saw pod success Jan 5 11:26:33.946: INFO: Pod "downwardapi-volume-3444b93f-2fae-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 11:26:33.956: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3444b93f-2fae-11ea-910c-0242ac110004 container client-container: STEP: delete the pod Jan 5 11:26:34.132: INFO: Waiting for pod downwardapi-volume-3444b93f-2fae-11ea-910c-0242ac110004 to disappear Jan 5 11:26:34.140: INFO: Pod downwardapi-volume-3444b93f-2fae-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:26:34.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vnx6p" for this suite. Jan 5 11:26:40.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:26:40.309: INFO: namespace: e2e-tests-projected-vnx6p, resource: bindings, ignored listing per whitelist Jan 5 11:26:40.375: INFO: namespace e2e-tests-projected-vnx6p deletion completed in 6.226752508s • [SLOW TEST:16.995 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:26:40.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 5 11:26:50.984: INFO: Waiting up to 5m0s for pod "client-envvars-449338a1-2fae-11ea-910c-0242ac110004" in namespace "e2e-tests-pods-vk7sd" to be "success or failure" Jan 5 11:26:51.100: INFO: Pod "client-envvars-449338a1-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 116.519686ms Jan 5 11:26:53.225: INFO: Pod "client-envvars-449338a1-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240841901s Jan 5 11:26:55.241: INFO: Pod "client-envvars-449338a1-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257335996s Jan 5 11:26:57.267: INFO: Pod "client-envvars-449338a1-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.283553791s Jan 5 11:26:59.283: INFO: Pod "client-envvars-449338a1-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.299351204s Jan 5 11:27:01.309: INFO: Pod "client-envvars-449338a1-2fae-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.325190034s STEP: Saw pod success Jan 5 11:27:01.309: INFO: Pod "client-envvars-449338a1-2fae-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 11:27:01.314: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-449338a1-2fae-11ea-910c-0242ac110004 container env3cont: STEP: delete the pod Jan 5 11:27:02.079: INFO: Waiting for pod client-envvars-449338a1-2fae-11ea-910c-0242ac110004 to disappear Jan 5 11:27:02.534: INFO: Pod client-envvars-449338a1-2fae-11ea-910c-0242ac110004 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:27:02.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vk7sd" for this suite. Jan 5 11:27:44.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:27:44.806: INFO: namespace: e2e-tests-pods-vk7sd, resource: bindings, ignored listing per whitelist Jan 5 11:27:44.852: INFO: namespace e2e-tests-pods-vk7sd deletion completed in 42.289557493s • [SLOW TEST:64.477 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:27:44.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 5 11:27:45.082: INFO: Waiting up to 5m0s for pod "pod-64d2f18c-2fae-11ea-910c-0242ac110004" in namespace "e2e-tests-emptydir-f8dtm" to be "success or failure" Jan 5 11:27:45.097: INFO: Pod "pod-64d2f18c-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.562341ms Jan 5 11:27:47.122: INFO: Pod "pod-64d2f18c-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040529108s Jan 5 11:27:49.215: INFO: Pod "pod-64d2f18c-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13286181s Jan 5 11:27:51.679: INFO: Pod "pod-64d2f18c-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.597816971s Jan 5 11:27:53.694: INFO: Pod "pod-64d2f18c-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.612099363s Jan 5 11:27:55.711: INFO: Pod "pod-64d2f18c-2fae-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.629701249s STEP: Saw pod success Jan 5 11:27:55.711: INFO: Pod "pod-64d2f18c-2fae-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 11:27:55.723: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-64d2f18c-2fae-11ea-910c-0242ac110004 container test-container: STEP: delete the pod Jan 5 11:27:55.843: INFO: Waiting for pod pod-64d2f18c-2fae-11ea-910c-0242ac110004 to disappear Jan 5 11:27:55.858: INFO: Pod pod-64d2f18c-2fae-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:27:55.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-f8dtm" for this suite. Jan 5 11:28:02.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:28:02.840: INFO: namespace: e2e-tests-emptydir-f8dtm, resource: bindings, ignored listing per whitelist Jan 5 11:28:02.870: INFO: namespace e2e-tests-emptydir-f8dtm deletion completed in 6.998068465s • [SLOW TEST:18.017 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:28:02.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Jan 5 11:28:03.126: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Jan 5 11:28:03.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bfmk7' Jan 5 11:28:05.573: INFO: stderr: "" Jan 5 11:28:05.574: INFO: stdout: "service/redis-slave created\n" Jan 5 11:28:05.575: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Jan 5 11:28:05.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bfmk7' Jan 5 11:28:06.032: INFO: stderr: "" Jan 5 11:28:06.032: INFO: stdout: "service/redis-master created\n" Jan 5 11:28:06.033: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an 
external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jan 5 11:28:06.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bfmk7' Jan 5 11:28:06.440: INFO: stderr: "" Jan 5 11:28:06.440: INFO: stdout: "service/frontend created\n" Jan 5 11:28:06.442: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Jan 5 11:28:06.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bfmk7' Jan 5 11:28:06.801: INFO: stderr: "" Jan 5 11:28:06.801: INFO: stdout: "deployment.extensions/frontend created\n" Jan 5 11:28:06.802: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 5 11:28:06.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bfmk7' Jan 5 11:28:07.279: INFO: stderr: "" Jan 5 11:28:07.279: INFO: stdout: "deployment.extensions/redis-master created\n" Jan 5 11:28:07.280: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Jan 5 11:28:07.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bfmk7' Jan 5 11:28:07.695: INFO: stderr: "" Jan 5 11:28:07.695: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Jan 5 11:28:07.695: INFO: Waiting for all frontend pods to be Running. Jan 5 11:28:37.749: INFO: Waiting for frontend to serve content. Jan 5 11:28:38.389: INFO: Trying to add a new entry to the guestbook. Jan 5 11:28:38.522: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jan 5 11:28:38.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bfmk7' Jan 5 11:28:38.993: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 5 11:28:38.994: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jan 5 11:28:38.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bfmk7' Jan 5 11:28:39.331: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 5 11:28:39.331: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 5 11:28:39.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bfmk7' Jan 5 11:28:39.650: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 5 11:28:39.650: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 5 11:28:39.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bfmk7' Jan 5 11:28:40.015: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 5 11:28:40.015: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 5 11:28:40.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bfmk7' Jan 5 11:28:40.725: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 5 11:28:40.730: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 5 11:28:40.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bfmk7' Jan 5 11:28:40.971: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 5 11:28:40.971: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:28:40.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bfmk7" for this suite. 
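(Aside: the guestbook manifests created earlier in this test use the extensions/v1beta1 Deployment API, which matches the v1.13 cluster in this run but has since been removed. On current clusters the same frontend Deployment would be written against apps/v1 with an explicit selector; the sketch below re-expresses the manifest already shown in the log, with the selector block being the only addition required by apps/v1.)

apiVersion: apps/v1                       # extensions/v1beta1 Deployments were removed in later releases
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:                               # required in apps/v1
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80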
Jan 5 11:29:27.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:29:27.354: INFO: namespace: e2e-tests-kubectl-bfmk7, resource: bindings, ignored listing per whitelist Jan 5 11:29:27.435: INFO: namespace e2e-tests-kubectl-bfmk7 deletion completed in 46.379171669s • [SLOW TEST:84.565 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:29:27.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jan 5 11:29:27.655: INFO: namespace e2e-tests-kubectl-mprrj Jan 5 11:29:27.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mprrj' Jan 5 11:29:28.016: INFO: stderr: "" Jan 5 11:29:28.016: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jan 5 11:29:29.031: INFO: Selector matched 1 pods for map[app:redis] Jan 5 11:29:29.031: INFO: Found 0 / 1 Jan 5 11:29:30.144: INFO: Selector matched 1 pods for map[app:redis] Jan 5 11:29:30.144: INFO: Found 0 / 1 Jan 5 11:29:31.028: INFO: Selector matched 1 pods for map[app:redis] Jan 5 11:29:31.028: INFO: Found 0 / 1 Jan 5 11:29:32.036: INFO: Selector matched 1 pods for map[app:redis] Jan 5 11:29:32.036: INFO: Found 0 / 1 Jan 5 11:29:33.029: INFO: Selector matched 1 pods for map[app:redis] Jan 5 11:29:33.029: INFO: Found 0 / 1 Jan 5 11:29:34.037: INFO: Selector matched 1 pods for map[app:redis] Jan 5 11:29:34.037: INFO: Found 0 / 1 Jan 5 11:29:35.604: INFO: Selector matched 1 pods for map[app:redis] Jan 5 11:29:35.604: INFO: Found 0 / 1 Jan 5 11:29:36.028: INFO: Selector matched 1 pods for map[app:redis] Jan 5 11:29:36.028: INFO: Found 0 / 1 Jan 5 11:29:37.106: INFO: Selector matched 1 pods for map[app:redis] Jan 5 11:29:37.106: INFO: Found 1 / 1 Jan 5 11:29:37.106: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 5 11:29:37.114: INFO: Selector matched 1 pods for map[app:redis] Jan 5 11:29:37.114: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 5 11:29:37.114: INFO: wait on redis-master startup in e2e-tests-kubectl-mprrj Jan 5 11:29:37.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4slhj redis-master --namespace=e2e-tests-kubectl-mprrj' Jan 5 11:29:37.285: INFO: stderr: "" Jan 5 11:29:37.285: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 05 Jan 11:29:36.187 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Jan 11:29:36.187 # Server started, Redis version 3.2.12\n1:M 05 Jan 11:29:36.187 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Jan 11:29:36.187 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jan 5 11:29:37.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-mprrj' Jan 5 11:29:37.570: INFO: stderr: "" Jan 5 11:29:37.570: INFO: stdout: "service/rm2 exposed\n" Jan 5 11:29:37.576: INFO: Service rm2 in namespace e2e-tests-kubectl-mprrj found. STEP: exposing service Jan 5 11:29:39.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-mprrj' Jan 5 11:29:39.907: INFO: stderr: "" Jan 5 11:29:39.907: INFO: stdout: "service/rm3 exposed\n" Jan 5 11:29:40.078: INFO: Service rm3 in namespace e2e-tests-kubectl-mprrj found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:29:42.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mprrj" for this suite. 
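(Aside: the `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` call shown above is shorthand for creating a Service. Roughly the equivalent manifest is sketched here; the selector is assumed to mirror the replication controller's pod labels, since the RC manifest itself is not echoed in this log.)

apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis                            # assumed; kubectl expose copies the RC's selector
    role: master                          # assumed
  ports:
  - port: 1234
    targetPort: 6379

The second expose call (`--name=rm3 --port=2345`) produces the same shape of Service again, pointed at the pods behind rm2.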
Jan 5 11:30:06.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:30:06.390: INFO: namespace: e2e-tests-kubectl-mprrj, resource: bindings, ignored listing per whitelist Jan 5 11:30:06.428: INFO: namespace e2e-tests-kubectl-mprrj deletion completed in 24.306702906s • [SLOW TEST:38.992 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:30:06.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jan 5 11:30:06.675: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:30:22.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-vb4s8" for this suite. 
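(Aside: the pod created above carries spec.initContainers, which must all run to completion, in order, before the app containers start; with restartPolicy Never a failed init container fails the pod outright. A minimal sketch of that shape, with illustrative names, images, and commands:)

apiVersion: v1
kind: Pod
metadata:
  name: init-demo                         # illustrative name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox                        # illustrative image
    command: ["sh", "-c", "echo init-1 done"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo init-2 done"]
  containers:
  - name: run-1
    image: busybox
    command: ["sh", "-c", "echo app running"]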
Jan 5 11:30:28.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:30:28.891: INFO: namespace: e2e-tests-init-container-vb4s8, resource: bindings, ignored listing per whitelist Jan 5 11:30:28.897: INFO: namespace e2e-tests-init-container-vb4s8 deletion completed in 6.403138333s • [SLOW TEST:22.468 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:30:28.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-c68ec19d-2fae-11ea-910c-0242ac110004 STEP: Creating secret with name secret-projected-all-test-volume-c68ec179-2fae-11ea-910c-0242ac110004 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 5 11:30:29.119: INFO: Waiting up to 5m0s for pod "projected-volume-c68ec0ef-2fae-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-8gl66" to be "success or failure" Jan 5 11:30:29.126: INFO: Pod "projected-volume-c68ec0ef-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.122938ms Jan 5 11:30:31.141: INFO: Pod "projected-volume-c68ec0ef-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022053165s Jan 5 11:30:33.152: INFO: Pod "projected-volume-c68ec0ef-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033399899s Jan 5 11:30:35.286: INFO: Pod "projected-volume-c68ec0ef-2fae-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166613483s Jan 5 11:30:37.379: INFO: Pod "projected-volume-c68ec0ef-2fae-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.259664328s STEP: Saw pod success Jan 5 11:30:37.379: INFO: Pod "projected-volume-c68ec0ef-2fae-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 11:30:37.388: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-c68ec0ef-2fae-11ea-910c-0242ac110004 container projected-all-volume-test: STEP: delete the pod Jan 5 11:30:37.523: INFO: Waiting for pod projected-volume-c68ec0ef-2fae-11ea-910c-0242ac110004 to disappear Jan 5 11:30:37.534: INFO: Pod projected-volume-c68ec0ef-2fae-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:30:37.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8gl66" for this suite. Jan 5 11:30:45.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:30:45.687: INFO: namespace: e2e-tests-projected-8gl66, resource: bindings, ignored listing per whitelist Jan 5 11:30:45.935: INFO: namespace e2e-tests-projected-8gl66 deletion completed in 8.393169499s • [SLOW TEST:17.038 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:30:45.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
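(Aside: the pod created next attaches a preStop httpGet lifecycle hook, so that deleting the pod causes the kubelet to issue an HTTP GET against the handler container started above before the container is stopped. A rough sketch of such a spec follows; the hook target's host, port, and path are illustrative assumptions, since the handler pod's address is not shown at this point in the log, and the image/command are likewise illustrative.)

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: busybox                        # illustrative image
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        httpGet:
          host: 10.32.0.5                 # assumed IP of the hook-handler pod
          port: 8080                      # assumed port
          path: /echo?msg=prestop         # assumed path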
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 5 11:31:04.548: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 5 11:31:04.587: INFO: Pod pod-with-prestop-http-hook still exists Jan 5 11:31:06.588: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 5 11:31:07.026: INFO: Pod pod-with-prestop-http-hook still exists Jan 5 11:31:08.588: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 5 11:31:08.646: INFO: Pod pod-with-prestop-http-hook still exists Jan 5 11:31:10.588: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 5 11:31:10.601: INFO: Pod pod-with-prestop-http-hook still exists Jan 5 11:31:12.588: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 5 11:31:12.631: INFO: Pod pod-with-prestop-http-hook still exists Jan 5 11:31:14.588: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 5 11:31:14.625: INFO: Pod pod-with-prestop-http-hook still exists Jan 5 11:31:16.588: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 5 11:31:16.626: INFO: Pod pod-with-prestop-http-hook still exists Jan 5 11:31:18.588: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 5 11:31:18.605: INFO: Pod pod-with-prestop-http-hook still exists Jan 5 11:31:20.588: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 5 11:31:20.632: INFO: Pod pod-with-prestop-http-hook still exists Jan 5 11:31:22.588: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 5 11:31:22.678: INFO: Pod pod-with-prestop-http-hook still exists Jan 5 11:31:24.589: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 5 11:31:24.648: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:31:24.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-zp26p" for this suite. 
Jan 5 11:31:46.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:31:46.948: INFO: namespace: e2e-tests-container-lifecycle-hook-zp26p, resource: bindings, ignored listing per whitelist Jan 5 11:31:46.982: INFO: namespace e2e-tests-container-lifecycle-hook-zp26p deletion completed in 22.233423436s • [SLOW TEST:61.046 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:31:46.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 5 11:32:13.481: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j27gk PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 5 11:32:13.481: INFO: >>> kubeConfig: /root/.kube/config I0105 11:32:13.567427 8 log.go:172] (0xc0012a0420) (0xc001c1d2c0) Create stream I0105 11:32:13.567533 8 log.go:172] (0xc0012a0420) (0xc001c1d2c0) Stream added, broadcasting: 1 I0105 11:32:13.574037 8 log.go:172] (0xc0012a0420) Reply frame received for 1 I0105 11:32:13.574094 8 log.go:172] (0xc0012a0420) (0xc0020a2c80) Create stream I0105 11:32:13.574112 8 log.go:172] (0xc0012a0420) (0xc0020a2c80) Stream added, broadcasting: 3 I0105 11:32:13.575694 8 log.go:172] (0xc0012a0420) Reply frame received for 3 I0105 11:32:13.575743 8 log.go:172] (0xc0012a0420) (0xc001c1d360) Create stream I0105 11:32:13.575761 8 log.go:172] (0xc0012a0420) (0xc001c1d360) Stream added, broadcasting: 5 I0105 11:32:13.577251 8 log.go:172] (0xc0012a0420) Reply frame received for 5 I0105 11:32:13.742999 8 log.go:172] (0xc0012a0420) Data frame received for 3 I0105 11:32:13.743116 8 log.go:172] (0xc0020a2c80) (3) Data frame handling I0105 11:32:13.743193 8 log.go:172] (0xc0020a2c80) (3) Data frame sent I0105 11:32:14.104111 8 log.go:172] (0xc0012a0420) (0xc0020a2c80) Stream removed, broadcasting: 3 I0105 11:32:14.104580 8 log.go:172] (0xc0012a0420) Data frame received for 1 I0105 11:32:14.104823 8 log.go:172] (0xc0012a0420) (0xc001c1d360) Stream removed, broadcasting: 5 I0105 11:32:14.104976 8 log.go:172] (0xc001c1d2c0) (1) Data frame handling I0105 11:32:14.105032 8 log.go:172] 
(0xc001c1d2c0) (1) Data frame sent I0105 11:32:14.105070 8 log.go:172] (0xc0012a0420) (0xc001c1d2c0) Stream removed, broadcasting: 1 I0105 11:32:14.105107 8 log.go:172] (0xc0012a0420) Go away received I0105 11:32:14.105733 8 log.go:172] (0xc0012a0420) (0xc001c1d2c0) Stream removed, broadcasting: 1 I0105 11:32:14.105777 8 log.go:172] (0xc0012a0420) (0xc0020a2c80) Stream removed, broadcasting: 3 I0105 11:32:14.105837 8 log.go:172] (0xc0012a0420) (0xc001c1d360) Stream removed, broadcasting: 5 Jan 5 11:32:14.105: INFO: Exec stderr: "" Jan 5 11:32:14.106: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j27gk PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 5 11:32:14.106: INFO: >>> kubeConfig: /root/.kube/config I0105 11:32:14.180286 8 log.go:172] (0xc0012a08f0) (0xc001c1d680) Create stream I0105 11:32:14.180402 8 log.go:172] (0xc0012a08f0) (0xc001c1d680) Stream added, broadcasting: 1 I0105 11:32:14.186337 8 log.go:172] (0xc0012a08f0) Reply frame received for 1 I0105 11:32:14.186373 8 log.go:172] (0xc0012a08f0) (0xc000d6a1e0) Create stream I0105 11:32:14.186394 8 log.go:172] (0xc0012a08f0) (0xc000d6a1e0) Stream added, broadcasting: 3 I0105 11:32:14.187173 8 log.go:172] (0xc0012a08f0) Reply frame received for 3 I0105 11:32:14.187207 8 log.go:172] (0xc0012a08f0) (0xc00099c0a0) Create stream I0105 11:32:14.187216 8 log.go:172] (0xc0012a08f0) (0xc00099c0a0) Stream added, broadcasting: 5 I0105 11:32:14.188280 8 log.go:172] (0xc0012a08f0) Reply frame received for 5 I0105 11:32:14.313150 8 log.go:172] (0xc0012a08f0) Data frame received for 3 I0105 11:32:14.313212 8 log.go:172] (0xc000d6a1e0) (3) Data frame handling I0105 11:32:14.313250 8 log.go:172] (0xc000d6a1e0) (3) Data frame sent I0105 11:32:14.452974 8 log.go:172] (0xc0012a08f0) Data frame received for 1 I0105 11:32:14.453080 8 log.go:172] (0xc0012a08f0) (0xc000d6a1e0) Stream removed, broadcasting: 3 I0105 11:32:14.453142 8 log.go:172] (0xc001c1d680) (1) Data frame handling I0105 11:32:14.453198 8 log.go:172] (0xc001c1d680) (1) Data frame sent I0105 11:32:14.453231 8 log.go:172] (0xc0012a08f0) (0xc00099c0a0) Stream removed, broadcasting: 5 I0105 11:32:14.453287 8 log.go:172] (0xc0012a08f0) (0xc001c1d680) Stream removed, broadcasting: 1 I0105 11:32:14.453327 8 log.go:172] (0xc0012a08f0) Go away received I0105 11:32:14.453548 8 log.go:172] (0xc0012a08f0) (0xc001c1d680) Stream removed, broadcasting: 1 I0105 11:32:14.453578 8 log.go:172] (0xc0012a08f0) (0xc000d6a1e0) Stream removed, broadcasting: 3 I0105 11:32:14.453602 8 log.go:172] (0xc0012a08f0) (0xc00099c0a0) Stream removed, broadcasting: 5 Jan 5 11:32:14.453: INFO: Exec stderr: "" Jan 5 11:32:14.453: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j27gk PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 5 11:32:14.453: INFO: >>> kubeConfig: /root/.kube/config I0105 11:32:14.564683 8 log.go:172] (0xc0010e42c0) (0xc000d6a460) Create stream I0105 11:32:14.565182 8 log.go:172] (0xc0010e42c0) (0xc000d6a460) Stream added, broadcasting: 1 I0105 11:32:14.578270 8 log.go:172] (0xc0010e42c0) Reply frame received for 1 I0105 11:32:14.578432 8 log.go:172] (0xc0010e42c0) (0xc00099c140) Create stream I0105 11:32:14.578458 8 log.go:172] (0xc0010e42c0) (0xc00099c140) Stream added, broadcasting: 3 I0105 11:32:14.580619 8 log.go:172] (0xc0010e42c0) Reply frame received for 3 I0105 
11:32:14.580847 8 log.go:172] (0xc0010e42c0) (0xc0018a6960) Create stream I0105 11:32:14.580886 8 log.go:172] (0xc0010e42c0) (0xc0018a6960) Stream added, broadcasting: 5 I0105 11:32:14.582468 8 log.go:172] (0xc0010e42c0) Reply frame received for 5 I0105 11:32:14.892104 8 log.go:172] (0xc0010e42c0) Data frame received for 3 I0105 11:32:14.892287 8 log.go:172] (0xc00099c140) (3) Data frame handling I0105 11:32:14.892377 8 log.go:172] (0xc00099c140) (3) Data frame sent I0105 11:32:15.006805 8 log.go:172] (0xc0010e42c0) (0xc00099c140) Stream removed, broadcasting: 3 I0105 11:32:15.006940 8 log.go:172] (0xc0010e42c0) Data frame received for 1 I0105 11:32:15.006974 8 log.go:172] (0xc0010e42c0) (0xc0018a6960) Stream removed, broadcasting: 5 I0105 11:32:15.007033 8 log.go:172] (0xc000d6a460) (1) Data frame handling I0105 11:32:15.007065 8 log.go:172] (0xc000d6a460) (1) Data frame sent I0105 11:32:15.007075 8 log.go:172] (0xc0010e42c0) (0xc000d6a460) Stream removed, broadcasting: 1 I0105 11:32:15.007090 8 log.go:172] (0xc0010e42c0) Go away received I0105 11:32:15.007377 8 log.go:172] (0xc0010e42c0) (0xc000d6a460) Stream removed, broadcasting: 1 I0105 11:32:15.007387 8 log.go:172] (0xc0010e42c0) (0xc00099c140) Stream removed, broadcasting: 3 I0105 11:32:15.007394 8 log.go:172] (0xc0010e42c0) (0xc0018a6960) Stream removed, broadcasting: 5 Jan 5 11:32:15.007: INFO: Exec stderr: "" Jan 5 11:32:15.007: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j27gk PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 5 11:32:15.007: INFO: >>> kubeConfig: /root/.kube/config I0105 11:32:15.073477 8 log.go:172] (0xc0010e4790) (0xc000d6a8c0) Create stream I0105 11:32:15.073572 8 log.go:172] (0xc0010e4790) (0xc000d6a8c0) Stream added, broadcasting: 1 I0105 11:32:15.078796 8 log.go:172] (0xc0010e4790) Reply frame received for 1 I0105 11:32:15.078893 8 log.go:172] (0xc0010e4790) (0xc00099c1e0) Create stream I0105 11:32:15.078913 8 log.go:172] (0xc0010e4790) (0xc00099c1e0) Stream added, broadcasting: 3 I0105 11:32:15.079783 8 log.go:172] (0xc0010e4790) Reply frame received for 3 I0105 11:32:15.079817 8 log.go:172] (0xc0010e4790) (0xc000d6a960) Create stream I0105 11:32:15.079838 8 log.go:172] (0xc0010e4790) (0xc000d6a960) Stream added, broadcasting: 5 I0105 11:32:15.081132 8 log.go:172] (0xc0010e4790) Reply frame received for 5 I0105 11:32:15.181680 8 log.go:172] (0xc0010e4790) Data frame received for 3 I0105 11:32:15.181708 8 log.go:172] (0xc00099c1e0) (3) Data frame handling I0105 11:32:15.181726 8 log.go:172] (0xc00099c1e0) (3) Data frame sent I0105 11:32:15.281335 8 log.go:172] (0xc0010e4790) Data frame received for 1 I0105 11:32:15.281532 8 log.go:172] (0xc0010e4790) (0xc00099c1e0) Stream removed, broadcasting: 3 I0105 11:32:15.281612 8 log.go:172] (0xc000d6a8c0) (1) Data frame handling I0105 11:32:15.281656 8 log.go:172] (0xc000d6a8c0) (1) Data frame sent I0105 11:32:15.281694 8 log.go:172] (0xc0010e4790) (0xc000d6a960) Stream removed, broadcasting: 5 I0105 11:32:15.281766 8 log.go:172] (0xc0010e4790) (0xc000d6a8c0) Stream removed, broadcasting: 1 I0105 11:32:15.281784 8 log.go:172] (0xc0010e4790) Go away received I0105 11:32:15.282010 8 log.go:172] (0xc0010e4790) (0xc000d6a8c0) Stream removed, broadcasting: 1 I0105 11:32:15.282026 8 log.go:172] (0xc0010e4790) (0xc00099c1e0) Stream removed, broadcasting: 3 I0105 11:32:15.282038 8 log.go:172] (0xc0010e4790) (0xc000d6a960) Stream removed, broadcasting: 5 
Jan 5 11:32:15.282: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 5 11:32:15.282: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j27gk PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 5 11:32:15.282: INFO: >>> kubeConfig: /root/.kube/config I0105 11:32:15.356861 8 log.go:172] (0xc001a902c0) (0xc00099c460) Create stream I0105 11:32:15.356940 8 log.go:172] (0xc001a902c0) (0xc00099c460) Stream added, broadcasting: 1 I0105 11:32:15.360907 8 log.go:172] (0xc001a902c0) Reply frame received for 1 I0105 11:32:15.360960 8 log.go:172] (0xc001a902c0) (0xc000d6aa00) Create stream I0105 11:32:15.360974 8 log.go:172] (0xc001a902c0) (0xc000d6aa00) Stream added, broadcasting: 3 I0105 11:32:15.362012 8 log.go:172] (0xc001a902c0) Reply frame received for 3 I0105 11:32:15.362055 8 log.go:172] (0xc001a902c0) (0xc00099c5a0) Create stream I0105 11:32:15.362075 8 log.go:172] (0xc001a902c0) (0xc00099c5a0) Stream added, broadcasting: 5 I0105 11:32:15.364961 8 log.go:172] (0xc001a902c0) Reply frame received for 5 I0105 11:32:15.490446 8 log.go:172] (0xc001a902c0) Data frame received for 3 I0105 11:32:15.490501 8 log.go:172] (0xc000d6aa00) (3) Data frame handling I0105 11:32:15.490531 8 log.go:172] (0xc000d6aa00) (3) Data frame sent I0105 11:32:15.619554 8 log.go:172] (0xc001a902c0) Data frame received for 1 I0105 11:32:15.619642 8 log.go:172] (0xc001a902c0) (0xc000d6aa00) Stream removed, broadcasting: 3 I0105 11:32:15.619701 8 log.go:172] (0xc00099c460) (1) Data frame handling I0105 11:32:15.619724 8 log.go:172] (0xc00099c460) (1) Data frame sent I0105 11:32:15.619764 8 log.go:172] (0xc001a902c0) (0xc00099c5a0) Stream removed, broadcasting: 5 I0105 11:32:15.619792 8 log.go:172] (0xc001a902c0) (0xc00099c460) Stream removed, broadcasting: 1 I0105 11:32:15.619806 8 log.go:172] (0xc001a902c0) Go away received I0105 11:32:15.620511 8 log.go:172] (0xc001a902c0) (0xc00099c460) Stream removed, broadcasting: 1 I0105 11:32:15.620571 8 log.go:172] (0xc001a902c0) (0xc000d6aa00) Stream removed, broadcasting: 3 I0105 11:32:15.620585 8 log.go:172] (0xc001a902c0) (0xc00099c5a0) Stream removed, broadcasting: 5 Jan 5 11:32:15.620: INFO: Exec stderr: "" Jan 5 11:32:15.620: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j27gk PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 5 11:32:15.620: INFO: >>> kubeConfig: /root/.kube/config I0105 11:32:15.689715 8 log.go:172] (0xc001a90790) (0xc00099caa0) Create stream I0105 11:32:15.689742 8 log.go:172] (0xc001a90790) (0xc00099caa0) Stream added, broadcasting: 1 I0105 11:32:15.700261 8 log.go:172] (0xc001a90790) Reply frame received for 1 I0105 11:32:15.700411 8 log.go:172] (0xc001a90790) (0xc0018a6a00) Create stream I0105 11:32:15.700424 8 log.go:172] (0xc001a90790) (0xc0018a6a00) Stream added, broadcasting: 3 I0105 11:32:15.702333 8 log.go:172] (0xc001a90790) Reply frame received for 3 I0105 11:32:15.702368 8 log.go:172] (0xc001a90790) (0xc001ffcbe0) Create stream I0105 11:32:15.702378 8 log.go:172] (0xc001a90790) (0xc001ffcbe0) Stream added, broadcasting: 5 I0105 11:32:15.704502 8 log.go:172] (0xc001a90790) Reply frame received for 5 I0105 11:32:15.866320 8 log.go:172] (0xc001a90790) Data frame received for 3 I0105 11:32:15.866439 8 log.go:172] (0xc0018a6a00) (3) Data frame handling 
I0105 11:32:15.866497 8 log.go:172] (0xc0018a6a00) (3) Data frame sent I0105 11:32:16.036218 8 log.go:172] (0xc001a90790) Data frame received for 1 I0105 11:32:16.036310 8 log.go:172] (0xc001a90790) (0xc0018a6a00) Stream removed, broadcasting: 3 I0105 11:32:16.036358 8 log.go:172] (0xc00099caa0) (1) Data frame handling I0105 11:32:16.036389 8 log.go:172] (0xc00099caa0) (1) Data frame sent I0105 11:32:16.036434 8 log.go:172] (0xc001a90790) (0xc001ffcbe0) Stream removed, broadcasting: 5 I0105 11:32:16.036478 8 log.go:172] (0xc001a90790) (0xc00099caa0) Stream removed, broadcasting: 1 I0105 11:32:16.036504 8 log.go:172] (0xc001a90790) Go away received I0105 11:32:16.036676 8 log.go:172] (0xc001a90790) (0xc00099caa0) Stream removed, broadcasting: 1 I0105 11:32:16.036695 8 log.go:172] (0xc001a90790) (0xc0018a6a00) Stream removed, broadcasting: 3 I0105 11:32:16.036712 8 log.go:172] (0xc001a90790) (0xc001ffcbe0) Stream removed, broadcasting: 5 Jan 5 11:32:16.036: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 5 11:32:16.036: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j27gk PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 5 11:32:16.036: INFO: >>> kubeConfig: /root/.kube/config I0105 11:32:16.122179 8 log.go:172] (0xc001a90c60) (0xc00099cd20) Create stream I0105 11:32:16.122244 8 log.go:172] (0xc001a90c60) (0xc00099cd20) Stream added, broadcasting: 1 I0105 11:32:16.159198 8 log.go:172] (0xc001a90c60) Reply frame received for 1 I0105 11:32:16.159293 8 log.go:172] (0xc001a90c60) (0xc0020060a0) Create stream I0105 11:32:16.159317 8 log.go:172] (0xc001a90c60) (0xc0020060a0) Stream added, broadcasting: 3 I0105 11:32:16.161036 8 log.go:172] (0xc001a90c60) Reply frame received for 3 I0105 11:32:16.161068 8 log.go:172] (0xc001a90c60) (0xc001b80000) Create stream I0105 11:32:16.161081 8 log.go:172] (0xc001a90c60) (0xc001b80000) Stream added, broadcasting: 5 I0105 11:32:16.162394 8 log.go:172] (0xc001a90c60) Reply frame received for 5 I0105 11:32:16.259051 8 log.go:172] (0xc001a90c60) Data frame received for 3 I0105 11:32:16.259114 8 log.go:172] (0xc0020060a0) (3) Data frame handling I0105 11:32:16.259172 8 log.go:172] (0xc0020060a0) (3) Data frame sent I0105 11:32:16.360413 8 log.go:172] (0xc001a90c60) Data frame received for 1 I0105 11:32:16.360511 8 log.go:172] (0xc001a90c60) (0xc0020060a0) Stream removed, broadcasting: 3 I0105 11:32:16.360604 8 log.go:172] (0xc00099cd20) (1) Data frame handling I0105 11:32:16.360645 8 log.go:172] (0xc00099cd20) (1) Data frame sent I0105 11:32:16.360654 8 log.go:172] (0xc001a90c60) (0xc00099cd20) Stream removed, broadcasting: 1 I0105 11:32:16.361077 8 log.go:172] (0xc001a90c60) (0xc001b80000) Stream removed, broadcasting: 5 I0105 11:32:16.361145 8 log.go:172] (0xc001a90c60) (0xc00099cd20) Stream removed, broadcasting: 1 I0105 11:32:16.361155 8 log.go:172] (0xc001a90c60) (0xc0020060a0) Stream removed, broadcasting: 3 I0105 11:32:16.361167 8 log.go:172] (0xc001a90c60) (0xc001b80000) Stream removed, broadcasting: 5 I0105 11:32:16.361652 8 log.go:172] (0xc001a90c60) Go away received Jan 5 11:32:16.361: INFO: Exec stderr: "" Jan 5 11:32:16.361: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j27gk PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 
5 11:32:16.362: INFO: >>> kubeConfig: /root/.kube/config I0105 11:32:16.427068 8 log.go:172] (0xc0008f8420) (0xc0015f43c0) Create stream I0105 11:32:16.427158 8 log.go:172] (0xc0008f8420) (0xc0015f43c0) Stream added, broadcasting: 1 I0105 11:32:16.432186 8 log.go:172] (0xc0008f8420) Reply frame received for 1 I0105 11:32:16.432326 8 log.go:172] (0xc0008f8420) (0xc000eda000) Create stream I0105 11:32:16.432355 8 log.go:172] (0xc0008f8420) (0xc000eda000) Stream added, broadcasting: 3 I0105 11:32:16.441568 8 log.go:172] (0xc0008f8420) Reply frame received for 3 I0105 11:32:16.441637 8 log.go:172] (0xc0008f8420) (0xc0016b0140) Create stream I0105 11:32:16.441658 8 log.go:172] (0xc0008f8420) (0xc0016b0140) Stream added, broadcasting: 5 I0105 11:32:16.443733 8 log.go:172] (0xc0008f8420) Reply frame received for 5 I0105 11:32:16.662737 8 log.go:172] (0xc0008f8420) Data frame received for 3 I0105 11:32:16.662863 8 log.go:172] (0xc000eda000) (3) Data frame handling I0105 11:32:16.662940 8 log.go:172] (0xc000eda000) (3) Data frame sent I0105 11:32:16.932831 8 log.go:172] (0xc0008f8420) Data frame received for 1 I0105 11:32:16.932903 8 log.go:172] (0xc0015f43c0) (1) Data frame handling I0105 11:32:16.932929 8 log.go:172] (0xc0015f43c0) (1) Data frame sent I0105 11:32:16.932965 8 log.go:172] (0xc0008f8420) (0xc0015f43c0) Stream removed, broadcasting: 1 I0105 11:32:16.933284 8 log.go:172] (0xc0008f8420) (0xc000eda000) Stream removed, broadcasting: 3 I0105 11:32:16.933542 8 log.go:172] (0xc0008f8420) (0xc0016b0140) Stream removed, broadcasting: 5 I0105 11:32:16.933635 8 log.go:172] (0xc0008f8420) (0xc0015f43c0) Stream removed, broadcasting: 1 I0105 11:32:16.933650 8 log.go:172] (0xc0008f8420) (0xc000eda000) Stream removed, broadcasting: 3 I0105 11:32:16.933664 8 log.go:172] (0xc0008f8420) (0xc0016b0140) Stream removed, broadcasting: 5 Jan 5 11:32:16.934: INFO: Exec stderr: "" Jan 5 11:32:16.934: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j27gk PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 5 11:32:16.934: INFO: >>> kubeConfig: /root/.kube/config I0105 11:32:16.934941 8 log.go:172] (0xc0008f8420) Go away received I0105 11:32:17.185212 8 log.go:172] (0xc001a902c0) (0xc001b803c0) Create stream I0105 11:32:17.185556 8 log.go:172] (0xc001a902c0) (0xc001b803c0) Stream added, broadcasting: 1 I0105 11:32:17.203534 8 log.go:172] (0xc001a902c0) Reply frame received for 1 I0105 11:32:17.203640 8 log.go:172] (0xc001a902c0) (0xc0015f4460) Create stream I0105 11:32:17.203659 8 log.go:172] (0xc001a902c0) (0xc0015f4460) Stream added, broadcasting: 3 I0105 11:32:17.205415 8 log.go:172] (0xc001a902c0) Reply frame received for 3 I0105 11:32:17.205464 8 log.go:172] (0xc001a902c0) (0xc000eda140) Create stream I0105 11:32:17.205477 8 log.go:172] (0xc001a902c0) (0xc000eda140) Stream added, broadcasting: 5 I0105 11:32:17.206658 8 log.go:172] (0xc001a902c0) Reply frame received for 5 I0105 11:32:17.340307 8 log.go:172] (0xc001a902c0) Data frame received for 3 I0105 11:32:17.340480 8 log.go:172] (0xc0015f4460) (3) Data frame handling I0105 11:32:17.340557 8 log.go:172] (0xc0015f4460) (3) Data frame sent I0105 11:32:17.483301 8 log.go:172] (0xc001a902c0) (0xc0015f4460) Stream removed, broadcasting: 3 I0105 11:32:17.483460 8 log.go:172] (0xc001a902c0) Data frame received for 1 I0105 11:32:17.483483 8 log.go:172] (0xc001b803c0) (1) Data frame handling I0105 11:32:17.483548 8 log.go:172] (0xc001b803c0) (1) 
Data frame sent I0105 11:32:17.483586 8 log.go:172] (0xc001a902c0) (0xc000eda140) Stream removed, broadcasting: 5 I0105 11:32:17.483624 8 log.go:172] (0xc001a902c0) (0xc001b803c0) Stream removed, broadcasting: 1 I0105 11:32:17.484095 8 log.go:172] (0xc001a902c0) Go away received I0105 11:32:17.484348 8 log.go:172] (0xc001a902c0) (0xc001b803c0) Stream removed, broadcasting: 1 I0105 11:32:17.484478 8 log.go:172] (0xc001a902c0) (0xc0015f4460) Stream removed, broadcasting: 3 I0105 11:32:17.484516 8 log.go:172] (0xc001a902c0) (0xc000eda140) Stream removed, broadcasting: 5 Jan 5 11:32:17.484: INFO: Exec stderr: "" Jan 5 11:32:17.484: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-j27gk PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 5 11:32:17.484: INFO: >>> kubeConfig: /root/.kube/config I0105 11:32:17.571657 8 log.go:172] (0xc0020b44d0) (0xc002006280) Create stream I0105 11:32:17.571791 8 log.go:172] (0xc0020b44d0) (0xc002006280) Stream added, broadcasting: 1 I0105 11:32:17.581103 8 log.go:172] (0xc0020b44d0) Reply frame received for 1 I0105 11:32:17.581171 8 log.go:172] (0xc0020b44d0) (0xc000eda280) Create stream I0105 11:32:17.581179 8 log.go:172] (0xc0020b44d0) (0xc000eda280) Stream added, broadcasting: 3 I0105 11:32:17.581994 8 log.go:172] (0xc0020b44d0) Reply frame received for 3 I0105 11:32:17.582017 8 log.go:172] (0xc0020b44d0) (0xc0020063c0) Create stream I0105 11:32:17.582024 8 log.go:172] (0xc0020b44d0) (0xc0020063c0) Stream added, broadcasting: 5 I0105 11:32:17.584312 8 log.go:172] (0xc0020b44d0) Reply frame received for 5 I0105 11:32:17.689049 8 log.go:172] (0xc0020b44d0) Data frame received for 3 I0105 11:32:17.689118 8 log.go:172] (0xc000eda280) (3) Data frame handling I0105 11:32:17.689157 8 log.go:172] (0xc000eda280) (3) Data frame sent I0105 11:32:17.937501 8 log.go:172] (0xc0020b44d0) Data frame received for 1 I0105 11:32:17.937700 8 log.go:172] (0xc002006280) (1) Data frame handling I0105 11:32:17.937755 8 log.go:172] (0xc002006280) (1) Data frame sent I0105 11:32:17.938130 8 log.go:172] (0xc0020b44d0) (0xc002006280) Stream removed, broadcasting: 1 I0105 11:32:17.938913 8 log.go:172] (0xc0020b44d0) (0xc000eda280) Stream removed, broadcasting: 3 I0105 11:32:17.939449 8 log.go:172] (0xc0020b44d0) (0xc0020063c0) Stream removed, broadcasting: 5 I0105 11:32:17.939585 8 log.go:172] (0xc0020b44d0) (0xc002006280) Stream removed, broadcasting: 1 I0105 11:32:17.939606 8 log.go:172] (0xc0020b44d0) (0xc000eda280) Stream removed, broadcasting: 3 I0105 11:32:17.939620 8 log.go:172] (0xc0020b44d0) (0xc0020063c0) Stream removed, broadcasting: 5 Jan 5 11:32:17.939: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:32:17.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-j27gk" for this suite. 
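The verification steps above exec `cat /etc/hosts` (and `cat /etc/hosts-original`) in containers of two pods: one on the cluster network, where the kubelet manages /etc/hosts unless the container mounts its own file there, and one with hostNetwork=true, where the node's file is used untouched. The repeated log.go "Create stream / Data frame" entries are the SPDY transport used by ExecWithOptions to run those commands. Below is a rough sketch, with placeholder names and images and not the suite's actual source, of the two pod shapes being compared.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod on the cluster network: the kubelet writes a managed /etc/hosts
	// into busybox-1, while busybox-3 mounts its own file at /etc/hosts,
	// so that container keeps an unmanaged copy.
	managed := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "hosts-volume", // hypothetical volume name
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
			Containers: []corev1.Container{
				{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "3600"}},
				{
					Name:    "busybox-3",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
					VolumeMounts: []corev1.VolumeMount{{
						Name:      "hosts-volume",
						MountPath: "/etc/hosts", // explicit mount disables kubelet management
					}},
				},
			},
		},
	}

	// Pod on the host network: /etc/hosts is the node's own file and is
	// never rewritten by the kubelet.
	hostNet := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod"},
		Spec: corev1.PodSpec{
			HostNetwork: true,
			Containers: []corev1.Container{
				{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}

	for _, p := range []*corev1.Pod{managed, hostNet} {
		out, _ := json.MarshalIndent(p, "", "  ")
		fmt.Println(string(out))
	}
}
```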
Jan 5 11:33:04.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:33:04.277: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-j27gk, resource: bindings, ignored listing per whitelist Jan 5 11:33:04.400: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-j27gk deletion completed in 46.224728371s • [SLOW TEST:77.418 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:33:04.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0105 11:33:35.412964 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 5 11:33:35.413: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:33:35.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-hfcsk" for this suite. 
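This garbage-collector case deletes the Deployment with deleteOptions.PropagationPolicy=Orphan and then waits 30 seconds to confirm the ReplicaSet is left behind. A minimal sketch of the delete options involved is below; the client call itself is only hinted at in a comment because its signature differs between client-go releases, and the deployment name shown is a placeholder.

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Orphan propagation: the Deployment object is removed, but its
	// dependents (the ReplicaSet, and transitively the Pods) survive with
	// their ownerReferences cleared.
	policy := metav1.DeletePropagationOrphan
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	// Roughly what a 1.13-era caller might pass, e.g.
	//   client.AppsV1().Deployments(ns).Delete("example-deployment", &opts)
	// (newer client-go versions also take a context and a value, not a pointer).
	out, _ := json.Marshal(opts)
	fmt.Println(string(out))
}
```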
Jan 5 11:33:45.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:33:45.946: INFO: namespace: e2e-tests-gc-hfcsk, resource: bindings, ignored listing per whitelist Jan 5 11:33:45.991: INFO: namespace e2e-tests-gc-hfcsk deletion completed in 10.570727864s • [SLOW TEST:41.591 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:33:45.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-lrhgn/configmap-test-3c4d2593-2faf-11ea-910c-0242ac110004 STEP: Creating a pod to test consume configMaps Jan 5 11:33:47.464: INFO: Waiting up to 5m0s for pod "pod-configmaps-3c527ee7-2faf-11ea-910c-0242ac110004" in namespace "e2e-tests-configmap-lrhgn" to be "success or failure" Jan 5 11:33:47.480: INFO: Pod "pod-configmaps-3c527ee7-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.237677ms Jan 5 11:33:49.752: INFO: Pod "pod-configmaps-3c527ee7-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287881602s Jan 5 11:33:51.795: INFO: Pod "pod-configmaps-3c527ee7-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330481907s Jan 5 11:33:53.819: INFO: Pod "pod-configmaps-3c527ee7-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.354598371s Jan 5 11:33:55.836: INFO: Pod "pod-configmaps-3c527ee7-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.371560768s Jan 5 11:33:57.859: INFO: Pod "pod-configmaps-3c527ee7-2faf-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.394585681s STEP: Saw pod success Jan 5 11:33:57.859: INFO: Pod "pod-configmaps-3c527ee7-2faf-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 11:33:57.879: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3c527ee7-2faf-11ea-910c-0242ac110004 container env-test: STEP: delete the pod Jan 5 11:33:58.285: INFO: Waiting for pod pod-configmaps-3c527ee7-2faf-11ea-910c-0242ac110004 to disappear Jan 5 11:33:58.345: INFO: Pod pod-configmaps-3c527ee7-2faf-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:33:58.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-lrhgn" for this suite. 
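The ConfigMap test above injects a ConfigMap key into a container through an environment variable and checks the value in the env-test container's output. A small sketch of that wiring follows; the names, key, and image are placeholders rather than the suite's actual values.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// ConfigMap holding the value the pod will read.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
		Data:       map[string]string{"data-1": "value-1"},
	}

	// Pod whose env-test container maps the "data-1" key to CONFIG_DATA_1
	// and simply prints its environment.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}

	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```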
Jan 5 11:34:04.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:34:04.768: INFO: namespace: e2e-tests-configmap-lrhgn, resource: bindings, ignored listing per whitelist Jan 5 11:34:04.842: INFO: namespace e2e-tests-configmap-lrhgn deletion completed in 6.485712217s • [SLOW TEST:18.850 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:34:04.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Jan 5 11:34:05.017: INFO: Waiting up to 5m0s for pod "var-expansion-474af39e-2faf-11ea-910c-0242ac110004" in namespace "e2e-tests-var-expansion-7l5fc" to be "success or failure" Jan 5 11:34:05.125: INFO: Pod "var-expansion-474af39e-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 108.398649ms Jan 5 11:34:07.141: INFO: Pod "var-expansion-474af39e-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124267762s Jan 5 11:34:09.981: INFO: Pod "var-expansion-474af39e-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.964178242s Jan 5 11:34:12.344: INFO: Pod "var-expansion-474af39e-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.327374817s Jan 5 11:34:14.371: INFO: Pod "var-expansion-474af39e-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.353456961s Jan 5 11:34:16.427: INFO: Pod "var-expansion-474af39e-2faf-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.410364134s STEP: Saw pod success Jan 5 11:34:16.428: INFO: Pod "var-expansion-474af39e-2faf-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 11:34:16.433: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-474af39e-2faf-11ea-910c-0242ac110004 container dapi-container: STEP: delete the pod Jan 5 11:34:16.859: INFO: Waiting for pod var-expansion-474af39e-2faf-11ea-910c-0242ac110004 to disappear Jan 5 11:34:16.879: INFO: Pod var-expansion-474af39e-2faf-11ea-910c-0242ac110004 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:34:16.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-7l5fc" for this suite. 
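The variable-expansion case relies on Kubernetes substituting $(VAR) references in a container's command and args from its declared environment before the container starts. A minimal sketch of a pod exercising that behaviour, with a placeholder message and image:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// $(MESSAGE) in args is expanded by the kubelet from the env entry
	// below, independently of any shell.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"/bin/echo"},
				Args:    []string{"$(MESSAGE)"},
				Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from the substituted command"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```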
Jan 5 11:34:23.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:34:23.269: INFO: namespace: e2e-tests-var-expansion-7l5fc, resource: bindings, ignored listing per whitelist Jan 5 11:34:23.276: INFO: namespace e2e-tests-var-expansion-7l5fc deletion completed in 6.387960313s • [SLOW TEST:18.433 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:34:23.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 5 11:34:23.469: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5249fdd4-2faf-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-tcrhq" to be "success or failure" Jan 5 11:34:23.639: INFO: Pod "downwardapi-volume-5249fdd4-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 169.766028ms Jan 5 11:34:25.725: INFO: Pod "downwardapi-volume-5249fdd4-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255860135s Jan 5 11:34:27.745: INFO: Pod "downwardapi-volume-5249fdd4-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.276129731s Jan 5 11:34:29.764: INFO: Pod "downwardapi-volume-5249fdd4-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.294517506s Jan 5 11:34:31.777: INFO: Pod "downwardapi-volume-5249fdd4-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.308084706s Jan 5 11:34:33.797: INFO: Pod "downwardapi-volume-5249fdd4-2faf-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.327438394s STEP: Saw pod success Jan 5 11:34:33.797: INFO: Pod "downwardapi-volume-5249fdd4-2faf-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 11:34:33.805: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5249fdd4-2faf-11ea-910c-0242ac110004 container client-container: STEP: delete the pod Jan 5 11:34:34.842: INFO: Waiting for pod downwardapi-volume-5249fdd4-2faf-11ea-910c-0242ac110004 to disappear Jan 5 11:34:34.856: INFO: Pod downwardapi-volume-5249fdd4-2faf-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:34:34.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tcrhq" for this suite. Jan 5 11:34:43.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:34:43.101: INFO: namespace: e2e-tests-projected-tcrhq, resource: bindings, ignored listing per whitelist Jan 5 11:34:43.152: INFO: namespace e2e-tests-projected-tcrhq deletion completed in 8.274557724s • [SLOW TEST:19.876 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:34:43.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Jan 5 11:34:43.462: INFO: Waiting up to 5m0s for pod "client-containers-5e299fbc-2faf-11ea-910c-0242ac110004" in namespace "e2e-tests-containers-48hmp" to be "success or failure" Jan 5 11:34:43.482: INFO: Pod "client-containers-5e299fbc-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 20.262733ms Jan 5 11:34:45.504: INFO: Pod "client-containers-5e299fbc-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0422377s Jan 5 11:34:47.549: INFO: Pod "client-containers-5e299fbc-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086872343s Jan 5 11:34:49.566: INFO: Pod "client-containers-5e299fbc-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103461785s Jan 5 11:34:51.629: INFO: Pod "client-containers-5e299fbc-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.166966855s Jan 5 11:34:53.648: INFO: Pod "client-containers-5e299fbc-2faf-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.186295675s STEP: Saw pod success Jan 5 11:34:53.648: INFO: Pod "client-containers-5e299fbc-2faf-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 11:34:53.654: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-5e299fbc-2faf-11ea-910c-0242ac110004 container test-container: STEP: delete the pod Jan 5 11:34:53.833: INFO: Waiting for pod client-containers-5e299fbc-2faf-11ea-910c-0242ac110004 to disappear Jan 5 11:34:53.933: INFO: Pod client-containers-5e299fbc-2faf-11ea-910c-0242ac110004 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:34:53.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-48hmp" for this suite. Jan 5 11:35:00.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:35:00.165: INFO: namespace: e2e-tests-containers-48hmp, resource: bindings, ignored listing per whitelist Jan 5 11:35:00.167: INFO: namespace e2e-tests-containers-48hmp deletion completed in 6.213952204s • [SLOW TEST:17.014 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:35:00.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0105 11:35:01.600685 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
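In contrast to the orphaning case earlier, this garbage-collector test deletes the Deployment without orphaning, so the collector is expected to remove the ReplicaSet and Pods as well; the "expected 0 rs, got 1 rs" STEP appears to be an intermediate poll taken before collection finished. A small sketch of the corresponding delete options follows, with background propagation shown as an illustrative choice (foreground is the other cascading mode).

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Cascading deletion: dependents keep their ownerReferences, so once
	// the Deployment is gone the garbage collector deletes the ReplicaSet
	// and its Pods.
	policy := metav1.DeletePropagationBackground // or metav1.DeletePropagationForeground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	out, _ := json.Marshal(opts)
	fmt.Println(string(out))
}
```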
Jan 5 11:35:01.600: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:35:01.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-mgnc4" for this suite. Jan 5 11:35:08.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:35:08.760: INFO: namespace: e2e-tests-gc-mgnc4, resource: bindings, ignored listing per whitelist Jan 5 11:35:08.811: INFO: namespace e2e-tests-gc-mgnc4 deletion completed in 7.204328284s • [SLOW TEST:8.643 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:35:08.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-6d705c8a-2faf-11ea-910c-0242ac110004 STEP: Creating secret with name s-test-opt-upd-6d705d9e-2faf-11ea-910c-0242ac110004 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-6d705c8a-2faf-11ea-910c-0242ac110004 STEP: Updating secret s-test-opt-upd-6d705d9e-2faf-11ea-910c-0242ac110004 STEP: Creating secret with name s-test-opt-create-6d705e06-2faf-11ea-910c-0242ac110004 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:35:27.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
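The Secrets case above mounts several secret-backed volumes, including one that references a secret marked optional and not yet created, then deletes, updates, and creates secrets and waits for the projected files to catch up. Below is a rough sketch of one such optional secret volume; names, paths, and the image are placeholders, and the real test wires up three volumes and three containers.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true

	// Volume backed by a secret that may not exist yet: with Optional set,
	// the pod still starts, and the kubelet populates (or removes) the
	// mounted files as the secret is created, updated, or deleted.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-watcher"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "creates-volume", // placeholder volume name
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-create",
						Optional:   &optional,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "creates-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/secret-volumes/create/data-1; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "creates-volume",
					MountPath: "/etc/secret-volumes/create",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```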
STEP: Destroying namespace "e2e-tests-secrets-vqhnz" for this suite. Jan 5 11:35:51.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:35:51.599: INFO: namespace: e2e-tests-secrets-vqhnz, resource: bindings, ignored listing per whitelist Jan 5 11:35:51.708: INFO: namespace e2e-tests-secrets-vqhnz deletion completed in 24.249405519s • [SLOW TEST:42.897 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:35:51.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 5 11:36:22.040: INFO: Container started at 2020-01-05 11:35:59 +0000 UTC, pod became ready at 2020-01-05 11:36:21 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:36:22.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-w8kgx" for this suite. 
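This probe test asserts that a container does not report Ready until its readiness probe's initial delay has elapsed, and that the probe never causes a restart (readiness probes only gate the Ready condition and service endpoints). A minimal sketch of such a pod follows, written against the v1.13-era corev1 types where the probe handler is the embedded Handler struct (renamed ProbeHandler in later releases); the delay, command, and image are illustrative rather than the test's actual values.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container is alive immediately, but Ready only flips to true
	// once InitialDelaySeconds has passed and the exec probe succeeds.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-delay"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "readiness-delay",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /tmp/ready; sleep 600"},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // corev1.ProbeHandler in newer APIs
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}},
					},
					InitialDelaySeconds: 20, // illustrative delay
					PeriodSeconds:       5,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```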
Jan 5 11:36:46.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:36:46.120: INFO: namespace: e2e-tests-container-probe-w8kgx, resource: bindings, ignored listing per whitelist Jan 5 11:36:46.294: INFO: namespace e2e-tests-container-probe-w8kgx deletion completed in 24.239978621s • [SLOW TEST:54.585 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:36:46.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-ffhqr Jan 5 11:36:56.537: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-ffhqr STEP: checking the pod's current state and verifying that restartCount is present Jan 5 11:36:56.549: INFO: Initial restart count of pod liveness-exec is 0 Jan 5 11:37:55.742: INFO: Restart count of pod e2e-tests-container-probe-ffhqr/liveness-exec is now 1 (59.193177829s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:37:55.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-ffhqr" for this suite. 
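Here the liveness probe runs `cat /tmp/health` inside the container; the container creates the file and later removes it, the probe begins failing, and the kubelet restarts the container, which is why the restart count climbs to 1 roughly a minute in. A sketch of that shape, with the same corev1 API-vintage caveat as above and illustrative timings, commands, and image:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// /tmp/health exists for the first ~10s, then disappears; once the
	// probe has failed FailureThreshold times the kubelet kills and
	// restarts the container, bumping restartCount.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "liveness",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo ok > /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // corev1.ProbeHandler in newer APIs
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```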
Jan 5 11:38:03.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:38:04.058: INFO: namespace: e2e-tests-container-probe-ffhqr, resource: bindings, ignored listing per whitelist Jan 5 11:38:04.145: INFO: namespace e2e-tests-container-probe-ffhqr deletion completed in 8.335207772s • [SLOW TEST:77.851 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:38:04.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-d5e49dde-2faf-11ea-910c-0242ac110004 STEP: Creating a pod to test consume configMaps Jan 5 11:38:04.282: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d5e7bde8-2faf-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-hxz98" to be "success or failure" Jan 5 11:38:04.451: INFO: Pod "pod-projected-configmaps-d5e7bde8-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 168.669368ms Jan 5 11:38:06.684: INFO: Pod "pod-projected-configmaps-d5e7bde8-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.401393209s Jan 5 11:38:08.705: INFO: Pod "pod-projected-configmaps-d5e7bde8-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.42251503s Jan 5 11:38:10.927: INFO: Pod "pod-projected-configmaps-d5e7bde8-2faf-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.644935656s Jan 5 11:38:12.983: INFO: Pod "pod-projected-configmaps-d5e7bde8-2faf-11ea-910c-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 8.700411821s Jan 5 11:38:14.994: INFO: Pod "pod-projected-configmaps-d5e7bde8-2faf-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.711351557s STEP: Saw pod success Jan 5 11:38:14.994: INFO: Pod "pod-projected-configmaps-d5e7bde8-2faf-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 11:38:14.999: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d5e7bde8-2faf-11ea-910c-0242ac110004 container projected-configmap-volume-test: STEP: delete the pod Jan 5 11:38:15.145: INFO: Waiting for pod pod-projected-configmaps-d5e7bde8-2faf-11ea-910c-0242ac110004 to disappear Jan 5 11:38:15.216: INFO: Pod pod-projected-configmaps-d5e7bde8-2faf-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:38:15.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hxz98" for this suite. Jan 5 11:38:23.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:38:23.344: INFO: namespace: e2e-tests-projected-hxz98, resource: bindings, ignored listing per whitelist Jan 5 11:38:23.416: INFO: namespace e2e-tests-projected-hxz98 deletion completed in 8.190916588s • [SLOW TEST:19.270 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:38:23.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-e16ad8d6-2faf-11ea-910c-0242ac110004 Jan 5 11:38:23.632: INFO: Pod name my-hostname-basic-e16ad8d6-2faf-11ea-910c-0242ac110004: Found 0 pods out of 1 Jan 5 11:38:29.555: INFO: Pod name my-hostname-basic-e16ad8d6-2faf-11ea-910c-0242ac110004: Found 1 pods out of 1 Jan 5 11:38:29.556: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e16ad8d6-2faf-11ea-910c-0242ac110004" are running Jan 5 11:38:33.592: INFO: Pod "my-hostname-basic-e16ad8d6-2faf-11ea-910c-0242ac110004-sm5tl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-05 11:38:23 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-05 11:38:23 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e16ad8d6-2faf-11ea-910c-0242ac110004]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-05 11:38:23 +0000 UTC 
Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e16ad8d6-2faf-11ea-910c-0242ac110004]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-05 11:38:23 +0000 UTC Reason: Message:}]) Jan 5 11:38:33.593: INFO: Trying to dial the pod Jan 5 11:38:38.642: INFO: Controller my-hostname-basic-e16ad8d6-2faf-11ea-910c-0242ac110004: Got expected result from replica 1 [my-hostname-basic-e16ad8d6-2faf-11ea-910c-0242ac110004-sm5tl]: "my-hostname-basic-e16ad8d6-2faf-11ea-910c-0242ac110004-sm5tl", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:38:38.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-xclpj" for this suite. Jan 5 11:38:46.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:38:46.750: INFO: namespace: e2e-tests-replication-controller-xclpj, resource: bindings, ignored listing per whitelist Jan 5 11:38:48.583: INFO: namespace e2e-tests-replication-controller-xclpj deletion completed in 9.933962453s • [SLOW TEST:25.167 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:38:48.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:38:58.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-rm8gx" for this suite. 
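The Kubelet hostAliases case schedules a busybox pod with entries under spec.hostAliases and verifies that the kubelet appends them to the container's managed /etc/hosts. A small sketch follows; the IP, hostnames, and image are placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Each HostAlias becomes an extra line in the kubelet-managed
	// /etc/hosts, e.g. "123.45.67.89  foo.remote bar.remote".
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
		Spec: corev1.PodSpec{
			HostAliases: []corev1.HostAlias{{
				IP:        "123.45.67.89",
				Hostnames: []string{"foo.remote", "bar.remote"},
			}},
			Containers: []corev1.Container{{
				Name:    "busybox-host-aliases",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/hosts; sleep 6000"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```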
Jan 5 11:39:45.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:39:45.152: INFO: namespace: e2e-tests-kubelet-test-rm8gx, resource: bindings, ignored listing per whitelist Jan 5 11:39:45.156: INFO: namespace e2e-tests-kubelet-test-rm8gx deletion completed in 46.180508839s • [SLOW TEST:56.572 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:39:45.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Jan 5 11:39:45.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 5 11:39:47.198: INFO: stderr: "" Jan 5 11:39:47.198: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:39:47.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jrgsr" for this suite. 
Jan 5 11:39:53.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:39:53.374: INFO: namespace: e2e-tests-kubectl-jrgsr, resource: bindings, ignored listing per whitelist Jan 5 11:39:53.577: INFO: namespace e2e-tests-kubectl-jrgsr deletion completed in 6.36821412s • [SLOW TEST:8.421 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:39:53.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-173ec053-2fb0-11ea-910c-0242ac110004 STEP: Creating a pod to test consume secrets Jan 5 11:39:53.941: INFO: Waiting up to 5m0s for pod "pod-secrets-1741c606-2fb0-11ea-910c-0242ac110004" in namespace "e2e-tests-secrets-7mw7d" to be "success or failure" Jan 5 11:39:54.218: INFO: Pod "pod-secrets-1741c606-2fb0-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 276.200379ms Jan 5 11:39:56.478: INFO: Pod "pod-secrets-1741c606-2fb0-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.537048201s Jan 5 11:39:58.515: INFO: Pod "pod-secrets-1741c606-2fb0-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.573422366s Jan 5 11:40:00.686: INFO: Pod "pod-secrets-1741c606-2fb0-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.744551508s Jan 5 11:40:02.727: INFO: Pod "pod-secrets-1741c606-2fb0-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.785262924s Jan 5 11:40:05.421: INFO: Pod "pod-secrets-1741c606-2fb0-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.479157524s STEP: Saw pod success Jan 5 11:40:05.421: INFO: Pod "pod-secrets-1741c606-2fb0-11ea-910c-0242ac110004" satisfied condition "success or failure" Jan 5 11:40:05.432: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-1741c606-2fb0-11ea-910c-0242ac110004 container secret-volume-test: STEP: delete the pod Jan 5 11:40:05.908: INFO: Waiting for pod pod-secrets-1741c606-2fb0-11ea-910c-0242ac110004 to disappear Jan 5 11:40:05.950: INFO: Pod pod-secrets-1741c606-2fb0-11ea-910c-0242ac110004 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:40:05.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-7mw7d" for this suite. Jan 5 11:40:12.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:40:12.200: INFO: namespace: e2e-tests-secrets-7mw7d, resource: bindings, ignored listing per whitelist Jan 5 11:40:12.230: INFO: namespace e2e-tests-secrets-7mw7d deletion completed in 6.263899693s • [SLOW TEST:18.653 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:40:12.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jan 5 11:40:12.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:12.809: INFO: stderr: "" Jan 5 11:40:12.809: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 5 11:40:12.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:12.954: INFO: stderr: "" Jan 5 11:40:12.954: INFO: stdout: "update-demo-nautilus-8crbl update-demo-nautilus-92cx9 " Jan 5 11:40:12.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8crbl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:13.144: INFO: stderr: "" Jan 5 11:40:13.144: INFO: stdout: "" Jan 5 11:40:13.144: INFO: update-demo-nautilus-8crbl is created but not running Jan 5 11:40:18.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:18.329: INFO: stderr: "" Jan 5 11:40:18.329: INFO: stdout: "update-demo-nautilus-8crbl update-demo-nautilus-92cx9 " Jan 5 11:40:18.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8crbl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:18.410: INFO: stderr: "" Jan 5 11:40:18.410: INFO: stdout: "" Jan 5 11:40:18.410: INFO: update-demo-nautilus-8crbl is created but not running Jan 5 11:40:23.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:23.700: INFO: stderr: "" Jan 5 11:40:23.700: INFO: stdout: "update-demo-nautilus-8crbl update-demo-nautilus-92cx9 " Jan 5 11:40:23.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8crbl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:23.811: INFO: stderr: "" Jan 5 11:40:23.812: INFO: stdout: "" Jan 5 11:40:23.812: INFO: update-demo-nautilus-8crbl is created but not running Jan 5 11:40:28.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:28.990: INFO: stderr: "" Jan 5 11:40:28.990: INFO: stdout: "update-demo-nautilus-8crbl update-demo-nautilus-92cx9 " Jan 5 11:40:28.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8crbl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:29.084: INFO: stderr: "" Jan 5 11:40:29.084: INFO: stdout: "true" Jan 5 11:40:29.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8crbl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:29.181: INFO: stderr: "" Jan 5 11:40:29.181: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 5 11:40:29.181: INFO: validating pod update-demo-nautilus-8crbl Jan 5 11:40:29.192: INFO: got data: { "image": "nautilus.jpg" } Jan 5 11:40:29.192: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 5 11:40:29.192: INFO: update-demo-nautilus-8crbl is verified up and running Jan 5 11:40:29.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92cx9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:29.293: INFO: stderr: "" Jan 5 11:40:29.293: INFO: stdout: "true" Jan 5 11:40:29.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92cx9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:29.429: INFO: stderr: "" Jan 5 11:40:29.429: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 5 11:40:29.429: INFO: validating pod update-demo-nautilus-92cx9 Jan 5 11:40:29.442: INFO: got data: { "image": "nautilus.jpg" } Jan 5 11:40:29.442: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 5 11:40:29.442: INFO: update-demo-nautilus-92cx9 is verified up and running STEP: scaling down the replication controller Jan 5 11:40:29.445: INFO: scanned /root for discovery docs: Jan 5 11:40:29.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:30.729: INFO: stderr: "" Jan 5 11:40:30.729: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 5 11:40:30.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:30.919: INFO: stderr: "" Jan 5 11:40:30.919: INFO: stdout: "update-demo-nautilus-8crbl update-demo-nautilus-92cx9 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 5 11:40:35.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:36.064: INFO: stderr: "" Jan 5 11:40:36.064: INFO: stdout: "update-demo-nautilus-92cx9 " Jan 5 11:40:36.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92cx9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:36.245: INFO: stderr: "" Jan 5 11:40:36.245: INFO: stdout: "true" Jan 5 11:40:36.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92cx9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:36.392: INFO: stderr: "" Jan 5 11:40:36.392: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 5 11:40:36.392: INFO: validating pod update-demo-nautilus-92cx9 Jan 5 11:40:36.403: INFO: got data: { "image": "nautilus.jpg" } Jan 5 11:40:36.403: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 5 11:40:36.403: INFO: update-demo-nautilus-92cx9 is verified up and running STEP: scaling up the replication controller Jan 5 11:40:36.406: INFO: scanned /root for discovery docs: Jan 5 11:40:36.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:37.939: INFO: stderr: "" Jan 5 11:40:37.939: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 5 11:40:37.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:38.339: INFO: stderr: "" Jan 5 11:40:38.339: INFO: stdout: "update-demo-nautilus-2mmcv update-demo-nautilus-92cx9 " Jan 5 11:40:38.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2mmcv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:38.479: INFO: stderr: "" Jan 5 11:40:38.479: INFO: stdout: "" Jan 5 11:40:38.479: INFO: update-demo-nautilus-2mmcv is created but not running Jan 5 11:40:43.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:43.683: INFO: stderr: "" Jan 5 11:40:43.683: INFO: stdout: "update-demo-nautilus-2mmcv update-demo-nautilus-92cx9 " Jan 5 11:40:43.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2mmcv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:43.905: INFO: stderr: "" Jan 5 11:40:43.905: INFO: stdout: "" Jan 5 11:40:43.905: INFO: update-demo-nautilus-2mmcv is created but not running Jan 5 11:40:48.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:49.057: INFO: stderr: "" Jan 5 11:40:49.057: INFO: stdout: "update-demo-nautilus-2mmcv update-demo-nautilus-92cx9 " Jan 5 11:40:49.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2mmcv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:49.223: INFO: stderr: "" Jan 5 11:40:49.223: INFO: stdout: "true" Jan 5 11:40:49.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2mmcv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:49.338: INFO: stderr: "" Jan 5 11:40:49.339: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 5 11:40:49.339: INFO: validating pod update-demo-nautilus-2mmcv Jan 5 11:40:49.352: INFO: got data: { "image": "nautilus.jpg" } Jan 5 11:40:49.352: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 5 11:40:49.352: INFO: update-demo-nautilus-2mmcv is verified up and running Jan 5 11:40:49.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92cx9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:49.458: INFO: stderr: "" Jan 5 11:40:49.458: INFO: stdout: "true" Jan 5 11:40:49.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92cx9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:49.591: INFO: stderr: "" Jan 5 11:40:49.591: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 5 11:40:49.591: INFO: validating pod update-demo-nautilus-92cx9 Jan 5 11:40:49.598: INFO: got data: { "image": "nautilus.jpg" } Jan 5 11:40:49.598: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 5 11:40:49.598: INFO: update-demo-nautilus-92cx9 is verified up and running STEP: using delete to clean up resources Jan 5 11:40:49.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:49.722: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 5 11:40:49.722: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 5 11:40:49.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-pbsjb' Jan 5 11:40:49.910: INFO: stderr: "No resources found.\n" Jan 5 11:40:49.911: INFO: stdout: "" Jan 5 11:40:49.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-pbsjb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 5 11:40:50.091: INFO: stderr: "" Jan 5 11:40:50.091: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:40:50.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-pbsjb" for this suite. 
Jan 5 11:41:14.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:41:14.331: INFO: namespace: e2e-tests-kubectl-pbsjb, resource: bindings, ignored listing per whitelist Jan 5 11:41:14.344: INFO: namespace e2e-tests-kubectl-pbsjb deletion completed in 24.230904681s • [SLOW TEST:62.114 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:41:14.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-fmgz5 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-fmgz5 STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-fmgz5 STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-fmgz5 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-fmgz5 Jan 5 11:41:28.831: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-fmgz5, name: ss-0, uid: 4efd6eed-2fb0-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete. Jan 5 11:41:29.096: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-fmgz5, name: ss-0, uid: 4efd6eed-2fb0-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Jan 5 11:41:29.124: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-fmgz5, name: ss-0, uid: 4efd6eed-2fb0-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. 
Jan 5 11:41:29.216: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-fmgz5 STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-fmgz5 STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-fmgz5 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 5 11:41:41.963: INFO: Deleting all statefulset in ns e2e-tests-statefulset-fmgz5 Jan 5 11:41:41.987: INFO: Scaling statefulset ss to 0 Jan 5 11:42:02.236: INFO: Waiting for statefulset status.replicas updated to 0 Jan 5 11:42:02.247: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:42:02.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-fmgz5" for this suite. Jan 5 11:42:10.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:42:10.516: INFO: namespace: e2e-tests-statefulset-fmgz5, resource: bindings, ignored listing per whitelist Jan 5 11:42:10.536: INFO: namespace e2e-tests-statefulset-fmgz5 deletion completed in 8.235331674s • [SLOW TEST:56.191 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:42:10.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 5 11:42:10.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jan 5 11:42:11.021: INFO: stderr: "" Jan 5 11:42:11.021: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:42:11.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wtnp4" for this suite. Jan 5 11:42:17.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:42:17.850: INFO: namespace: e2e-tests-kubectl-wtnp4, resource: bindings, ignored listing per whitelist Jan 5 11:42:17.919: INFO: namespace e2e-tests-kubectl-wtnp4 deletion completed in 6.88400537s • [SLOW TEST:7.384 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:42:17.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:42:18.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-ddfw6" for this suite. 
Jan 5 11:42:24.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:42:25.198: INFO: namespace: e2e-tests-services-ddfw6, resource: bindings, ignored listing per whitelist Jan 5 11:42:25.320: INFO: namespace e2e-tests-services-ddfw6 deletion completed in 7.201199805s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:7.401 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:42:25.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jan 5 11:42:25.486: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 5 11:42:25.510: INFO: Waiting for terminating namespaces to be deleted... Jan 5 11:42:25.514: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Jan 5 11:42:25.528: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Jan 5 11:42:25.528: INFO: Container kube-proxy ready: true, restart count 0 Jan 5 11:42:25.528: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 5 11:42:25.528: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Jan 5 11:42:25.528: INFO: Container weave ready: true, restart count 0 Jan 5 11:42:25.528: INFO: Container weave-npc ready: true, restart count 0 Jan 5 11:42:25.528: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Jan 5 11:42:25.528: INFO: Container coredns ready: true, restart count 0 Jan 5 11:42:25.528: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 5 11:42:25.528: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 5 11:42:25.528: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Jan 5 11:42:25.528: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Jan 5 11:42:25.528: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e6fadcc65440e3], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:42:26.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-ft2tr" for this suite. Jan 5 11:42:32.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:42:32.736: INFO: namespace: e2e-tests-sched-pred-ft2tr, resource: bindings, ignored listing per whitelist Jan 5 11:42:32.773: INFO: namespace e2e-tests-sched-pred-ft2tr deletion completed in 6.183257757s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.452 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:42:32.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Jan 5 11:42:43.518: INFO: Pod pod-hostip-76416f55-2fb0-11ea-910c-0242ac110004 has hostIP: 10.96.1.240 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 5 11:42:43.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-kxj9t" for this suite. 
Jan 5 11:43:07.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 11:43:07.808: INFO: namespace: e2e-tests-pods-kxj9t, resource: bindings, ignored listing per whitelist Jan 5 11:43:07.907: INFO: namespace e2e-tests-pods-kxj9t deletion completed in 24.35944457s • [SLOW TEST:35.134 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 5 11:43:07.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 5 11:43:08.181: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 136.295034ms)
Jan  5 11:43:08.199: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.683059ms)
Jan  5 11:43:08.206: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.781672ms)
Jan  5 11:43:08.216: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.826337ms)
Jan  5 11:43:08.221: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.223736ms)
Jan  5 11:43:08.228: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.508374ms)
Jan  5 11:43:08.238: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.595923ms)
Jan  5 11:43:08.244: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.283206ms)
Jan  5 11:43:08.248: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.257469ms)
Jan  5 11:43:08.252: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.960003ms)
Jan  5 11:43:08.256: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.130684ms)
Jan  5 11:43:08.261: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.579457ms)
Jan  5 11:43:08.270: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.53663ms)
Jan  5 11:43:08.283: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.752824ms)
Jan  5 11:43:08.288: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.139225ms)
Jan  5 11:43:08.293: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.511444ms)
Jan  5 11:43:08.298: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.905014ms)
Jan  5 11:43:08.308: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.918616ms)
Jan  5 11:43:08.314: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.857018ms)
Jan  5 11:43:08.317: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.710567ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:43:08.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-bj26c" for this suite.
Jan  5 11:43:14.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:43:14.656: INFO: namespace: e2e-tests-proxy-bj26c, resource: bindings, ignored listing per whitelist
Jan  5 11:43:14.668: INFO: namespace e2e-tests-proxy-bj26c deletion completed in 6.345865842s

• [SLOW TEST:6.760 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
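Note: the twenty numbered requests in the spec above all fetch the kubelet's /logs/ directory listing through the API server's node proxy subresource. A minimal manual equivalent, assuming the same node name and explicit kubelet port seen in this run, would be:

  # sketch only: fetch the kubelet log directory listing via the node proxy subresource
  kubectl --kubeconfig=/root/.kube/config get --raw \
    "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"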
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:43:14.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-8f053185-2fb0-11ea-910c-0242ac110004
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-8f053185-2fb0-11ea-910c-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:44:41.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5fs9j" for this suite.
Jan  5 11:45:07.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:45:07.494: INFO: namespace: e2e-tests-projected-5fs9j, resource: bindings, ignored listing per whitelist
Jan  5 11:45:07.544: INFO: namespace e2e-tests-projected-5fs9j deletion completed in 26.233053678s

• [SLOW TEST:112.876 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
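Note: the spec above creates a ConfigMap, projects it into a pod volume, updates the ConfigMap, and then waits for the change to show up in the mounted file, which is the slow part because the kubelet refreshes projected volume contents only on its periodic sync. A rough manual re-creation, with hypothetical resource names and mount path (not taken from this run), looks like:

  # hypothetical names and paths, for illustration only
  kubectl create configmap demo-config --from-literal=data-1=value-1
  kubectl apply -f pod-with-projected-configmap.yaml    # pod mounting demo-config via a projected volume at /etc/projected
  kubectl patch configmap demo-config -p '{"data":{"data-1":"value-2"}}'
  kubectl exec demo-pod -- cat /etc/projected/data-1    # eventually prints value-2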
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:45:07.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  5 11:45:07.819: INFO: Number of nodes with available pods: 0
Jan  5 11:45:07.819: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:09.339: INFO: Number of nodes with available pods: 0
Jan  5 11:45:09.339: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:10.088: INFO: Number of nodes with available pods: 0
Jan  5 11:45:10.088: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:11.469: INFO: Number of nodes with available pods: 0
Jan  5 11:45:11.469: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:11.843: INFO: Number of nodes with available pods: 0
Jan  5 11:45:11.843: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:12.877: INFO: Number of nodes with available pods: 0
Jan  5 11:45:12.877: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:13.867: INFO: Number of nodes with available pods: 0
Jan  5 11:45:13.867: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:14.853: INFO: Number of nodes with available pods: 0
Jan  5 11:45:14.853: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:16.178: INFO: Number of nodes with available pods: 0
Jan  5 11:45:16.178: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:16.854: INFO: Number of nodes with available pods: 0
Jan  5 11:45:16.854: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:17.864: INFO: Number of nodes with available pods: 0
Jan  5 11:45:17.864: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:18.841: INFO: Number of nodes with available pods: 1
Jan  5 11:45:18.841: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan  5 11:45:18.921: INFO: Number of nodes with available pods: 0
Jan  5 11:45:18.922: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:19.966: INFO: Number of nodes with available pods: 0
Jan  5 11:45:19.966: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:20.956: INFO: Number of nodes with available pods: 0
Jan  5 11:45:20.956: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:22.020: INFO: Number of nodes with available pods: 0
Jan  5 11:45:22.020: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:22.952: INFO: Number of nodes with available pods: 0
Jan  5 11:45:22.952: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:23.973: INFO: Number of nodes with available pods: 0
Jan  5 11:45:23.973: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:24.963: INFO: Number of nodes with available pods: 0
Jan  5 11:45:24.964: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:25.955: INFO: Number of nodes with available pods: 0
Jan  5 11:45:25.955: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:26.960: INFO: Number of nodes with available pods: 0
Jan  5 11:45:26.960: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:27.966: INFO: Number of nodes with available pods: 0
Jan  5 11:45:27.966: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:28.941: INFO: Number of nodes with available pods: 0
Jan  5 11:45:28.941: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:29.945: INFO: Number of nodes with available pods: 0
Jan  5 11:45:29.945: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:30.961: INFO: Number of nodes with available pods: 0
Jan  5 11:45:30.962: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:32.055: INFO: Number of nodes with available pods: 0
Jan  5 11:45:32.055: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:32.988: INFO: Number of nodes with available pods: 0
Jan  5 11:45:32.988: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:33.959: INFO: Number of nodes with available pods: 0
Jan  5 11:45:33.959: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:34.968: INFO: Number of nodes with available pods: 0
Jan  5 11:45:34.968: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:35.947: INFO: Number of nodes with available pods: 0
Jan  5 11:45:35.947: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:36.976: INFO: Number of nodes with available pods: 0
Jan  5 11:45:36.976: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:39.165: INFO: Number of nodes with available pods: 0
Jan  5 11:45:39.165: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:39.958: INFO: Number of nodes with available pods: 0
Jan  5 11:45:39.958: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:40.948: INFO: Number of nodes with available pods: 0
Jan  5 11:45:40.948: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:41.950: INFO: Number of nodes with available pods: 0
Jan  5 11:45:41.950: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:45:42.940: INFO: Number of nodes with available pods: 1
Jan  5 11:45:42.940: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-p9bkf, will wait for the garbage collector to delete the pods
Jan  5 11:45:43.064: INFO: Deleting DaemonSet.extensions daemon-set took: 53.998134ms
Jan  5 11:45:43.164: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.482863ms
Jan  5 11:45:51.179: INFO: Number of nodes with available pods: 0
Jan  5 11:45:51.179: INFO: Number of running nodes: 0, number of available pods: 0
Jan  5 11:45:51.197: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-p9bkf/daemonsets","resourceVersion":"17248070"},"items":null}

Jan  5 11:45:51.202: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-p9bkf/pods","resourceVersion":"17248070"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:45:51.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-p9bkf" for this suite.
Jan  5 11:45:59.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:45:59.328: INFO: namespace: e2e-tests-daemonsets-p9bkf, resource: bindings, ignored listing per whitelist
Jan  5 11:45:59.447: INFO: namespace e2e-tests-daemonsets-p9bkf deletion completed in 8.226653241s

• [SLOW TEST:51.902 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
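Note: the DaemonSet spec above first waits for one available daemon pod per node (a single node in this cluster), then deletes the daemon pod and waits for the controller to revive it before tearing the DaemonSet down. A hedged sketch of the same check, using an assumed label selector rather than the one the framework applies:

  # assumed selector; the framework labels its daemon pods itself
  kubectl get daemonset daemon-set -o wide
  kubectl delete pod -l name=daemon-set
  kubectl get pods -l name=daemon-set -o wide           # the controller recreates the deleted pod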
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:45:59.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  5 11:46:21.833: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  5 11:46:21.872: INFO: Pod pod-with-poststart-http-hook still exists
Jan  5 11:46:23.872: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  5 11:46:23.930: INFO: Pod pod-with-poststart-http-hook still exists
Jan  5 11:46:25.872: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  5 11:46:25.897: INFO: Pod pod-with-poststart-http-hook still exists
Jan  5 11:46:27.873: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  5 11:46:27.915: INFO: Pod pod-with-poststart-http-hook still exists
Jan  5 11:46:29.873: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  5 11:46:29.895: INFO: Pod pod-with-poststart-http-hook still exists
Jan  5 11:46:31.873: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  5 11:46:31.895: INFO: Pod pod-with-poststart-http-hook still exists
Jan  5 11:46:33.873: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  5 11:46:33.916: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:46:33.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mql7c" for this suite.
Jan  5 11:46:58.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:46:58.347: INFO: namespace: e2e-tests-container-lifecycle-hook-mql7c, resource: bindings, ignored listing per whitelist
Jan  5 11:46:58.432: INFO: namespace e2e-tests-container-lifecycle-hook-mql7c deletion completed in 24.488260232s

• [SLOW TEST:58.984 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
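Note: the lifecycle-hook spec above starts a helper pod to serve the HTTPGet target, creates a pod whose container declares a postStart httpGet hook pointing at that helper, verifies the hook fired, and then deletes the hooked pod (the repeated "still exists" lines are that deletion wait). The field being exercised can be inspected directly:

  # shows the schema of the hook type this spec exercises
  kubectl explain pod.spec.containers.lifecycle.postStart.httpGet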
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:46:58.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  5 11:46:58.990: INFO: Waiting up to 5m0s for pod "downward-api-149ae8ba-2fb1-11ea-910c-0242ac110004" in namespace "e2e-tests-downward-api-nt97j" to be "success or failure"
Jan  5 11:46:59.013: INFO: Pod "downward-api-149ae8ba-2fb1-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 23.324619ms
Jan  5 11:47:01.024: INFO: Pod "downward-api-149ae8ba-2fb1-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034500166s
Jan  5 11:47:03.044: INFO: Pod "downward-api-149ae8ba-2fb1-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053878488s
Jan  5 11:47:05.067: INFO: Pod "downward-api-149ae8ba-2fb1-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077443066s
Jan  5 11:47:07.078: INFO: Pod "downward-api-149ae8ba-2fb1-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087826648s
Jan  5 11:47:09.090: INFO: Pod "downward-api-149ae8ba-2fb1-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100549504s
STEP: Saw pod success
Jan  5 11:47:09.090: INFO: Pod "downward-api-149ae8ba-2fb1-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 11:47:09.095: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-149ae8ba-2fb1-11ea-910c-0242ac110004 container dapi-container: 
STEP: delete the pod
Jan  5 11:47:09.736: INFO: Waiting for pod downward-api-149ae8ba-2fb1-11ea-910c-0242ac110004 to disappear
Jan  5 11:47:09.766: INFO: Pod downward-api-149ae8ba-2fb1-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:47:09.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nt97j" for this suite.
Jan  5 11:47:15.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:47:16.008: INFO: namespace: e2e-tests-downward-api-nt97j, resource: bindings, ignored listing per whitelist
Jan  5 11:47:16.144: INFO: namespace e2e-tests-downward-api-nt97j deletion completed in 6.34618548s

• [SLOW TEST:17.711 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
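
The Downward API test above injects the pod's own UID into the container environment via an env var fieldRef. A minimal sketch of such a pod follows; the container name dapi-container matches the log, while the image, command, and env var name are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                # illustrative image
    command: ["sh", "-c", "env"]  # print the environment so the test can read it from the logs
    env:
    - name: POD_UID               # illustrative name for the injected variable
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid # the pod's own UID, resolved by the Downward API
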
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:47:16.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan  5 11:47:29.553: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:47:30.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-sfsv4" for this suite.
Jan  5 11:47:57.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:47:57.645: INFO: namespace: e2e-tests-replicaset-sfsv4, resource: bindings, ignored listing per whitelist
Jan  5 11:47:57.702: INFO: namespace e2e-tests-replicaset-sfsv4 deletion completed in 27.062224318s

• [SLOW TEST:41.557 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
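
The adoption/release scenario above relies only on label selection: a bare pod that already carries the ReplicaSet's selector label is adopted (it gains an ownerReference), and relabeling it so it no longer matches releases it again, after which the controller creates a replacement to restore the replica count. A minimal sketch under that assumption, with illustrative image and label values:

# A bare pod created first, carrying the label the ReplicaSet will select on
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: pod-adoption-release
    image: nginx:1.14-alpine   # illustrative
---
# A ReplicaSet whose selector matches the pod above, so it adopts it on creation
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: nginx:1.14-alpine

Changing the pod's name label to something the selector no longer matches removes it from the ReplicaSet's ownership, which is the "Then the pod is released" step above; the controller then spins up a new pod to keep replicas at 1.
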
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:47:57.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  5 11:47:57.920: INFO: Creating deployment "nginx-deployment"
Jan  5 11:47:57.931: INFO: Waiting for observed generation 1
Jan  5 11:48:00.731: INFO: Waiting for all required pods to come up
Jan  5 11:48:00.750: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan  5 11:48:43.317: INFO: Waiting for deployment "nginx-deployment" to complete
Jan  5 11:48:43.335: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan  5 11:48:43.395: INFO: Updating deployment nginx-deployment
Jan  5 11:48:43.395: INFO: Waiting for observed generation 2
Jan  5 11:48:47.097: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan  5 11:48:47.121: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan  5 11:48:47.454: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  5 11:48:47.748: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan  5 11:48:47.748: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan  5 11:48:47.755: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  5 11:48:47.776: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan  5 11:48:47.776: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan  5 11:48:47.797: INFO: Updating deployment nginx-deployment
Jan  5 11:48:47.797: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan  5 11:48:49.353: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan  5 11:48:50.147: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
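
The replica counts above follow from the rolling-update bounds visible in the object dump printed by [AfterEach] below (maxUnavailable: 2, maxSurge: 3). While the rollout to the non-existent image nginx:404 is wedged, the old ReplicaSet must keep at least 10 - 2 = 8 pods available and the new one can create at most 10 + 3 - 8 = 5, giving the 8/5 split. Scaling the Deployment from 10 to 30 raises the ceiling to 30 + 3 = 33, and the extra 33 - 13 = 20 replicas are distributed roughly in proportion to the existing 8:5 sizes (exact rounding is up to the controller), which is how the ReplicaSets end at .spec.replicas = 20 and 13. A minimal sketch of the Deployment spec involved, with field values taken from the dump below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 30                  # scaled up from 10 mid-rollout
  selector:
    matchLabels:
      name: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3               # at most 3 pods above the desired count during a rollout
      maxUnavailable: 2         # at most 2 pods below the desired available count
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:404        # deliberately non-existent image that wedges the rollout
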
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  5 11:48:50.402: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9vfpk/deployments/nginx-deployment,UID:37bf8aba-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248579,Generation:3,CreationTimestamp:2020-01-05 11:47:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Progressing True 2020-01-05 11:48:44 +0000 UTC 2020-01-05 11:47:57 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-01-05 11:48:50 +0000 UTC 2020-01-05 11:48:50 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan  5 11:48:50.679: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9vfpk/replicasets/nginx-deployment-5c98f8fb5,UID:52d9f3fb-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248577,Generation:3,CreationTimestamp:2020-01-05 11:48:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 37bf8aba-2fb1-11ea-a994-fa163e34d433 0xc001a67f37 0xc001a67f38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  5 11:48:50.679: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan  5 11:48:50.680: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9vfpk/replicasets/nginx-deployment-85ddf47c5d,UID:37c41991-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248573,Generation:3,CreationTimestamp:2020-01-05 11:47:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 37bf8aba-2fb1-11ea-a994-fa163e34d433 0xc001a67ff7 0xc001a67ff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan  5 11:48:51.989: INFO: Pod "nginx-deployment-5c98f8fb5-7dkds" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7dkds,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-5c98f8fb5-7dkds,UID:5720e160-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248597,Generation:0,CreationTimestamp:2020-01-05 11:48:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52d9f3fb-2fb1-11ea-a994-fa163e34d433 0xc001a10e37 0xc001a10e38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001a10f10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a10f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:51.990: INFO: Pod "nginx-deployment-5c98f8fb5-bb27l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bb27l,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-5c98f8fb5-bb27l,UID:572a2c44-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248595,Generation:0,CreationTimestamp:2020-01-05 11:48:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52d9f3fb-2fb1-11ea-a994-fa163e34d433 0xc0000553d7 0xc0000553d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0003120b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0003121b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:51.991: INFO: Pod "nginx-deployment-5c98f8fb5-bdgh2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bdgh2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-5c98f8fb5-bdgh2,UID:52f291e0-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248562,Generation:0,CreationTimestamp:2020-01-05 11:48:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52d9f3fb-2fb1-11ea-a994-fa163e34d433 0xc000313d80 0xc000313d81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00059bcc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00059bd30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-05 11:48:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:51.991: INFO: Pod "nginx-deployment-5c98f8fb5-cmm2r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cmm2r,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-5c98f8fb5-cmm2r,UID:572a9d4f-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248592,Generation:0,CreationTimestamp:2020-01-05 11:48:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52d9f3fb-2fb1-11ea-a994-fa163e34d433 0xc00059bfc7 0xc00059bfc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0003b89b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0003b9b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:51.992: INFO: Pod "nginx-deployment-5c98f8fb5-czknl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-czknl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-5c98f8fb5-czknl,UID:52f404e3-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248558,Generation:0,CreationTimestamp:2020-01-05 11:48:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52d9f3fb-2fb1-11ea-a994-fa163e34d433 0xc000bd2050 0xc000bd2051}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd20c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd20e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-05 11:48:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:51.993: INFO: Pod "nginx-deployment-5c98f8fb5-ds2k4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ds2k4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-5c98f8fb5-ds2k4,UID:52ea5468-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248547,Generation:0,CreationTimestamp:2020-01-05 11:48:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52d9f3fb-2fb1-11ea-a994-fa163e34d433 0xc000bd2337 0xc000bd2338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd2460} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd2480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-05 11:48:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:51.994: INFO: Pod "nginx-deployment-5c98f8fb5-lpzl4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lpzl4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-5c98f8fb5-lpzl4,UID:5729ff74-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248594,Generation:0,CreationTimestamp:2020-01-05 11:48:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52d9f3fb-2fb1-11ea-a994-fa163e34d433 0xc000bd2557 0xc000bd2558}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd25f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd2780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:51.994: INFO: Pod "nginx-deployment-5c98f8fb5-lwrt9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lwrt9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-5c98f8fb5-lwrt9,UID:57061a2b-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248588,Generation:0,CreationTimestamp:2020-01-05 11:48:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52d9f3fb-2fb1-11ea-a994-fa163e34d433 0xc000bd27f0 0xc000bd27f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd2970} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd2990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:51.995: INFO: Pod "nginx-deployment-5c98f8fb5-mx2ps" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mx2ps,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-5c98f8fb5-mx2ps,UID:532cc3bb-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248564,Generation:0,CreationTimestamp:2020-01-05 11:48:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52d9f3fb-2fb1-11ea-a994-fa163e34d433 0xc000bd2a17 0xc000bd2a18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd2a80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd2aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-05 11:48:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:51.996: INFO: Pod "nginx-deployment-5c98f8fb5-rcfcr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rcfcr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-5c98f8fb5-rcfcr,UID:5336134d-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248567,Generation:0,CreationTimestamp:2020-01-05 11:48:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52d9f3fb-2fb1-11ea-a994-fa163e34d433 0xc000bd2d27 0xc000bd2d28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd2db0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd2de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-05 11:48:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:51.997: INFO: Pod "nginx-deployment-5c98f8fb5-vzq7g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vzq7g,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-5c98f8fb5-vzq7g,UID:571fc5dd-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248596,Generation:0,CreationTimestamp:2020-01-05 11:48:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52d9f3fb-2fb1-11ea-a994-fa163e34d433 0xc000bd2eb7 0xc000bd2eb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd2fc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd2fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:51.997: INFO: Pod "nginx-deployment-5c98f8fb5-zl72s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zl72s,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-5c98f8fb5-zl72s,UID:572982b5-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248591,Generation:0,CreationTimestamp:2020-01-05 11:48:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 52d9f3fb-2fb1-11ea-a994-fa163e34d433 0xc000bd3077 0xc000bd3078}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd30f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd3110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:51.998: INFO: Pod "nginx-deployment-85ddf47c5d-8wg72" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8wg72,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-85ddf47c5d-8wg72,UID:57204986-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248590,Generation:0,CreationTimestamp:2020-01-05 11:48:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 37c41991-2fb1-11ea-a994-fa163e34d433 0xc000bd31e0 0xc000bd31e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd3240} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd3260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:51.999: INFO: Pod "nginx-deployment-85ddf47c5d-9q249" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9q249,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-85ddf47c5d-9q249,UID:38218ac3-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248483,Generation:0,CreationTimestamp:2020-01-05 11:47:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 37c41991-2fb1-11ea-a994-fa163e34d433 0xc000bd32d0 0xc000bd32d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd3330} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd3350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:47:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:47:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-01-05 11:47:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 11:48:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b40e88ee07c9347efefc25b626bf88ba7ec3a1ee428432cc75c4a6bbe8fef966}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:52.000: INFO: Pod "nginx-deployment-85ddf47c5d-bqc74" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bqc74,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-85ddf47c5d-bqc74,UID:37fdd1ff-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248488,Generation:0,CreationTimestamp:2020-01-05 11:47:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 37c41991-2fb1-11ea-a994-fa163e34d433 0xc000bd3417 0xc000bd3418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd3490} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd34b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:47:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:37 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:47:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-01-05 11:47:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 11:48:36 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://72f9871452c0c62b75062566c6b6e9cf7247eb1bdd378c8863a908d53aae57ec}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:52.001: INFO: Pod "nginx-deployment-85ddf47c5d-bv588" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bv588,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-85ddf47c5d-bv588,UID:5706bd72-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248584,Generation:0,CreationTimestamp:2020-01-05 11:48:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 37c41991-2fb1-11ea-a994-fa163e34d433 0xc000bd35a7 0xc000bd35a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd3630} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd3650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:52.001: INFO: Pod "nginx-deployment-85ddf47c5d-cnnhr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cnnhr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-85ddf47c5d-cnnhr,UID:37fcb8d1-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248491,Generation:0,CreationTimestamp:2020-01-05 11:47:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 37c41991-2fb1-11ea-a994-fa163e34d433 0xc000bd36c7 0xc000bd36c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd37a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd37c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:47:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:37 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:47:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-05 11:47:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 11:48:36 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://652ca931c2ac394e18c62775968384efde63de827dd4829de677791cebfd8b3e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:52.002: INFO: Pod "nginx-deployment-85ddf47c5d-d2x4l" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-d2x4l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-85ddf47c5d-d2x4l,UID:37fcbb57-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248473,Generation:0,CreationTimestamp:2020-01-05 11:47:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 37c41991-2fb1-11ea-a994-fa163e34d433 0xc000bd3897 0xc000bd3898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd3980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd39a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:47:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:47:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-05 11:47:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 11:48:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://da6ec2bc12bcf17a31d152ce36ce08cd73145ed29391f0b67bd4b595bbf1f0f2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:52.003: INFO: Pod "nginx-deployment-85ddf47c5d-gxqs9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gxqs9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-85ddf47c5d-gxqs9,UID:37cbc3e6-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248478,Generation:0,CreationTimestamp:2020-01-05 11:47:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 37c41991-2fb1-11ea-a994-fa163e34d433 0xc000bd3b57 0xc000bd3b58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd3c10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd3cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:47:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:47:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-05 11:47:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 11:48:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d73e62ebea8657cf5c589c75c7901b9ed7f2562b74af9c9e7268bf3c7adbf61d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:52.004: INFO: Pod "nginx-deployment-85ddf47c5d-htp4b" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-htp4b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-85ddf47c5d-htp4b,UID:3821572e-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248506,Generation:0,CreationTimestamp:2020-01-05 11:47:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 37c41991-2fb1-11ea-a994-fa163e34d433 0xc000bd3e17 0xc000bd3e18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000bd3e90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000bd3eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:47:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-05 11:48:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 11:48:36 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2a55d9c5f41eb8189993d6060550818d52fc31a355b809b84fb5316da4f38def}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:52.005: INFO: Pod "nginx-deployment-85ddf47c5d-lr74f" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lr74f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-85ddf47c5d-lr74f,UID:37f6d806-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248498,Generation:0,CreationTimestamp:2020-01-05 11:47:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 37c41991-2fb1-11ea-a994-fa163e34d433 0xc00112c517 0xc00112c518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00112c650} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00112c690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:47:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:47:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-05 11:47:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 11:48:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6203b60bc1b334aeced3e3f13c4a8af075080ae85e95a83dbd2c1512bcc7efcb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:52.006: INFO: Pod "nginx-deployment-85ddf47c5d-lst7g" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lst7g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-85ddf47c5d-lst7g,UID:37fc3600-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248502,Generation:0,CreationTimestamp:2020-01-05 11:47:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 37c41991-2fb1-11ea-a994-fa163e34d433 0xc00112cf37 0xc00112cf38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00112d020} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00112d0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:47:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:48:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 11:47:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-05 11:47:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 11:48:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b103ee7d0013688142d49a85f4c1b1db08833631894054dd1c5a7458d74dcce4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 11:48:52.007: INFO: Pod "nginx-deployment-85ddf47c5d-p6tjg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-p6tjg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-9vfpk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9vfpk/pods/nginx-deployment-85ddf47c5d-p6tjg,UID:572027bb-2fb1-11ea-a994-fa163e34d433,ResourceVersion:17248589,Generation:0,CreationTimestamp:2020-01-05 11:48:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 37c41991-2fb1-11ea-a994-fa163e34d433 0xc00112d437 0xc00112d438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s7kb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s7kb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5s7kb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00112d4f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00112d510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:48:52.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-9vfpk" for this suite.
Jan  5 11:49:23.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:49:29.306: INFO: namespace: e2e-tests-deployment-9vfpk, resource: bindings, ignored listing per whitelist
Jan  5 11:49:29.418: INFO: namespace e2e-tests-deployment-9vfpk deletion completed in 37.065813386s

• [SLOW TEST:91.715 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
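Note: the proportional-scaling behaviour exercised by the Deployment test above can be reproduced by hand. The following is a minimal sketch with a current kubectl, not the test's own code; the deployment name, label and replica counts are illustrative. The idea is to start an image update so that two ReplicaSets coexist, then scale the Deployment up and observe the extra replicas being split between the old and new ReplicaSets (the log above records exactly that mix of available and not-yet-available pods).

  # Create a Deployment, begin an image update, then scale up mid-rollout.
  kubectl create deployment nginx-deployment --image=docker.io/library/nginx:1.14-alpine --replicas=10
  kubectl set image deployment/nginx-deployment nginx=docker.io/library/nginx:1.15-alpine
  kubectl scale deployment/nginx-deployment --replicas=30
  # Both ReplicaSets should now hold a proportional share of the added replicas.
  kubectl get rs -l app=nginx-deployment
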
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:49:29.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-475dk
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan  5 11:49:32.946: INFO: Found 0 stateful pods, waiting for 3
Jan  5 11:49:42.964: INFO: Found 1 stateful pods, waiting for 3
Jan  5 11:49:52.963: INFO: Found 1 stateful pods, waiting for 3
Jan  5 11:50:02.991: INFO: Found 2 stateful pods, waiting for 3
Jan  5 11:50:13.087: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 11:50:13.087: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 11:50:13.088: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  5 11:50:22.968: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 11:50:22.968: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 11:50:22.968: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 11:50:23.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-475dk ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 11:50:23.676: INFO: stderr: "I0105 11:50:23.262008    2240 log.go:172] (0xc0008602c0) (0xc00071a640) Create stream\nI0105 11:50:23.262115    2240 log.go:172] (0xc0008602c0) (0xc00071a640) Stream added, broadcasting: 1\nI0105 11:50:23.266122    2240 log.go:172] (0xc0008602c0) Reply frame received for 1\nI0105 11:50:23.266172    2240 log.go:172] (0xc0008602c0) (0xc00065eb40) Create stream\nI0105 11:50:23.266194    2240 log.go:172] (0xc0008602c0) (0xc00065eb40) Stream added, broadcasting: 3\nI0105 11:50:23.267417    2240 log.go:172] (0xc0008602c0) Reply frame received for 3\nI0105 11:50:23.267441    2240 log.go:172] (0xc0008602c0) (0xc000676000) Create stream\nI0105 11:50:23.267449    2240 log.go:172] (0xc0008602c0) (0xc000676000) Stream added, broadcasting: 5\nI0105 11:50:23.268465    2240 log.go:172] (0xc0008602c0) Reply frame received for 5\nI0105 11:50:23.493582    2240 log.go:172] (0xc0008602c0) Data frame received for 3\nI0105 11:50:23.493631    2240 log.go:172] (0xc00065eb40) (3) Data frame handling\nI0105 11:50:23.493650    2240 log.go:172] (0xc00065eb40) (3) Data frame sent\nI0105 11:50:23.667970    2240 log.go:172] (0xc0008602c0) Data frame received for 1\nI0105 11:50:23.668309    2240 log.go:172] (0xc0008602c0) (0xc00065eb40) Stream removed, broadcasting: 3\nI0105 11:50:23.668474    2240 log.go:172] (0xc00071a640) (1) Data frame handling\nI0105 11:50:23.668712    2240 log.go:172] (0xc00071a640) (1) Data frame sent\nI0105 11:50:23.668810    2240 log.go:172] (0xc0008602c0) (0xc00071a640) Stream removed, broadcasting: 1\nI0105 11:50:23.669273    2240 log.go:172] (0xc0008602c0) (0xc000676000) Stream removed, broadcasting: 5\nI0105 11:50:23.669313    2240 log.go:172] (0xc0008602c0) Go away received\nI0105 11:50:23.669583    2240 log.go:172] (0xc0008602c0) (0xc00071a640) Stream removed, broadcasting: 1\nI0105 11:50:23.669690    2240 log.go:172] (0xc0008602c0) (0xc00065eb40) Stream removed, broadcasting: 3\nI0105 11:50:23.669791    2240 log.go:172] (0xc0008602c0) (0xc000676000) Stream removed, broadcasting: 5\n"
Jan  5 11:50:23.676: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 11:50:23.676: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  5 11:50:33.855: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan  5 11:50:44.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-475dk ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 11:50:44.733: INFO: stderr: "I0105 11:50:44.268421    2262 log.go:172] (0xc0006fc370) (0xc0005d12c0) Create stream\nI0105 11:50:44.268815    2262 log.go:172] (0xc0006fc370) (0xc0005d12c0) Stream added, broadcasting: 1\nI0105 11:50:44.275306    2262 log.go:172] (0xc0006fc370) Reply frame received for 1\nI0105 11:50:44.275356    2262 log.go:172] (0xc0006fc370) (0xc0007b6000) Create stream\nI0105 11:50:44.275409    2262 log.go:172] (0xc0006fc370) (0xc0007b6000) Stream added, broadcasting: 3\nI0105 11:50:44.276447    2262 log.go:172] (0xc0006fc370) Reply frame received for 3\nI0105 11:50:44.276474    2262 log.go:172] (0xc0006fc370) (0xc00064a000) Create stream\nI0105 11:50:44.276485    2262 log.go:172] (0xc0006fc370) (0xc00064a000) Stream added, broadcasting: 5\nI0105 11:50:44.278348    2262 log.go:172] (0xc0006fc370) Reply frame received for 5\nI0105 11:50:44.388462    2262 log.go:172] (0xc0006fc370) Data frame received for 3\nI0105 11:50:44.388528    2262 log.go:172] (0xc0007b6000) (3) Data frame handling\nI0105 11:50:44.388541    2262 log.go:172] (0xc0007b6000) (3) Data frame sent\nI0105 11:50:44.723795    2262 log.go:172] (0xc0006fc370) (0xc0007b6000) Stream removed, broadcasting: 3\nI0105 11:50:44.724651    2262 log.go:172] (0xc0006fc370) Data frame received for 1\nI0105 11:50:44.724740    2262 log.go:172] (0xc0005d12c0) (1) Data frame handling\nI0105 11:50:44.724772    2262 log.go:172] (0xc0005d12c0) (1) Data frame sent\nI0105 11:50:44.724824    2262 log.go:172] (0xc0006fc370) (0xc0005d12c0) Stream removed, broadcasting: 1\nI0105 11:50:44.724863    2262 log.go:172] (0xc0006fc370) (0xc00064a000) Stream removed, broadcasting: 5\nI0105 11:50:44.724887    2262 log.go:172] (0xc0006fc370) Go away received\nI0105 11:50:44.725055    2262 log.go:172] (0xc0006fc370) (0xc0005d12c0) Stream removed, broadcasting: 1\nI0105 11:50:44.725142    2262 log.go:172] (0xc0006fc370) (0xc0007b6000) Stream removed, broadcasting: 3\nI0105 11:50:44.725160    2262 log.go:172] (0xc0006fc370) (0xc00064a000) Stream removed, broadcasting: 5\n"
Jan  5 11:50:44.733: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 11:50:44.733: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 11:50:54.845: INFO: Waiting for StatefulSet e2e-tests-statefulset-475dk/ss2 to complete update
Jan  5 11:50:54.845: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 11:50:54.845: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 11:51:04.879: INFO: Waiting for StatefulSet e2e-tests-statefulset-475dk/ss2 to complete update
Jan  5 11:51:04.879: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 11:51:04.879: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 11:51:14.882: INFO: Waiting for StatefulSet e2e-tests-statefulset-475dk/ss2 to complete update
Jan  5 11:51:14.883: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 11:51:14.883: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 11:51:24.974: INFO: Waiting for StatefulSet e2e-tests-statefulset-475dk/ss2 to complete update
Jan  5 11:51:24.974: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 11:51:34.893: INFO: Waiting for StatefulSet e2e-tests-statefulset-475dk/ss2 to complete update
Jan  5 11:51:34.893: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 11:51:45.126: INFO: Waiting for StatefulSet e2e-tests-statefulset-475dk/ss2 to complete update
STEP: Rolling back to a previous revision
Jan  5 11:51:54.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-475dk ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 11:51:55.617: INFO: stderr: "I0105 11:51:55.172454    2283 log.go:172] (0xc00014c6e0) (0xc00073a640) Create stream\nI0105 11:51:55.172657    2283 log.go:172] (0xc00014c6e0) (0xc00073a640) Stream added, broadcasting: 1\nI0105 11:51:55.179062    2283 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0105 11:51:55.179115    2283 log.go:172] (0xc00014c6e0) (0xc00058adc0) Create stream\nI0105 11:51:55.179123    2283 log.go:172] (0xc00014c6e0) (0xc00058adc0) Stream added, broadcasting: 3\nI0105 11:51:55.180187    2283 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0105 11:51:55.180217    2283 log.go:172] (0xc00014c6e0) (0xc000588000) Create stream\nI0105 11:51:55.180229    2283 log.go:172] (0xc00014c6e0) (0xc000588000) Stream added, broadcasting: 5\nI0105 11:51:55.181025    2283 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0105 11:51:55.464993    2283 log.go:172] (0xc00014c6e0) Data frame received for 3\nI0105 11:51:55.465046    2283 log.go:172] (0xc00058adc0) (3) Data frame handling\nI0105 11:51:55.465065    2283 log.go:172] (0xc00058adc0) (3) Data frame sent\nI0105 11:51:55.571977    2283 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0105 11:51:55.572045    2283 log.go:172] (0xc00073a640) (1) Data frame handling\nI0105 11:51:55.572084    2283 log.go:172] (0xc00073a640) (1) Data frame sent\nI0105 11:51:55.572112    2283 log.go:172] (0xc00014c6e0) (0xc00073a640) Stream removed, broadcasting: 1\nI0105 11:51:55.605273    2283 log.go:172] (0xc00014c6e0) (0xc00058adc0) Stream removed, broadcasting: 3\nI0105 11:51:55.605938    2283 log.go:172] (0xc00014c6e0) (0xc000588000) Stream removed, broadcasting: 5\nI0105 11:51:55.606130    2283 log.go:172] (0xc00014c6e0) Go away received\nI0105 11:51:55.606186    2283 log.go:172] (0xc00014c6e0) (0xc00073a640) Stream removed, broadcasting: 1\nI0105 11:51:55.606275    2283 log.go:172] (0xc00014c6e0) (0xc00058adc0) Stream removed, broadcasting: 3\nI0105 11:51:55.606360    2283 log.go:172] (0xc00014c6e0) (0xc000588000) Stream removed, broadcasting: 5\n"
Jan  5 11:51:55.618: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 11:51:55.618: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 11:52:05.784: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan  5 11:52:15.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-475dk ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 11:52:16.971: INFO: stderr: "I0105 11:52:16.089922    2304 log.go:172] (0xc00015c790) (0xc0007b52c0) Create stream\nI0105 11:52:16.090075    2304 log.go:172] (0xc00015c790) (0xc0007b52c0) Stream added, broadcasting: 1\nI0105 11:52:16.096122    2304 log.go:172] (0xc00015c790) Reply frame received for 1\nI0105 11:52:16.096168    2304 log.go:172] (0xc00015c790) (0xc0007b5360) Create stream\nI0105 11:52:16.096187    2304 log.go:172] (0xc00015c790) (0xc0007b5360) Stream added, broadcasting: 3\nI0105 11:52:16.098543    2304 log.go:172] (0xc00015c790) Reply frame received for 3\nI0105 11:52:16.098620    2304 log.go:172] (0xc00015c790) (0xc0007b5400) Create stream\nI0105 11:52:16.098630    2304 log.go:172] (0xc00015c790) (0xc0007b5400) Stream added, broadcasting: 5\nI0105 11:52:16.100293    2304 log.go:172] (0xc00015c790) Reply frame received for 5\nI0105 11:52:16.741139    2304 log.go:172] (0xc00015c790) Data frame received for 3\nI0105 11:52:16.741180    2304 log.go:172] (0xc0007b5360) (3) Data frame handling\nI0105 11:52:16.741198    2304 log.go:172] (0xc0007b5360) (3) Data frame sent\nI0105 11:52:16.956847    2304 log.go:172] (0xc00015c790) Data frame received for 1\nI0105 11:52:16.957205    2304 log.go:172] (0xc00015c790) (0xc0007b5400) Stream removed, broadcasting: 5\nI0105 11:52:16.957281    2304 log.go:172] (0xc0007b52c0) (1) Data frame handling\nI0105 11:52:16.957310    2304 log.go:172] (0xc0007b52c0) (1) Data frame sent\nI0105 11:52:16.957446    2304 log.go:172] (0xc00015c790) (0xc0007b5360) Stream removed, broadcasting: 3\nI0105 11:52:16.957493    2304 log.go:172] (0xc00015c790) (0xc0007b52c0) Stream removed, broadcasting: 1\nI0105 11:52:16.957508    2304 log.go:172] (0xc00015c790) Go away received\nI0105 11:52:16.958056    2304 log.go:172] (0xc00015c790) (0xc0007b52c0) Stream removed, broadcasting: 1\nI0105 11:52:16.958097    2304 log.go:172] (0xc00015c790) (0xc0007b5360) Stream removed, broadcasting: 3\nI0105 11:52:16.958116    2304 log.go:172] (0xc00015c790) (0xc0007b5400) Stream removed, broadcasting: 5\n"
Jan  5 11:52:16.971: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 11:52:16.971: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 11:52:27.098: INFO: Waiting for StatefulSet e2e-tests-statefulset-475dk/ss2 to complete update
Jan  5 11:52:27.098: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 11:52:27.098: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 11:52:27.098: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 11:52:37.134: INFO: Waiting for StatefulSet e2e-tests-statefulset-475dk/ss2 to complete update
Jan  5 11:52:37.134: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 11:52:37.134: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 11:52:47.109: INFO: Waiting for StatefulSet e2e-tests-statefulset-475dk/ss2 to complete update
Jan  5 11:52:47.109: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 11:52:47.109: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 11:52:57.129: INFO: Waiting for StatefulSet e2e-tests-statefulset-475dk/ss2 to complete update
Jan  5 11:52:57.129: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 11:53:07.141: INFO: Waiting for StatefulSet e2e-tests-statefulset-475dk/ss2 to complete update
Jan  5 11:53:07.141: INFO: Waiting for Pod e2e-tests-statefulset-475dk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 11:53:17.135: INFO: Waiting for StatefulSet e2e-tests-statefulset-475dk/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  5 11:53:27.128: INFO: Deleting all statefulset in ns e2e-tests-statefulset-475dk
Jan  5 11:53:27.135: INFO: Scaling statefulset ss2 to 0
Jan  5 11:53:47.189: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 11:53:47.202: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:53:47.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-475dk" for this suite.
Jan  5 11:53:55.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:53:55.417: INFO: namespace: e2e-tests-statefulset-475dk, resource: bindings, ignored listing per whitelist
Jan  5 11:53:55.706: INFO: namespace e2e-tests-statefulset-475dk deletion completed in 8.46062505s

• [SLOW TEST:266.288 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
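Note: the StatefulSet test above drives the update and rollback through the API; an equivalent flow by hand would look roughly like the sketch below, assuming the container in the ss2 pod template is named nginx (the test's namespace is kept for concreteness, everything else is illustrative).

  # Trigger a RollingUpdate of the StatefulSet template image, then roll it back.
  kubectl -n e2e-tests-statefulset-475dk set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
  kubectl -n e2e-tests-statefulset-475dk rollout status statefulset/ss2
  # Revert to the previous controller revision (the "Rolling back" phase in the log).
  kubectl -n e2e-tests-statefulset-475dk rollout undo statefulset/ss2
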
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:53:55.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  5 11:53:55.986: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d29b9e4-2fb2-11ea-910c-0242ac110004" in namespace "e2e-tests-downward-api-9g275" to be "success or failure"
Jan  5 11:53:56.002: INFO: Pod "downwardapi-volume-0d29b9e4-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.747558ms
Jan  5 11:53:58.129: INFO: Pod "downwardapi-volume-0d29b9e4-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142563717s
Jan  5 11:54:00.146: INFO: Pod "downwardapi-volume-0d29b9e4-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159287431s
Jan  5 11:54:02.338: INFO: Pod "downwardapi-volume-0d29b9e4-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.351054135s
Jan  5 11:54:04.352: INFO: Pod "downwardapi-volume-0d29b9e4-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.365664255s
Jan  5 11:54:06.365: INFO: Pod "downwardapi-volume-0d29b9e4-2fb2-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.378156126s
STEP: Saw pod success
Jan  5 11:54:06.365: INFO: Pod "downwardapi-volume-0d29b9e4-2fb2-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 11:54:06.370: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0d29b9e4-2fb2-11ea-910c-0242ac110004 container client-container: 
STEP: delete the pod
Jan  5 11:54:06.592: INFO: Waiting for pod downwardapi-volume-0d29b9e4-2fb2-11ea-910c-0242ac110004 to disappear
Jan  5 11:54:06.605: INFO: Pod downwardapi-volume-0d29b9e4-2fb2-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:54:06.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9g275" for this suite.
Jan  5 11:54:12.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:54:12.775: INFO: namespace: e2e-tests-downward-api-9g275, resource: bindings, ignored listing per whitelist
Jan  5 11:54:12.980: INFO: namespace e2e-tests-downward-api-9g275 deletion completed in 6.366237093s

• [SLOW TEST:17.273 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
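Note: the Downward API test above checks that, when a container sets no CPU limit, the downward API volume reports the node's allocatable CPU as the default. A minimal sketch of that kind of pod is shown below; the pod and file names are illustrative, not the test's generated ones.

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-cpu-demo        # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      # No resources.limits.cpu set, so the downward API falls back to node allocatable CPU.
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu
  EOF
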
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:54:12.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  5 11:54:13.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-smx5s'
Jan  5 11:54:15.481: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  5 11:54:15.481: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan  5 11:54:15.497: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan  5 11:54:15.668: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan  5 11:54:15.693: INFO: scanned /root for discovery docs: 
Jan  5 11:54:15.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-smx5s'
Jan  5 11:54:40.354: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  5 11:54:40.354: INFO: stdout: "Created e2e-test-nginx-rc-e48d2810cd5923b090675416074359c0\nScaling up e2e-test-nginx-rc-e48d2810cd5923b090675416074359c0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e48d2810cd5923b090675416074359c0 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e48d2810cd5923b090675416074359c0 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan  5 11:54:40.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-smx5s'
Jan  5 11:54:40.664: INFO: stderr: ""
Jan  5 11:54:40.664: INFO: stdout: "e2e-test-nginx-rc-4f6zs e2e-test-nginx-rc-e48d2810cd5923b090675416074359c0-jf5kl "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  5 11:54:45.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-smx5s'
Jan  5 11:54:45.827: INFO: stderr: ""
Jan  5 11:54:45.827: INFO: stdout: "e2e-test-nginx-rc-e48d2810cd5923b090675416074359c0-jf5kl "
Jan  5 11:54:45.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e48d2810cd5923b090675416074359c0-jf5kl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-smx5s'
Jan  5 11:54:45.980: INFO: stderr: ""
Jan  5 11:54:45.980: INFO: stdout: "true"
Jan  5 11:54:45.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e48d2810cd5923b090675416074359c0-jf5kl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-smx5s'
Jan  5 11:54:46.101: INFO: stderr: ""
Jan  5 11:54:46.101: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan  5 11:54:46.101: INFO: e2e-test-nginx-rc-e48d2810cd5923b090675416074359c0-jf5kl is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan  5 11:54:46.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-smx5s'
Jan  5 11:54:46.233: INFO: stderr: ""
Jan  5 11:54:46.233: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:54:46.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-smx5s" for this suite.
Jan  5 11:55:10.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:55:10.495: INFO: namespace: e2e-tests-kubectl-smx5s, resource: bindings, ignored listing per whitelist
Jan  5 11:55:10.647: INFO: namespace e2e-tests-kubectl-smx5s deletion completed in 24.407960679s

• [SLOW TEST:57.667 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
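Note: the two commands the rolling-update test runs appear verbatim in the log above and are reproduced here for reference. Both were already deprecated at the time (as the warnings show) and have since been removed from kubectl; the namespace placeholder is illustrative.

  kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine \
    --generator=run/v1 --namespace=<namespace>
  kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
    --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent \
    --namespace=<namespace>
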
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:55:10.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-x9cjr
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  5 11:55:10.972: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  5 11:55:53.442: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-x9cjr PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 11:55:53.442: INFO: >>> kubeConfig: /root/.kube/config
I0105 11:55:53.553385       8 log.go:172] (0xc0008f8630) (0xc001b81a40) Create stream
I0105 11:55:53.553588       8 log.go:172] (0xc0008f8630) (0xc001b81a40) Stream added, broadcasting: 1
I0105 11:55:53.560716       8 log.go:172] (0xc0008f8630) Reply frame received for 1
I0105 11:55:53.560767       8 log.go:172] (0xc0008f8630) (0xc001b81ae0) Create stream
I0105 11:55:53.560781       8 log.go:172] (0xc0008f8630) (0xc001b81ae0) Stream added, broadcasting: 3
I0105 11:55:53.562631       8 log.go:172] (0xc0008f8630) Reply frame received for 3
I0105 11:55:53.562725       8 log.go:172] (0xc0008f8630) (0xc001c1caa0) Create stream
I0105 11:55:53.562744       8 log.go:172] (0xc0008f8630) (0xc001c1caa0) Stream added, broadcasting: 5
I0105 11:55:53.564065       8 log.go:172] (0xc0008f8630) Reply frame received for 5
I0105 11:55:53.804787       8 log.go:172] (0xc0008f8630) Data frame received for 3
I0105 11:55:53.804881       8 log.go:172] (0xc001b81ae0) (3) Data frame handling
I0105 11:55:53.804915       8 log.go:172] (0xc001b81ae0) (3) Data frame sent
I0105 11:55:54.026726       8 log.go:172] (0xc0008f8630) Data frame received for 1
I0105 11:55:54.026934       8 log.go:172] (0xc0008f8630) (0xc001b81ae0) Stream removed, broadcasting: 3
I0105 11:55:54.027017       8 log.go:172] (0xc001b81a40) (1) Data frame handling
I0105 11:55:54.027099       8 log.go:172] (0xc001b81a40) (1) Data frame sent
I0105 11:55:54.027443       8 log.go:172] (0xc0008f8630) (0xc001c1caa0) Stream removed, broadcasting: 5
I0105 11:55:54.027775       8 log.go:172] (0xc0008f8630) (0xc001b81a40) Stream removed, broadcasting: 1
I0105 11:55:54.027968       8 log.go:172] (0xc0008f8630) Go away received
I0105 11:55:54.028950       8 log.go:172] (0xc0008f8630) (0xc001b81a40) Stream removed, broadcasting: 1
I0105 11:55:54.029000       8 log.go:172] (0xc0008f8630) (0xc001b81ae0) Stream removed, broadcasting: 3
I0105 11:55:54.029030       8 log.go:172] (0xc0008f8630) (0xc001c1caa0) Stream removed, broadcasting: 5
Jan  5 11:55:54.029: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:55:54.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-x9cjr" for this suite.
Jan  5 11:56:18.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:56:18.193: INFO: namespace: e2e-tests-pod-network-test-x9cjr, resource: bindings, ignored listing per whitelist
Jan  5 11:56:18.255: INFO: namespace e2e-tests-pod-network-test-x9cjr deletion completed in 24.200442047s

• [SLOW TEST:67.607 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
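
The check above amounts to exec'ing curl inside the host-test-container pod against the netserver pod's /hostName endpoint and comparing the reply with the expected pod name. A minimal Go sketch of the same probe, reusing the address and expected endpoint name from the log (the timeout mirrors curl's --max-time; everything else is illustrative):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	// Address/port and the expected reply ("netserver-0") are taken from the
	// curl command and the "Found all expected endpoints" line in the log.
	const endpoint = "http://10.32.0.4:8080/hostName"
	const expected = "netserver-0"

	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get(endpoint)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	got := strings.TrimSpace(string(body))
	fmt.Printf("got %q, matches expected: %v\n", got, got == expected)
}
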
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:56:18.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:56:30.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-cc6bw" for this suite.
Jan  5 11:57:24.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:57:24.790: INFO: namespace: e2e-tests-kubelet-test-cc6bw, resource: bindings, ignored listing per whitelist
Jan  5 11:57:24.865: INFO: namespace e2e-tests-kubelet-test-cc6bw deletion completed in 54.169608133s

• [SLOW TEST:66.610 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
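
The Kubelet case above runs a busybox container that writes a known string to stdout and then asserts that the string shows up in the pod's logs. A rough client-go sketch of that flow, assuming a recent client-go (context-taking method signatures) and made-up pod and namespace names; a real check would wait for the pod to reach Succeeded before reading logs:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Illustrative pod: busybox echoes a marker string to stdout and exits.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo hello-from-kubelet-test"},
			}},
		},
	}

	ctx := context.Background()
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// In a real test you would poll until the pod completes before this call.
	logs, err := client.CoreV1().Pods("default").GetLogs(pod.Name, &corev1.PodLogOptions{}).Do(ctx).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(logs))
}
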
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:57:24.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  5 11:57:25.101: INFO: Waiting up to 5m0s for pod "downward-api-89cd5b0e-2fb2-11ea-910c-0242ac110004" in namespace "e2e-tests-downward-api-m5vbr" to be "success or failure"
Jan  5 11:57:25.151: INFO: Pod "downward-api-89cd5b0e-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 50.228284ms
Jan  5 11:57:27.508: INFO: Pod "downward-api-89cd5b0e-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.407807463s
Jan  5 11:57:29.530: INFO: Pod "downward-api-89cd5b0e-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.42900586s
Jan  5 11:57:31.561: INFO: Pod "downward-api-89cd5b0e-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.460225163s
Jan  5 11:57:33.577: INFO: Pod "downward-api-89cd5b0e-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.476553024s
Jan  5 11:57:36.565: INFO: Pod "downward-api-89cd5b0e-2fb2-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.464735939s
STEP: Saw pod success
Jan  5 11:57:36.566: INFO: Pod "downward-api-89cd5b0e-2fb2-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 11:57:36.579: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-89cd5b0e-2fb2-11ea-910c-0242ac110004 container dapi-container: 
STEP: delete the pod
Jan  5 11:57:36.799: INFO: Waiting for pod downward-api-89cd5b0e-2fb2-11ea-910c-0242ac110004 to disappear
Jan  5 11:57:36.812: INFO: Pod downward-api-89cd5b0e-2fb2-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:57:36.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-m5vbr" for this suite.
Jan  5 11:57:42.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:57:42.990: INFO: namespace: e2e-tests-downward-api-m5vbr, resource: bindings, ignored listing per whitelist
Jan  5 11:57:43.011: INFO: namespace e2e-tests-downward-api-m5vbr deletion completed in 6.183435923s

• [SLOW TEST:18.145 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
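
The Downward API case above creates a pod whose environment variables reference limits.cpu and limits.memory while the container declares no limits at all, so the injected values fall back to the node's allocatable CPU and memory. A sketch of a pod with that shape (all names are made up; it only prints the manifest as JSON):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// No resources are set on the container, so the downward API resolves
	// limits.cpu / limits.memory to the node allocatable values.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"},
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								Resource: "limits.cpu",
								Divisor:  resource.MustParse("1"),
							},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								Resource: "limits.memory",
								Divisor:  resource.MustParse("1"),
							},
						},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
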
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:57:43.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-949d4eca-2fb2-11ea-910c-0242ac110004
STEP: Creating a pod to test consume configMaps
Jan  5 11:57:43.245: INFO: Waiting up to 5m0s for pod "pod-configmaps-949e88f6-2fb2-11ea-910c-0242ac110004" in namespace "e2e-tests-configmap-w26md" to be "success or failure"
Jan  5 11:57:43.257: INFO: Pod "pod-configmaps-949e88f6-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.223714ms
Jan  5 11:57:45.302: INFO: Pod "pod-configmaps-949e88f6-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056961752s
Jan  5 11:57:47.326: INFO: Pod "pod-configmaps-949e88f6-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080619045s
Jan  5 11:57:49.576: INFO: Pod "pod-configmaps-949e88f6-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.330943034s
Jan  5 11:57:51.609: INFO: Pod "pod-configmaps-949e88f6-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.363663357s
Jan  5 11:57:53.630: INFO: Pod "pod-configmaps-949e88f6-2fb2-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.385408365s
STEP: Saw pod success
Jan  5 11:57:53.630: INFO: Pod "pod-configmaps-949e88f6-2fb2-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 11:57:53.638: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-949e88f6-2fb2-11ea-910c-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Jan  5 11:57:53.792: INFO: Waiting for pod pod-configmaps-949e88f6-2fb2-11ea-910c-0242ac110004 to disappear
Jan  5 11:57:53.896: INFO: Pod pod-configmaps-949e88f6-2fb2-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:57:53.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-w26md" for this suite.
Jan  5 11:57:59.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:58:00.050: INFO: namespace: e2e-tests-configmap-w26md, resource: bindings, ignored listing per whitelist
Jan  5 11:58:00.109: INFO: namespace e2e-tests-configmap-w26md deletion completed in 6.189521091s

• [SLOW TEST:17.098 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
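
The ConfigMap case mounts a ConfigMap as a volume into a pod that runs as a non-root UID and checks the content of the mounted key. A sketch of such a ConfigMap and pod; the names, UID, key, and mount path are illustrative, not the ones the test generated:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// int64Ptr is a tiny helper for optional numeric fields.
func int64Ptr(i int64) *int64 { return &i }

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-volume-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}

	// The pod runs as UID 1000 (non-root) and cats the mounted key so the
	// file content can be compared with the ConfigMap data.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-nonroot-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}

	_ = pod // submit via client-go or kubectl in a real run
}
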
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:58:00.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  5 11:58:00.322: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ecbea17-2fb2-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-xdq5z" to be "success or failure"
Jan  5 11:58:00.331: INFO: Pod "downwardapi-volume-9ecbea17-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.53914ms
Jan  5 11:58:02.379: INFO: Pod "downwardapi-volume-9ecbea17-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05755406s
Jan  5 11:58:04.405: INFO: Pod "downwardapi-volume-9ecbea17-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082862632s
Jan  5 11:58:06.431: INFO: Pod "downwardapi-volume-9ecbea17-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109521534s
Jan  5 11:58:08.465: INFO: Pod "downwardapi-volume-9ecbea17-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.143323835s
Jan  5 11:58:10.495: INFO: Pod "downwardapi-volume-9ecbea17-2fb2-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.173603219s
STEP: Saw pod success
Jan  5 11:58:10.496: INFO: Pod "downwardapi-volume-9ecbea17-2fb2-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 11:58:10.515: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9ecbea17-2fb2-11ea-910c-0242ac110004 container client-container: 
STEP: delete the pod
Jan  5 11:58:11.411: INFO: Waiting for pod downwardapi-volume-9ecbea17-2fb2-11ea-910c-0242ac110004 to disappear
Jan  5 11:58:11.430: INFO: Pod downwardapi-volume-9ecbea17-2fb2-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:58:11.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xdq5z" for this suite.
Jan  5 11:58:17.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:58:17.619: INFO: namespace: e2e-tests-projected-xdq5z, resource: bindings, ignored listing per whitelist
Jan  5 11:58:17.731: INFO: namespace e2e-tests-projected-xdq5z deletion completed in 6.265256517s

• [SLOW TEST:17.622 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
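
In the projected downwardAPI case, the volume exposes limits.memory for a container that sets no memory limit, so the mounted file ends up reporting the node's allocatable memory. A sketch of that volume layout (names, paths, and the busybox command are illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-downwardapi-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									// With no limit declared on the container,
									// this resolves to node allocatable memory.
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
										Divisor:       resource.MustParse("1"),
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}

	_ = pod
}
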
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:58:17.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  5 11:58:17.875: INFO: Waiting up to 5m0s for pod "pod-a942d4f1-2fb2-11ea-910c-0242ac110004" in namespace "e2e-tests-emptydir-df7vn" to be "success or failure"
Jan  5 11:58:17.951: INFO: Pod "pod-a942d4f1-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 76.520574ms
Jan  5 11:58:20.016: INFO: Pod "pod-a942d4f1-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141809597s
Jan  5 11:58:22.043: INFO: Pod "pod-a942d4f1-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168283398s
Jan  5 11:58:24.151: INFO: Pod "pod-a942d4f1-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276520907s
Jan  5 11:58:26.191: INFO: Pod "pod-a942d4f1-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.316678864s
Jan  5 11:58:28.283: INFO: Pod "pod-a942d4f1-2fb2-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.40817435s
STEP: Saw pod success
Jan  5 11:58:28.283: INFO: Pod "pod-a942d4f1-2fb2-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 11:58:28.292: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a942d4f1-2fb2-11ea-910c-0242ac110004 container test-container: 
STEP: delete the pod
Jan  5 11:58:28.381: INFO: Waiting for pod pod-a942d4f1-2fb2-11ea-910c-0242ac110004 to disappear
Jan  5 11:58:28.426: INFO: Pod pod-a942d4f1-2fb2-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:58:28.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-df7vn" for this suite.
Jan  5 11:58:34.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:58:34.768: INFO: namespace: e2e-tests-emptydir-df7vn, resource: bindings, ignored listing per whitelist
Jan  5 11:58:34.770: INFO: namespace e2e-tests-emptydir-df7vn deletion completed in 6.325425371s

• [SLOW TEST:17.038 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
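
The (root,0644,tmpfs) variant mounts a memory-backed emptyDir, creates a file with mode 0644 as root and verifies the reported permissions and filesystem type. The real test drives this through the framework's mount-test image; the busybox commands below are only a stand-in for the same idea, and all names are made up:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}

	_ = pod
}
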
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:58:34.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  5 11:58:35.056: INFO: Number of nodes with available pods: 0
Jan  5 11:58:35.056: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:58:36.084: INFO: Number of nodes with available pods: 0
Jan  5 11:58:36.084: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:58:37.865: INFO: Number of nodes with available pods: 0
Jan  5 11:58:37.866: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:58:38.080: INFO: Number of nodes with available pods: 0
Jan  5 11:58:38.080: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:58:39.074: INFO: Number of nodes with available pods: 0
Jan  5 11:58:39.074: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:58:40.082: INFO: Number of nodes with available pods: 0
Jan  5 11:58:40.082: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:58:41.699: INFO: Number of nodes with available pods: 0
Jan  5 11:58:41.700: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:58:42.096: INFO: Number of nodes with available pods: 0
Jan  5 11:58:42.096: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:58:43.076: INFO: Number of nodes with available pods: 0
Jan  5 11:58:43.076: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:58:44.075: INFO: Number of nodes with available pods: 0
Jan  5 11:58:44.075: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 11:58:45.083: INFO: Number of nodes with available pods: 1
Jan  5 11:58:45.083: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan  5 11:58:45.154: INFO: Number of nodes with available pods: 1
Jan  5 11:58:45.154: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-k5l8w, will wait for the garbage collector to delete the pods
Jan  5 11:58:46.852: INFO: Deleting DaemonSet.extensions daemon-set took: 102.975443ms
Jan  5 11:58:47.353: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.396539ms
Jan  5 11:58:52.105: INFO: Number of nodes with available pods: 0
Jan  5 11:58:52.105: INFO: Number of running nodes: 0, number of available pods: 0
Jan  5 11:58:52.112: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-k5l8w/daemonsets","resourceVersion":"17250236"},"items":null}

Jan  5 11:58:52.118: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-k5l8w/pods","resourceVersion":"17250236"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:58:52.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-k5l8w" for this suite.
Jan  5 11:58:58.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:58:58.401: INFO: namespace: e2e-tests-daemonsets-k5l8w, resource: bindings, ignored listing per whitelist
Jan  5 11:58:58.419: INFO: namespace e2e-tests-daemonsets-k5l8w deletion completed in 6.275075683s

• [SLOW TEST:23.648 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
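
The DaemonSet case creates a single-container DaemonSet, waits until a daemon pod is available on every schedulable node (one node in this cluster), then forces a daemon pod into the Failed phase and expects the controller to recreate it. A sketch of a comparable DaemonSet object; the name, labels, and image are made up:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set-demo"}

	// One pod per schedulable node; if a pod fails or is deleted, the
	// DaemonSet controller is expected to bring a replacement back up.
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set-demo"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	_ = ds
}
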
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:58:58.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-c1929582-2fb2-11ea-910c-0242ac110004
STEP: Creating a pod to test consume secrets
Jan  5 11:58:58.674: INFO: Waiting up to 5m0s for pod "pod-secrets-c193fae7-2fb2-11ea-910c-0242ac110004" in namespace "e2e-tests-secrets-8rkxn" to be "success or failure"
Jan  5 11:58:58.687: INFO: Pod "pod-secrets-c193fae7-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.944284ms
Jan  5 11:59:00.706: INFO: Pod "pod-secrets-c193fae7-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031878874s
Jan  5 11:59:02.723: INFO: Pod "pod-secrets-c193fae7-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048969998s
Jan  5 11:59:04.731: INFO: Pod "pod-secrets-c193fae7-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056740531s
Jan  5 11:59:06.761: INFO: Pod "pod-secrets-c193fae7-2fb2-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.086188366s
STEP: Saw pod success
Jan  5 11:59:06.761: INFO: Pod "pod-secrets-c193fae7-2fb2-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 11:59:06.774: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-c193fae7-2fb2-11ea-910c-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Jan  5 11:59:06.966: INFO: Waiting for pod pod-secrets-c193fae7-2fb2-11ea-910c-0242ac110004 to disappear
Jan  5 11:59:06.983: INFO: Pod pod-secrets-c193fae7-2fb2-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:59:06.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8rkxn" for this suite.
Jan  5 11:59:13.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:59:13.413: INFO: namespace: e2e-tests-secrets-8rkxn, resource: bindings, ignored listing per whitelist
Jan  5 11:59:13.437: INFO: namespace e2e-tests-secrets-8rkxn deletion completed in 6.440630119s

• [SLOW TEST:15.018 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
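
The Secrets case is the secret-volume analogue of the ConfigMap test: mount a Secret into a pod and compare the mounted file with the Secret data. A sketch with illustrative names and data:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-volume-demo"},
		StringData: map[string]string{"data-1": "value-1"},
	}

	// The container cats the mounted key so its content can be checked
	// against the Secret data.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secret.Name},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}

	_ = pod
}
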
SSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:59:13.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-cab101c7-2fb2-11ea-910c-0242ac110004
STEP: Creating secret with name s-test-opt-upd-cab10302-2fb2-11ea-910c-0242ac110004
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-cab101c7-2fb2-11ea-910c-0242ac110004
STEP: Updating secret s-test-opt-upd-cab10302-2fb2-11ea-910c-0242ac110004
STEP: Creating secret with name s-test-opt-create-cab1032c-2fb2-11ea-910c-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 11:59:32.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8gvqr" for this suite.
Jan  5 11:59:56.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 11:59:56.833: INFO: namespace: e2e-tests-projected-8gvqr, resource: bindings, ignored listing per whitelist
Jan  5 11:59:56.843: INFO: namespace e2e-tests-projected-8gvqr deletion completed in 24.38401569s

• [SLOW TEST:43.406 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
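
The projected-secret case mounts two optional secret projections, then deletes one secret, updates the other and creates a third, waiting each time for the change to appear in the mounted files. Marking the projections Optional is what lets the pod keep running while a referenced secret is absent. A sketch of the volume definition; the secret names are shortened stand-ins for the generated ones in the log:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// boolPtr is a tiny helper for optional boolean fields.
func boolPtr(b bool) *bool { return &b }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-secrets",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
								Optional:             boolPtr(true),
							}},
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"},
								Optional:             boolPtr(true),
							}},
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secrets",
					MountPath: "/etc/projected-secrets",
				}},
			}},
		},
	}

	_ = pod
}
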
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 11:59:56.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  5 11:59:57.071: INFO: Waiting up to 5m0s for pod "pod-e459a698-2fb2-11ea-910c-0242ac110004" in namespace "e2e-tests-emptydir-l9kmk" to be "success or failure"
Jan  5 11:59:57.098: INFO: Pod "pod-e459a698-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 27.092177ms
Jan  5 11:59:59.109: INFO: Pod "pod-e459a698-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038614849s
Jan  5 12:00:01.124: INFO: Pod "pod-e459a698-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053173867s
Jan  5 12:00:03.140: INFO: Pod "pod-e459a698-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068810516s
Jan  5 12:00:05.151: INFO: Pod "pod-e459a698-2fb2-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080131246s
Jan  5 12:00:07.164: INFO: Pod "pod-e459a698-2fb2-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093603716s
STEP: Saw pod success
Jan  5 12:00:07.165: INFO: Pod "pod-e459a698-2fb2-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:00:07.176: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e459a698-2fb2-11ea-910c-0242ac110004 container test-container: 
STEP: delete the pod
Jan  5 12:00:07.322: INFO: Waiting for pod pod-e459a698-2fb2-11ea-910c-0242ac110004 to disappear
Jan  5 12:00:07.330: INFO: Pod pod-e459a698-2fb2-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:00:07.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-l9kmk" for this suite.
Jan  5 12:00:14.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:00:14.728: INFO: namespace: e2e-tests-emptydir-l9kmk, resource: bindings, ignored listing per whitelist
Jan  5 12:00:14.885: INFO: namespace e2e-tests-emptydir-l9kmk deletion completed in 7.541902705s

• [SLOW TEST:18.042 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:00:14.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:00:15.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-6dqsk" for this suite.
Jan  5 12:00:21.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:00:21.455: INFO: namespace: e2e-tests-kubelet-test-6dqsk, resource: bindings, ignored listing per whitelist
Jan  5 12:00:21.503: INFO: namespace e2e-tests-kubelet-test-6dqsk deletion completed in 6.205011014s

• [SLOW TEST:6.617 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
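
The only assertion in this Kubelet case is that a pod whose command always exits non-zero can still be deleted cleanly. A sketch of the teardown call with client-go, assuming a recent client-go (context-taking signatures); the pod and namespace names are made up:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Deleting a crash-looping pod uses exactly the same call as deleting a
	// healthy one; the test only verifies that this succeeds.
	err = client.CoreV1().Pods("default").Delete(context.Background(), "always-fails-pod", metav1.DeleteOptions{})
	fmt.Println("delete error:", err)
}
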
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:00:21.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  5 12:00:21.683: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9d9lz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9d9lz/configmaps/e2e-watch-test-configmap-a,UID:f30f6445-2fb2-11ea-a994-fa163e34d433,ResourceVersion:17250473,Generation:0,CreationTimestamp:2020-01-05 12:00:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  5 12:00:21.683: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9d9lz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9d9lz/configmaps/e2e-watch-test-configmap-a,UID:f30f6445-2fb2-11ea-a994-fa163e34d433,ResourceVersion:17250473,Generation:0,CreationTimestamp:2020-01-05 12:00:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  5 12:00:31.709: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9d9lz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9d9lz/configmaps/e2e-watch-test-configmap-a,UID:f30f6445-2fb2-11ea-a994-fa163e34d433,ResourceVersion:17250486,Generation:0,CreationTimestamp:2020-01-05 12:00:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  5 12:00:31.710: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9d9lz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9d9lz/configmaps/e2e-watch-test-configmap-a,UID:f30f6445-2fb2-11ea-a994-fa163e34d433,ResourceVersion:17250486,Generation:0,CreationTimestamp:2020-01-05 12:00:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  5 12:00:41.744: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9d9lz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9d9lz/configmaps/e2e-watch-test-configmap-a,UID:f30f6445-2fb2-11ea-a994-fa163e34d433,ResourceVersion:17250498,Generation:0,CreationTimestamp:2020-01-05 12:00:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  5 12:00:41.745: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9d9lz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9d9lz/configmaps/e2e-watch-test-configmap-a,UID:f30f6445-2fb2-11ea-a994-fa163e34d433,ResourceVersion:17250498,Generation:0,CreationTimestamp:2020-01-05 12:00:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  5 12:00:51.760: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9d9lz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9d9lz/configmaps/e2e-watch-test-configmap-a,UID:f30f6445-2fb2-11ea-a994-fa163e34d433,ResourceVersion:17250511,Generation:0,CreationTimestamp:2020-01-05 12:00:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  5 12:00:51.760: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9d9lz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9d9lz/configmaps/e2e-watch-test-configmap-a,UID:f30f6445-2fb2-11ea-a994-fa163e34d433,ResourceVersion:17250511,Generation:0,CreationTimestamp:2020-01-05 12:00:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  5 12:01:01.787: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9d9lz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9d9lz/configmaps/e2e-watch-test-configmap-b,UID:0af46be0-2fb3-11ea-a994-fa163e34d433,ResourceVersion:17250524,Generation:0,CreationTimestamp:2020-01-05 12:01:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  5 12:01:01.788: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9d9lz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9d9lz/configmaps/e2e-watch-test-configmap-b,UID:0af46be0-2fb3-11ea-a994-fa163e34d433,ResourceVersion:17250524,Generation:0,CreationTimestamp:2020-01-05 12:01:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  5 12:01:11.816: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9d9lz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9d9lz/configmaps/e2e-watch-test-configmap-b,UID:0af46be0-2fb3-11ea-a994-fa163e34d433,ResourceVersion:17250538,Generation:0,CreationTimestamp:2020-01-05 12:01:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  5 12:01:11.816: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9d9lz,SelfLink:/api/v1/namespaces/e2e-tests-watch-9d9lz/configmaps/e2e-watch-test-configmap-b,UID:0af46be0-2fb3-11ea-a994-fa163e34d433,ResourceVersion:17250538,Generation:0,CreationTimestamp:2020-01-05 12:01:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:01:21.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-9d9lz" for this suite.
Jan  5 12:01:27.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:01:28.026: INFO: namespace: e2e-tests-watch-9d9lz, resource: bindings, ignored listing per whitelist
Jan  5 12:01:28.153: INFO: namespace e2e-tests-watch-9d9lz deletion completed in 6.291881417s

• [SLOW TEST:66.649 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
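
The Watchers case opens three watches with different label selectors (label A, label B, A-or-B) and asserts that each sees exactly the expected ADDED, MODIFIED and DELETED notifications shown above. A sketch of one such watch with client-go, assuming a recent client-go with the context-taking Watch; the namespace is illustrative and the selector reuses the label value from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Watch only configmaps carrying the "watcher A" label used by the test.
	w, err := client.CoreV1().ConfigMaps("default").Watch(context.Background(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Print ADDED / MODIFIED / DELETED notifications as they arrive,
	// mirroring the "Got : ..." lines in the log above.
	for event := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", event.Type, event.Object)
	}
}
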
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:01:28.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  5 12:01:28.275: INFO: Creating deployment "test-recreate-deployment"
Jan  5 12:01:28.289: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan  5 12:01:28.299: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan  5 12:01:30.333: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan  5 12:01:30.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 12:01:32.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 12:01:34.356: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 12:01:36.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713822488, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 12:01:38.355: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  5 12:01:38.377: INFO: Updating deployment test-recreate-deployment
Jan  5 12:01:38.377: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  5 12:01:39.112: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-zmpnn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zmpnn/deployments/test-recreate-deployment,UID:1ac15358-2fb3-11ea-a994-fa163e34d433,ResourceVersion:17250619,Generation:2,CreationTimestamp:2020-01-05 12:01:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-05 12:01:38 +0000 UTC 2020-01-05 12:01:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-05 12:01:38 +0000 UTC 2020-01-05 12:01:28 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan  5 12:01:39.125: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-zmpnn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zmpnn/replicasets/test-recreate-deployment-589c4bfd,UID:20f4b476-2fb3-11ea-a994-fa163e34d433,ResourceVersion:17250616,Generation:1,CreationTimestamp:2020-01-05 12:01:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 1ac15358-2fb3-11ea-a994-fa163e34d433 0xc00150ffdf 0xc00127a2a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  5 12:01:39.125: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  5 12:01:39.125: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-zmpnn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zmpnn/replicasets/test-recreate-deployment-5bf7f65dc,UID:1ac4f444-2fb3-11ea-a994-fa163e34d433,ResourceVersion:17250607,Generation:2,CreationTimestamp:2020-01-05 12:01:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 1ac15358-2fb3-11ea-a994-fa163e34d433 0xc00127a360 0xc00127a361}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  5 12:01:39.140: INFO: Pod "test-recreate-deployment-589c4bfd-jzgw4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-jzgw4,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-zmpnn,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zmpnn/pods/test-recreate-deployment-589c4bfd-jzgw4,UID:20f5c200-2fb3-11ea-a994-fa163e34d433,ResourceVersion:17250618,Generation:0,CreationTimestamp:2020-01-05 12:01:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 20f4b476-2fb3-11ea-a994-fa163e34d433 0xc00183a0bf 0xc00183a0d0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-t9qfm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9qfm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-t9qfm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00183a130} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00183a150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:01:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:01:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:01:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:01:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-05 12:01:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:01:39.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-zmpnn" for this suite.
Jan  5 12:01:47.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:01:47.272: INFO: namespace: e2e-tests-deployment-zmpnn, resource: bindings, ignored listing per whitelist
Jan  5 12:01:47.397: INFO: namespace e2e-tests-deployment-zmpnn deletion completed in 8.242835332s

• [SLOW TEST:19.242 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
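The RecreateDeployment spec above swaps the Deployment's pod template from redis to nginx and confirms the old ReplicaSet (test-recreate-deployment-5bf7f65dc) is scaled to zero before the new pod appears. A minimal sketch of a Deployment using the Recreate strategy, built with the same k8s.io/api types the dumps above print; the name, labels, and image follow the log, the rest is an assumption and not the test's actual source.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// recreateDeployment builds a Deployment whose strategy is Recreate, so every
// pod from the old template is deleted before any pod from the new template
// is created.
func recreateDeployment() *appsv1.Deployment {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}

func main() { fmt.Printf("%+v\n", recreateDeployment()) }

With Type set to Recreate the controller never runs old and new pods side by side, which is why the log shows the 5bf7f65dc ReplicaSet at Replicas:*0 before the 589c4bfd pod has even started its container.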
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:01:47.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan  5 12:01:59.318: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
Jan  5 12:03:31.374: INFO: Unexpected error occurred: timed out waiting for the condition
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-namespaces-l94n4".
STEP: Found 0 events.
Jan  5 12:03:31.415: INFO: POD                                                 NODE                        PHASE    GRACE  CONDITIONS
Jan  5 12:03:31.415: INFO: test-pod-uninitialized                              hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:01:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:02:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:02:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:01:59 +0000 UTC  }]
Jan  5 12:03:31.415: INFO: coredns-54ff9cd656-79kxx                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan  5 12:03:31.415: INFO: coredns-54ff9cd656-bmkk4                            hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC  }]
Jan  5 12:03:31.415: INFO: etcd-hunter-server-hu5at5svl7ps                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan  5 12:03:31.415: INFO: kube-apiserver-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan  5 12:03:31.415: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan  5 12:03:31.415: INFO: kube-proxy-bqnnz                                    hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC  }]
Jan  5 12:03:31.415: INFO: kube-scheduler-hunter-server-hu5at5svl7ps           hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:20:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC  }]
Jan  5 12:03:31.415: INFO: weave-net-tqwf2                                     hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:07:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC  }]
Jan  5 12:03:31.415: INFO: 
Jan  5 12:03:31.421: INFO: 
Logging node info for node hunter-server-hu5at5svl7ps
Jan  5 12:03:31.427: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:17250806,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-05 12:03:29 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-05 12:03:29 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-05 12:03:29 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-05 12:03:29 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} {[nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2 nginx:latest] 126323778} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx@sha256:73113849b52b099e447eabb83a2722635562edc798f5b86bdf853faa0a49ec70] 126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 
gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jan  5 12:03:31.428: INFO: 
Logging kubelet events for node hunter-server-hu5at5svl7ps
Jan  5 12:03:31.432: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps
Jan  5 12:03:31.462: INFO: kube-scheduler-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan  5 12:03:31.462: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan  5 12:03:31.462: INFO: 	Container coredns ready: true, restart count 0
Jan  5 12:03:31.462: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded)
Jan  5 12:03:31.462: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  5 12:03:31.462: INFO: etcd-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan  5 12:03:31.462: INFO: test-pod-uninitialized started at 2020-01-05 12:01:59 +0000 UTC (0+1 container statuses recorded)
Jan  5 12:03:31.462: INFO: 	Container nginx ready: true, restart count 0
Jan  5 12:03:31.462: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded)
Jan  5 12:03:31.462: INFO: 	Container weave ready: true, restart count 0
Jan  5 12:03:31.462: INFO: 	Container weave-npc ready: true, restart count 0
Jan  5 12:03:31.462: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded)
Jan  5 12:03:31.462: INFO: 	Container coredns ready: true, restart count 0
Jan  5 12:03:31.462: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
Jan  5 12:03:31.462: INFO: kube-apiserver-hunter-server-hu5at5svl7ps started at  (0+0 container statuses recorded)
W0105 12:03:31.470361       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  5 12:03:31.554: INFO: 
Latency metrics for node hunter-server-hu5at5svl7ps
Jan  5 12:03:31.555: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:12.490437s}
Jan  5 12:03:31.555: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:12.046644s}
Jan  5 12:03:31.555: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.01459s}
Jan  5 12:03:31.555: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:10.0865s}
Jan  5 12:03:31.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-l94n4" for this suite.
Jan  5 12:03:37.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:03:37.696: INFO: namespace: e2e-tests-namespaces-l94n4, resource: bindings, ignored listing per whitelist
Jan  5 12:03:37.793: INFO: namespace e2e-tests-namespaces-l94n4 deletion completed in 6.227334893s
STEP: Destroying namespace "e2e-tests-nsdeletetest-lhhnd" for this suite.
Jan  5 12:03:37.799: INFO: Couldn't delete ns: "e2e-tests-nsdeletetest-lhhnd": Operation cannot be fulfilled on namespaces "e2e-tests-nsdeletetest-lhhnd": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system. (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:""}, Status:"Failure", Message:"Operation cannot be fulfilled on namespaces \"e2e-tests-nsdeletetest-lhhnd\": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.", Reason:"Conflict", Details:(*v1.StatusDetails)(0xc0023463c0), Code:409}})

• Failure [110.403 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Expected error:
      <*errors.errorString | 0xc0000d98a0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161
------------------------------
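The failed spec above deletes a namespace that still contains a running pod and times out waiting for it to disappear; the wait is just a poll against the namespaces API until a Get returns NotFound. A sketch of that kind of wait loop, assuming a client-go Clientset of roughly the same vintage as this run (v1.13, where Get and Delete take no context argument); the helper name, namespace name, and timeout are illustrative.

package main

import (
	"time"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNamespaceGone deletes a namespace and polls until the API server
// reports it NotFound, or the timeout expires (the same condition that timed
// out in the spec above).
func waitForNamespaceGone(c kubernetes.Interface, name string, timeout time.Duration) error {
	if err := c.CoreV1().Namespaces().Delete(name, &metav1.DeleteOptions{}); err != nil {
		return err
	}
	return wait.Poll(2*time.Second, timeout, func() (bool, error) {
		_, err := c.CoreV1().Namespaces().Get(name, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			return true, nil // namespace fully removed
		}
		return false, nil // still terminating, keep polling
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	_ = waitForNamespaceGone(client, "e2e-tests-nsdeletetest-example", 3*time.Minute)
}

The Conflict logged for e2e-tests-nsdeletetest-lhhnd at the end is the API server's standard response to deleting a namespace that is already terminating, so it is cleanup noise rather than a second failure.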
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:03:37.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  5 12:03:37.995: INFO: Waiting up to 5m0s for pod "pod-6811b0a6-2fb3-11ea-910c-0242ac110004" in namespace "e2e-tests-emptydir-mqr7d" to be "success or failure"
Jan  5 12:03:37.999: INFO: Pod "pod-6811b0a6-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 3.526963ms
Jan  5 12:03:40.037: INFO: Pod "pod-6811b0a6-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041956409s
Jan  5 12:03:42.151: INFO: Pod "pod-6811b0a6-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155628506s
Jan  5 12:03:44.465: INFO: Pod "pod-6811b0a6-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.469561376s
Jan  5 12:03:46.489: INFO: Pod "pod-6811b0a6-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.494126944s
Jan  5 12:03:48.539: INFO: Pod "pod-6811b0a6-2fb3-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.544025029s
STEP: Saw pod success
Jan  5 12:03:48.540: INFO: Pod "pod-6811b0a6-2fb3-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:03:48.573: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-6811b0a6-2fb3-11ea-910c-0242ac110004 container test-container: 
STEP: delete the pod
Jan  5 12:03:48.826: INFO: Waiting for pod pod-6811b0a6-2fb3-11ea-910c-0242ac110004 to disappear
Jan  5 12:03:48.833: INFO: Pod pod-6811b0a6-2fb3-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:03:48.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mqr7d" for this suite.
Jan  5 12:03:55.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:03:55.927: INFO: namespace: e2e-tests-emptydir-mqr7d, resource: bindings, ignored listing per whitelist
Jan  5 12:03:55.932: INFO: namespace e2e-tests-emptydir-mqr7d deletion completed in 7.092034837s

• [SLOW TEST:18.130 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
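The tmpfs spec above creates a pod whose emptyDir volume uses medium Memory and asserts the mount carries the expected default mode before the pod reaches Succeeded. A sketch of the pod shape under test; the container name and the mounttest image come from the log, while the mount path is an assumption and the mounttest command-line arguments are omitted rather than guessed.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod returns a pod that mounts an emptyDir backed by tmpfs
// (medium "Memory") and runs once, so the test can assert on its logs and on
// the Succeeded phase.
func tmpfsEmptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-tmpfs-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
				// The real test passes mounttest flags telling it which paths
				// to report the filesystem type and permissions for.
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", tmpfsEmptyDirPod()) }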
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:03:55.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:04:06.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-qchc7" for this suite.
Jan  5 12:04:12.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:04:12.938: INFO: namespace: e2e-tests-emptydir-wrapper-qchc7, resource: bindings, ignored listing per whitelist
Jan  5 12:04:12.992: INFO: namespace e2e-tests-emptydir-wrapper-qchc7 deletion completed in 6.710209683s

• [SLOW TEST:17.059 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
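The wrapper-volume spec creates a secret and a configMap, mounts both into one pod, and checks that the two mounts do not conflict before cleaning the objects up again. A minimal sketch of a pod carrying both volume types; the object names and mount paths are illustrative, the image is taken from the node's image list.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithSecretAndConfigMap mounts one secret volume and one configMap volume
// side by side, the combination the wrapper-volume spec checks for conflicts.
func podWithSecretAndConfigMap(secretName, configMapName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-secrets-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{
					Name: "secret-volume",
					VolumeSource: corev1.VolumeSource{
						Secret: &corev1.SecretVolumeSource{SecretName: secretName},
					},
				},
				{
					Name: "configmap-volume",
					VolumeSource: corev1.VolumeSource{
						ConfigMap: &corev1.ConfigMapVolumeSource{
							LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
						},
					},
				},
			},
			Containers: []corev1.Container{{
				Name:  "secret-test",
				Image: "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0",
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume"},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume"},
				},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", podWithSecretAndConfigMap("wrapper-secret", "wrapper-configmap")) }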
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:04:12.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  5 12:04:14.424: INFO: Waiting up to 5m0s for pod "pod-7db45aa0-2fb3-11ea-910c-0242ac110004" in namespace "e2e-tests-emptydir-kxf5l" to be "success or failure"
Jan  5 12:04:14.437: INFO: Pod "pod-7db45aa0-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.265614ms
Jan  5 12:04:16.506: INFO: Pod "pod-7db45aa0-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082115397s
Jan  5 12:04:18.537: INFO: Pod "pod-7db45aa0-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113068577s
Jan  5 12:04:21.559: INFO: Pod "pod-7db45aa0-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.134820895s
Jan  5 12:04:23.601: INFO: Pod "pod-7db45aa0-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.176985544s
Jan  5 12:04:25.622: INFO: Pod "pod-7db45aa0-2fb3-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.197628086s
STEP: Saw pod success
Jan  5 12:04:25.622: INFO: Pod "pod-7db45aa0-2fb3-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:04:25.630: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7db45aa0-2fb3-11ea-910c-0242ac110004 container test-container: 
STEP: delete the pod
Jan  5 12:04:26.204: INFO: Waiting for pod pod-7db45aa0-2fb3-11ea-910c-0242ac110004 to disappear
Jan  5 12:04:26.627: INFO: Pod pod-7db45aa0-2fb3-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:04:26.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kxf5l" for this suite.
Jan  5 12:04:32.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:04:32.973: INFO: namespace: e2e-tests-emptydir-kxf5l, resource: bindings, ignored listing per whitelist
Jan  5 12:04:33.092: INFO: namespace e2e-tests-emptydir-kxf5l deletion completed in 6.433944898s

• [SLOW TEST:20.099 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:04:33.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  5 12:04:33.317: INFO: Waiting up to 5m0s for pod "pod-890a2c76-2fb3-11ea-910c-0242ac110004" in namespace "e2e-tests-emptydir-7nzqs" to be "success or failure"
Jan  5 12:04:33.343: INFO: Pod "pod-890a2c76-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 26.269098ms
Jan  5 12:04:35.508: INFO: Pod "pod-890a2c76-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190951255s
Jan  5 12:04:37.525: INFO: Pod "pod-890a2c76-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208155817s
Jan  5 12:04:39.919: INFO: Pod "pod-890a2c76-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.602282419s
Jan  5 12:04:41.949: INFO: Pod "pod-890a2c76-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.631590953s
Jan  5 12:04:43.982: INFO: Pod "pod-890a2c76-2fb3-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.665176583s
STEP: Saw pod success
Jan  5 12:04:43.982: INFO: Pod "pod-890a2c76-2fb3-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:04:44.005: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-890a2c76-2fb3-11ea-910c-0242ac110004 container test-container: 
STEP: delete the pod
Jan  5 12:04:44.170: INFO: Waiting for pod pod-890a2c76-2fb3-11ea-910c-0242ac110004 to disappear
Jan  5 12:04:44.179: INFO: Pod pod-890a2c76-2fb3-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:04:44.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7nzqs" for this suite.
Jan  5 12:04:50.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:04:50.578: INFO: namespace: e2e-tests-emptydir-7nzqs, resource: bindings, ignored listing per whitelist
Jan  5 12:04:50.605: INFO: namespace e2e-tests-emptydir-7nzqs deletion completed in 6.42064522s

• [SLOW TEST:17.513 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:04:50.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-936da4ca-2fb3-11ea-910c-0242ac110004
STEP: Creating a pod to test consume configMaps
Jan  5 12:04:50.841: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-936e52b2-2fb3-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-vg2fb" to be "success or failure"
Jan  5 12:04:50.879: INFO: Pod "pod-projected-configmaps-936e52b2-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 37.949658ms
Jan  5 12:04:52.894: INFO: Pod "pod-projected-configmaps-936e52b2-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053125794s
Jan  5 12:04:54.914: INFO: Pod "pod-projected-configmaps-936e52b2-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073070822s
Jan  5 12:04:56.936: INFO: Pod "pod-projected-configmaps-936e52b2-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094961927s
Jan  5 12:04:58.956: INFO: Pod "pod-projected-configmaps-936e52b2-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114509788s
Jan  5 12:05:00.968: INFO: Pod "pod-projected-configmaps-936e52b2-2fb3-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.127051188s
STEP: Saw pod success
Jan  5 12:05:00.968: INFO: Pod "pod-projected-configmaps-936e52b2-2fb3-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:05:00.972: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-936e52b2-2fb3-11ea-910c-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  5 12:05:01.830: INFO: Waiting for pod pod-projected-configmaps-936e52b2-2fb3-11ea-910c-0242ac110004 to disappear
Jan  5 12:05:02.039: INFO: Pod pod-projected-configmaps-936e52b2-2fb3-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:05:02.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vg2fb" for this suite.
Jan  5 12:05:08.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:05:08.119: INFO: namespace: e2e-tests-projected-vg2fb, resource: bindings, ignored listing per whitelist
Jan  5 12:05:08.399: INFO: namespace e2e-tests-projected-vg2fb deletion completed in 6.348346559s

• [SLOW TEST:17.793 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
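The projected-configMap spec creates a configMap and then a pod whose projected volume maps one of the configMap's keys into a file, asserting on the container's logs afterwards. A sketch of that volume layout; the key and path values are assumptions, the container name mirrors the log, and the image is taken from the node's image list above.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPod mounts a configMap through a projected volume, the
// shape exercised by the "Projected configMap ... consumable from pods in
// volume" spec.
func projectedConfigMapPod(configMapName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-projected-configmaps-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
								// Project a single key into a file inside the mount.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", projectedConfigMapPod("projected-configmap-test-volume")) }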
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:05:08.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  5 12:05:09.307: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9e59490a-2fb3-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001a5265a), BlockOwnerDeletion:(*bool)(0xc001a5265b)}}
Jan  5 12:05:09.328: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"9e391aa7-2fb3-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001a528e2), BlockOwnerDeletion:(*bool)(0xc001a528e3)}}
Jan  5 12:05:09.339: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9e437af5-2fb3-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001ff8b22), BlockOwnerDeletion:(*bool)(0xc001ff8b23)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:05:14.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-d4hmt" for this suite.
Jan  5 12:05:22.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:05:22.590: INFO: namespace: e2e-tests-gc-d4hmt, resource: bindings, ignored listing per whitelist
Jan  5 12:05:22.652: INFO: namespace e2e-tests-gc-d4hmt deletion completed in 8.253010605s

• [SLOW TEST:14.253 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
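The dependency-circle spec builds three pods whose OwnerReferences form a loop (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, exactly as the UIDs printed above show) and verifies garbage collection still completes. A sketch of how such a reference is attached; the helper name is illustrative, and the UID must be the server-assigned UID of the owner object.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownPod makes owner the controlling owner of dependent, the same kind of
// reference the spec chains into a circle across three pods.
func ownPod(dependent, owner *corev1.Pod) {
	controller := true
	blockOwnerDeletion := true
	dependent.OwnerReferences = append(dependent.OwnerReferences, metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID, // must be the server-assigned UID of the owner
		Controller:         &controller,
		BlockOwnerDeletion: &blockOwnerDeletion,
	})
}

func main() {
	pod1 := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod1"}}
	pod2 := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod2"}}
	pod3 := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod3"}}
	// Close the circle: pod1 owned by pod3, pod2 by pod1, pod3 by pod2.
	ownPod(pod1, pod3)
	ownPod(pod2, pod1)
	ownPod(pod3, pod2)
}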
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:05:22.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan  5 12:05:22.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan  5 12:05:23.084: INFO: stderr: ""
Jan  5 12:05:23.084: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:05:23.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-n97p7" for this suite.
Jan  5 12:05:29.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:05:29.240: INFO: namespace: e2e-tests-kubectl-n97p7, resource: bindings, ignored listing per whitelist
Jan  5 12:05:29.278: INFO: namespace e2e-tests-kubectl-n97p7 deletion completed in 6.180636242s

• [SLOW TEST:6.626 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
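The kubectl spec simply shells out to kubectl api-versions against the kubeconfig and checks that "v1" appears in the output. The same list is available programmatically through the discovery client; a minimal sketch, assuming client-go of roughly the v1.13 vintage used by this run and the kubeconfig path shown in the log.

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

// listAPIVersions prints every group/version the API server advertises,
// which is what `kubectl api-versions` renders one per line.
func listAPIVersions(kubeconfig string) error {
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		return err
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		return err
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion) // the core group prints as just "v1"
		}
	}
	return nil
}

func main() {
	if err := listAPIVersions("/root/.kube/config"); err != nil {
		panic(err)
	}
}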
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:05:29.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0105 12:05:43.907818       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  5 12:05:43.907: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:05:43.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-h4xht" for this suite.
Jan  5 12:06:08.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:06:08.390: INFO: namespace: e2e-tests-gc-h4xht, resource: bindings, ignored listing per whitelist
Jan  5 12:06:08.613: INFO: namespace e2e-tests-gc-h4xht deletion completed in 24.693118018s

• [SLOW TEST:39.334 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
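The spec above gives half of one ReplicationController's pods a second owner and then deletes the first owner, expecting the doubly-owned pods to survive while the singly-owned ones are collected. Deletion propagation is what drives that behaviour; the sketch below shows how a foreground delete is requested, the mode in which an owner lingers with a foregroundDeletion finalizer until its sole dependents are gone. The namespace and controller names echo the log, and the call shape assumes the v1.13-era client-go signatures (no context argument, options passed by pointer).

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// deleteRCForeground deletes a ReplicationController with foreground
// propagation: the owner stays visible until the garbage collector has
// removed every dependent that has no other valid owner.
func deleteRCForeground(c kubernetes.Interface, namespace, name string) error {
	foreground := metav1.DeletePropagationForeground
	return c.CoreV1().ReplicationControllers(namespace).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &foreground,
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	_ = deleteRCForeground(client, "e2e-tests-gc-example", "simpletest-rc-to-be-deleted")
}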
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:06:08.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  5 12:06:09.025: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan  5 12:06:14.051: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  5 12:06:20.084: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  5 12:06:20.138: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-cqx7s,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cqx7s/deployments/test-cleanup-deployment,UID:c8b20994-2fb3-11ea-a994-fa163e34d433,ResourceVersion:17251322,Generation:1,CreationTimestamp:2020-01-05 12:06:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan  5 12:06:20.150: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:06:20.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-cqx7s" for this suite.
Jan  5 12:06:26.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:06:26.279: INFO: namespace: e2e-tests-deployment-cqx7s, resource: bindings, ignored listing per whitelist
Jan  5 12:06:26.349: INFO: namespace e2e-tests-deployment-cqx7s deletion completed in 6.173601647s

• [SLOW TEST:17.735 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
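The cleanup spec's Deployment dump above shows RevisionHistoryLimit:*0, so any ReplicaSet that has been superseded and scaled down is deleted immediately, which is exactly what "deployment should delete old replica sets" waits for. A sketch of setting that field; the name, labels, and image mirror the log, the remaining fields are assumptions.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cleanupDeployment keeps zero old ReplicaSets around: every superseded
// revision is removed by the deployment controller once it is scaled down.
func cleanupDeployment() *appsv1.Deployment {
	replicas := int32(1)
	historyLimit := int32(0)
	labels := map[string]string{"name": "cleanup-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &historyLimit,
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
}

func main() { fmt.Printf("%+v\n", cleanupDeployment()) }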
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:06:26.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  5 12:06:27.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd3e424f-2fb3-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-4sl2f" to be "success or failure"
Jan  5 12:06:27.877: INFO: Pod "downwardapi-volume-cd3e424f-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 41.759924ms
Jan  5 12:06:30.422: INFO: Pod "downwardapi-volume-cd3e424f-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.585812038s
Jan  5 12:06:32.451: INFO: Pod "downwardapi-volume-cd3e424f-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.615495238s
Jan  5 12:06:34.608: INFO: Pod "downwardapi-volume-cd3e424f-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.772247192s
Jan  5 12:06:36.651: INFO: Pod "downwardapi-volume-cd3e424f-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.81551157s
Jan  5 12:06:38.668: INFO: Pod "downwardapi-volume-cd3e424f-2fb3-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.832109899s
STEP: Saw pod success
Jan  5 12:06:38.668: INFO: Pod "downwardapi-volume-cd3e424f-2fb3-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:06:38.675: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cd3e424f-2fb3-11ea-910c-0242ac110004 container client-container: 
STEP: delete the pod
Jan  5 12:06:38.808: INFO: Waiting for pod downwardapi-volume-cd3e424f-2fb3-11ea-910c-0242ac110004 to disappear
Jan  5 12:06:38.827: INFO: Pod downwardapi-volume-cd3e424f-2fb3-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:06:38.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4sl2f" for this suite.
Jan  5 12:06:44.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:06:45.194: INFO: namespace: e2e-tests-projected-4sl2f, resource: bindings, ignored listing per whitelist
Jan  5 12:06:45.215: INFO: namespace e2e-tests-projected-4sl2f deletion completed in 6.37768831s

• [SLOW TEST:18.865 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
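The downward API spec leaves the container's cpu limit unset and expects the projected limits.cpu file to fall back to node allocatable cpu (4 on this node, per the Node dump earlier in the log). A sketch of a projected downwardAPI volume exposing that value; the file path, mount path, and image are assumptions, the container name mirrors the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPICPULimitPod projects the container's cpu limit into a file; when
// no limit is set on the container, the kubelet substitutes node allocatable
// cpu, which is what the spec above asserts on.
func downwardAPICPULimitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
				// No resources.limits.cpu set here, so the projected file
				// reports node allocatable cpu instead.
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
}

func main() { fmt.Printf("%+v\n", downwardAPICPULimitPod()) }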
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:06:45.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  5 12:06:45.646: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan  5 12:06:45.689: INFO: Number of nodes with available pods: 0
Jan  5 12:06:45.689: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan  5 12:06:46.047: INFO: Number of nodes with available pods: 0
Jan  5 12:06:46.047: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:06:47.080: INFO: Number of nodes with available pods: 0
Jan  5 12:06:47.080: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:06:48.066: INFO: Number of nodes with available pods: 0
Jan  5 12:06:48.066: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:06:49.076: INFO: Number of nodes with available pods: 0
Jan  5 12:06:49.077: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:06:50.069: INFO: Number of nodes with available pods: 0
Jan  5 12:06:50.069: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:06:51.539: INFO: Number of nodes with available pods: 0
Jan  5 12:06:51.540: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:06:52.091: INFO: Number of nodes with available pods: 0
Jan  5 12:06:52.091: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:06:53.126: INFO: Number of nodes with available pods: 0
Jan  5 12:06:53.126: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:06:54.075: INFO: Number of nodes with available pods: 0
Jan  5 12:06:54.075: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:06:55.061: INFO: Number of nodes with available pods: 1
Jan  5 12:06:55.061: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan  5 12:06:55.109: INFO: Number of nodes with available pods: 1
Jan  5 12:06:55.109: INFO: Number of running nodes: 0, number of available pods: 1
Jan  5 12:06:56.126: INFO: Number of nodes with available pods: 0
Jan  5 12:06:56.126: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan  5 12:06:56.192: INFO: Number of nodes with available pods: 0
Jan  5 12:06:56.192: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:06:57.210: INFO: Number of nodes with available pods: 0
Jan  5 12:06:57.210: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:06:58.262: INFO: Number of nodes with available pods: 0
Jan  5 12:06:58.262: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:06:59.208: INFO: Number of nodes with available pods: 0
Jan  5 12:06:59.208: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:07:00.209: INFO: Number of nodes with available pods: 0
Jan  5 12:07:00.209: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:07:01.210: INFO: Number of nodes with available pods: 0
Jan  5 12:07:01.210: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:07:02.201: INFO: Number of nodes with available pods: 0
Jan  5 12:07:02.201: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:07:03.266: INFO: Number of nodes with available pods: 0
Jan  5 12:07:03.266: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:07:04.207: INFO: Number of nodes with available pods: 0
Jan  5 12:07:04.207: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:07:05.215: INFO: Number of nodes with available pods: 0
Jan  5 12:07:05.215: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:07:06.221: INFO: Number of nodes with available pods: 0
Jan  5 12:07:06.222: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:07:07.206: INFO: Number of nodes with available pods: 0
Jan  5 12:07:07.206: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:07:08.241: INFO: Number of nodes with available pods: 0
Jan  5 12:07:08.241: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:07:09.211: INFO: Number of nodes with available pods: 0
Jan  5 12:07:09.211: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:07:10.207: INFO: Number of nodes with available pods: 0
Jan  5 12:07:10.207: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:07:11.213: INFO: Number of nodes with available pods: 0
Jan  5 12:07:11.213: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:07:12.205: INFO: Number of nodes with available pods: 1
Jan  5 12:07:12.205: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-g77d8, will wait for the garbage collector to delete the pods
Jan  5 12:07:12.351: INFO: Deleting DaemonSet.extensions daemon-set took: 78.678849ms
Jan  5 12:07:12.452: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.259936ms
Jan  5 12:07:18.978: INFO: Number of nodes with available pods: 0
Jan  5 12:07:18.978: INFO: Number of running nodes: 0, number of available pods: 0
Jan  5 12:07:19.052: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-g77d8/daemonsets","resourceVersion":"17251484"},"items":null}

Jan  5 12:07:19.058: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-g77d8/pods","resourceVersion":"17251484"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:07:19.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-g77d8" for this suite.
Jan  5 12:07:27.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:07:27.463: INFO: namespace: e2e-tests-daemonsets-g77d8, resource: bindings, ignored listing per whitelist
Jan  5 12:07:27.484: INFO: namespace e2e-tests-daemonsets-g77d8 deletion completed in 8.378673109s

• [SLOW TEST:42.269 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
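For illustration only: a rough sketch of the DaemonSet shape this "complex daemon" test drives, assuming k8s.io/api and sigs.k8s.io/yaml; labels and image are illustrative. The pod template carries a nodeSelector, so daemon pods land only on nodes labelled color=blue, and relabelling the node to green unschedules them again, matching the steps in the log.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	labels := map[string]string{"name": "daemon-set"}
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector:       &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Daemon pods are scheduled only onto nodes labelled color=blue;
					// changing the node label to green evicts them.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := yaml.Marshal(ds)
	fmt.Print(string(out))
}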
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:07:27.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan  5 12:07:27.801: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4vmwq,SelfLink:/api/v1/namespaces/e2e-tests-watch-4vmwq/configmaps/e2e-watch-test-watch-closed,UID:f109c5dd-2fb3-11ea-a994-fa163e34d433,ResourceVersion:17251518,Generation:0,CreationTimestamp:2020-01-05 12:07:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  5 12:07:27.802: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4vmwq,SelfLink:/api/v1/namespaces/e2e-tests-watch-4vmwq/configmaps/e2e-watch-test-watch-closed,UID:f109c5dd-2fb3-11ea-a994-fa163e34d433,ResourceVersion:17251519,Generation:0,CreationTimestamp:2020-01-05 12:07:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan  5 12:07:27.833: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4vmwq,SelfLink:/api/v1/namespaces/e2e-tests-watch-4vmwq/configmaps/e2e-watch-test-watch-closed,UID:f109c5dd-2fb3-11ea-a994-fa163e34d433,ResourceVersion:17251520,Generation:0,CreationTimestamp:2020-01-05 12:07:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  5 12:07:27.834: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-4vmwq,SelfLink:/api/v1/namespaces/e2e-tests-watch-4vmwq/configmaps/e2e-watch-test-watch-closed,UID:f109c5dd-2fb3-11ea-a994-fa163e34d433,ResourceVersion:17251521,Generation:0,CreationTimestamp:2020-01-05 12:07:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:07:27.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-4vmwq" for this suite.
Jan  5 12:07:33.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:07:34.127: INFO: namespace: e2e-tests-watch-4vmwq, resource: bindings, ignored listing per whitelist
Jan  5 12:07:34.149: INFO: namespace e2e-tests-watch-4vmwq deletion completed in 6.291793002s

• [SLOW TEST:6.665 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
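For illustration only: a standard-library sketch of the mechanism this watch test relies on. A watch can be reopened at the resourceVersion last observed by a previous watch, so MODIFIED and DELETED events that happened while the first watch was closed are still delivered. The proxy address, namespace, label selector and resourceVersion below are assumptions, not values from the run.

package main

import (
	"bufio"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical API endpoint, e.g. exposed by `kubectl proxy`.
	base := "http://127.0.0.1:8001"
	ns := "default"
	lastRV := "17251519" // resourceVersion seen by the previous (now closed) watch

	// Reopen the watch from that resourceVersion so no intervening events are lost.
	url := fmt.Sprintf(
		"%s/api/v1/namespaces/%s/configmaps?watch=true&resourceVersion=%s&labelSelector=watch-this-configmap%%3Dwatch-closed-and-restarted",
		base, ns, lastRV)
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Each line of the streamed response is one JSON watch event (ADDED/MODIFIED/DELETED).
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
}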
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:07:34.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-d44cm/configmap-test-f4eead5f-2fb3-11ea-910c-0242ac110004
STEP: Creating a pod to test consume configMaps
Jan  5 12:07:34.339: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4f025bd-2fb3-11ea-910c-0242ac110004" in namespace "e2e-tests-configmap-d44cm" to be "success or failure"
Jan  5 12:07:34.346: INFO: Pod "pod-configmaps-f4f025bd-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.465432ms
Jan  5 12:07:36.398: INFO: Pod "pod-configmaps-f4f025bd-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058843755s
Jan  5 12:07:38.417: INFO: Pod "pod-configmaps-f4f025bd-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077210669s
Jan  5 12:07:40.433: INFO: Pod "pod-configmaps-f4f025bd-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093924341s
Jan  5 12:07:42.457: INFO: Pod "pod-configmaps-f4f025bd-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117374975s
Jan  5 12:07:44.470: INFO: Pod "pod-configmaps-f4f025bd-2fb3-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.130307688s
Jan  5 12:07:46.628: INFO: Pod "pod-configmaps-f4f025bd-2fb3-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.288537524s
STEP: Saw pod success
Jan  5 12:07:46.628: INFO: Pod "pod-configmaps-f4f025bd-2fb3-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:07:46.638: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f4f025bd-2fb3-11ea-910c-0242ac110004 container env-test: 
STEP: delete the pod
Jan  5 12:07:47.063: INFO: Waiting for pod pod-configmaps-f4f025bd-2fb3-11ea-910c-0242ac110004 to disappear
Jan  5 12:07:47.083: INFO: Pod pod-configmaps-f4f025bd-2fb3-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:07:47.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-d44cm" for this suite.
Jan  5 12:07:53.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:07:53.305: INFO: namespace: e2e-tests-configmap-d44cm, resource: bindings, ignored listing per whitelist
Jan  5 12:07:53.366: INFO: namespace e2e-tests-configmap-d44cm deletion completed in 6.270798046s

• [SLOW TEST:19.217 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
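For illustration only: a minimal container spec of the kind this ConfigMap/environment test creates, assuming k8s.io/api and sigs.k8s.io/yaml; the ConfigMap name, key and variable name are made up.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	c := corev1.Container{
		Name:    "env-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env | grep CONFIG_DATA_1"},
		Env: []corev1.EnvVar{{
			// CONFIG_DATA_1 is populated from key data-1 of the referenced ConfigMap.
			Name: "CONFIG_DATA_1",
			ValueFrom: &corev1.EnvVarSource{
				ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
					Key:                  "data-1",
				},
			},
		}},
	}
	out, _ := yaml.Marshal(c)
	fmt.Print(string(out))
}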
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:07:53.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  5 12:07:53.666: INFO: Waiting up to 5m0s for pod "pod-00618608-2fb4-11ea-910c-0242ac110004" in namespace "e2e-tests-emptydir-p8p6l" to be "success or failure"
Jan  5 12:07:53.688: INFO: Pod "pod-00618608-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 21.303629ms
Jan  5 12:07:55.744: INFO: Pod "pod-00618608-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077137361s
Jan  5 12:07:57.761: INFO: Pod "pod-00618608-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09387989s
Jan  5 12:07:59.776: INFO: Pod "pod-00618608-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109409824s
Jan  5 12:08:02.131: INFO: Pod "pod-00618608-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.463935276s
Jan  5 12:08:04.313: INFO: Pod "pod-00618608-2fb4-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.646057304s
STEP: Saw pod success
Jan  5 12:08:04.313: INFO: Pod "pod-00618608-2fb4-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:08:04.332: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-00618608-2fb4-11ea-910c-0242ac110004 container test-container: 
STEP: delete the pod
Jan  5 12:08:04.472: INFO: Waiting for pod pod-00618608-2fb4-11ea-910c-0242ac110004 to disappear
Jan  5 12:08:04.485: INFO: Pod pod-00618608-2fb4-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:08:04.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-p8p6l" for this suite.
Jan  5 12:08:10.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:08:10.724: INFO: namespace: e2e-tests-emptydir-p8p6l, resource: bindings, ignored listing per whitelist
Jan  5 12:08:10.755: INFO: namespace e2e-tests-emptydir-p8p6l deletion completed in 6.25649547s

• [SLOW TEST:17.388 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
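For illustration only: a sketch of the pod shape behind the (non-root,0644,tmpfs) case, assuming k8s.io/api and sigs.k8s.io/yaml; the UID, image and commands are illustrative. The container runs as a non-root user, the emptyDir uses the Memory medium (tmpfs), and the test checks that a file written with mode 0644 reads back correctly.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	nonRoot := int64(1001)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a file, force mode 0644, and list it so the mode can be checked.
				Command:         []string{"sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRoot},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}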
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:08:10.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-0ac2b7f0-2fb4-11ea-910c-0242ac110004
STEP: Creating a pod to test consume secrets
Jan  5 12:08:10.968: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0ac3c108-2fb4-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-vs7mz" to be "success or failure"
Jan  5 12:08:11.032: INFO: Pod "pod-projected-secrets-0ac3c108-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 64.084605ms
Jan  5 12:08:13.051: INFO: Pod "pod-projected-secrets-0ac3c108-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082957684s
Jan  5 12:08:15.064: INFO: Pod "pod-projected-secrets-0ac3c108-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09606891s
Jan  5 12:08:17.133: INFO: Pod "pod-projected-secrets-0ac3c108-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165101221s
Jan  5 12:08:19.149: INFO: Pod "pod-projected-secrets-0ac3c108-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181544167s
Jan  5 12:08:21.161: INFO: Pod "pod-projected-secrets-0ac3c108-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.193446764s
Jan  5 12:08:23.182: INFO: Pod "pod-projected-secrets-0ac3c108-2fb4-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.214197468s
STEP: Saw pod success
Jan  5 12:08:23.182: INFO: Pod "pod-projected-secrets-0ac3c108-2fb4-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:08:23.188: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-0ac3c108-2fb4-11ea-910c-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Jan  5 12:08:23.315: INFO: Waiting for pod pod-projected-secrets-0ac3c108-2fb4-11ea-910c-0242ac110004 to disappear
Jan  5 12:08:23.354: INFO: Pod pod-projected-secrets-0ac3c108-2fb4-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:08:23.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vs7mz" for this suite.
Jan  5 12:08:29.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:08:29.871: INFO: namespace: e2e-tests-projected-vs7mz, resource: bindings, ignored listing per whitelist
Jan  5 12:08:29.935: INFO: namespace e2e-tests-projected-vs7mz deletion completed in 6.437023527s

• [SLOW TEST:19.179 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
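For illustration only: the projected-secret volume shape this "mappings and Item Mode" test exercises, assuming k8s.io/api and sigs.k8s.io/yaml; the secret name, key, path and mode are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
						// Map key data-1 to a different file name and give the file mode 0400.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
					},
				}},
			},
		},
	}
	out, _ := yaml.Marshal(vol)
	fmt.Print(string(out))
}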
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:08:29.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-rjkft
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan  5 12:08:30.178: INFO: Found 0 stateful pods, waiting for 3
Jan  5 12:08:40.195: INFO: Found 2 stateful pods, waiting for 3
Jan  5 12:08:50.305: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 12:08:50.305: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 12:08:50.305: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  5 12:09:00.263: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 12:09:00.264: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 12:09:00.264: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  5 12:09:00.342: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  5 12:09:10.642: INFO: Updating stateful set ss2
Jan  5 12:09:10.698: INFO: Waiting for Pod e2e-tests-statefulset-rjkft/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan  5 12:09:21.084: INFO: Found 1 stateful pods, waiting for 3
Jan  5 12:09:31.107: INFO: Found 2 stateful pods, waiting for 3
Jan  5 12:09:41.116: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 12:09:41.116: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 12:09:41.116: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  5 12:09:51.106: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 12:09:51.106: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 12:09:51.106: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  5 12:09:51.173: INFO: Updating stateful set ss2
Jan  5 12:09:51.275: INFO: Waiting for Pod e2e-tests-statefulset-rjkft/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 12:10:01.301: INFO: Waiting for Pod e2e-tests-statefulset-rjkft/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 12:10:11.321: INFO: Updating stateful set ss2
Jan  5 12:10:11.346: INFO: Waiting for StatefulSet e2e-tests-statefulset-rjkft/ss2 to complete update
Jan  5 12:10:11.346: INFO: Waiting for Pod e2e-tests-statefulset-rjkft/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 12:10:21.402: INFO: Waiting for StatefulSet e2e-tests-statefulset-rjkft/ss2 to complete update
Jan  5 12:10:21.402: INFO: Waiting for Pod e2e-tests-statefulset-rjkft/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 12:10:31.386: INFO: Waiting for StatefulSet e2e-tests-statefulset-rjkft/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  5 12:10:41.378: INFO: Deleting all statefulset in ns e2e-tests-statefulset-rjkft
Jan  5 12:10:41.386: INFO: Scaling statefulset ss2 to 0
Jan  5 12:11:21.594: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 12:11:21.609: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:11:21.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-rjkft" for this suite.
Jan  5 12:11:29.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:11:30.071: INFO: namespace: e2e-tests-statefulset-rjkft, resource: bindings, ignored listing per whitelist
Jan  5 12:11:30.075: INFO: namespace e2e-tests-statefulset-rjkft deletion completed in 8.299680126s

• [SLOW TEST:180.138 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
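For illustration only: the update strategy that drives the canary and phased rollout above, assuming k8s.io/api and sigs.k8s.io/yaml. With 3 replicas and partition=2, only ordinal 2 (ss2-2 in the log) picks up a new pod template; lowering the partition step by step rolls the change out in phases until partition=0 updates every pod.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Pods with ordinal >= partition are updated to the new revision;
	// pods below the partition keep (or are restored to) the old revision.
	partition := int32(2)
	strategy := appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	out, _ := yaml.Marshal(strategy)
	fmt.Print(string(out))
}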
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:11:30.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Jan  5 12:11:30.375: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  5 12:11:35.399: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:11:37.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-fmfxt" for this suite.
Jan  5 12:11:46.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:11:46.575: INFO: namespace: e2e-tests-replication-controller-fmfxt, resource: bindings, ignored listing per whitelist
Jan  5 12:11:46.635: INFO: namespace e2e-tests-replication-controller-fmfxt deletion completed in 9.346111801s

• [SLOW TEST:16.559 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:11:46.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-27s9
STEP: Creating a pod to test atomic-volume-subpath
Jan  5 12:11:49.013: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-27s9" in namespace "e2e-tests-subpath-5vdst" to be "success or failure"
Jan  5 12:11:49.029: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.915066ms
Jan  5 12:11:51.227: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21351922s
Jan  5 12:11:53.243: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229488718s
Jan  5 12:11:56.050: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.036296454s
Jan  5 12:11:58.127: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.113756072s
Jan  5 12:12:00.192: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.179115012s
Jan  5 12:12:02.213: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.200163059s
Jan  5 12:12:04.362: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.348781928s
Jan  5 12:12:06.385: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Running", Reason="", readiness=true. Elapsed: 17.371293311s
Jan  5 12:12:08.402: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Running", Reason="", readiness=false. Elapsed: 19.388337912s
Jan  5 12:12:10.421: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Running", Reason="", readiness=false. Elapsed: 21.407683895s
Jan  5 12:12:12.439: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Running", Reason="", readiness=false. Elapsed: 23.426056426s
Jan  5 12:12:14.462: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Running", Reason="", readiness=false. Elapsed: 25.448692811s
Jan  5 12:12:16.496: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Running", Reason="", readiness=false. Elapsed: 27.482754757s
Jan  5 12:12:18.543: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Running", Reason="", readiness=false. Elapsed: 29.530203964s
Jan  5 12:12:20.577: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Running", Reason="", readiness=false. Elapsed: 31.564253213s
Jan  5 12:12:22.625: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Running", Reason="", readiness=false. Elapsed: 33.611807559s
Jan  5 12:12:24.662: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Running", Reason="", readiness=false. Elapsed: 35.648678387s
Jan  5 12:12:26.679: INFO: Pod "pod-subpath-test-configmap-27s9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.666097853s
STEP: Saw pod success
Jan  5 12:12:26.679: INFO: Pod "pod-subpath-test-configmap-27s9" satisfied condition "success or failure"
Jan  5 12:12:26.685: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-27s9 container test-container-subpath-configmap-27s9: 
STEP: delete the pod
Jan  5 12:12:27.800: INFO: Waiting for pod pod-subpath-test-configmap-27s9 to disappear
Jan  5 12:12:27.829: INFO: Pod pod-subpath-test-configmap-27s9 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-27s9
Jan  5 12:12:27.829: INFO: Deleting pod "pod-subpath-test-configmap-27s9" in namespace "e2e-tests-subpath-5vdst"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:12:27.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-5vdst" for this suite.
Jan  5 12:12:33.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:12:34.072: INFO: namespace: e2e-tests-subpath-5vdst, resource: bindings, ignored listing per whitelist
Jan  5 12:12:34.298: INFO: namespace e2e-tests-subpath-5vdst deletion completed in 6.446722184s

• [SLOW TEST:47.663 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
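For illustration only: a container sketch of the subPath mount this test uses, assuming k8s.io/api and sigs.k8s.io/yaml; the mount path and key name are illustrative. A single key of a ConfigMap volume is mounted over the path of a file that already exists in the container image.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	c := corev1.Container{
		Name:    "test-container-subpath",
		Image:   "busybox",
		Command: []string{"sh", "-c", "cat /etc/resolv.conf"},
		VolumeMounts: []corev1.VolumeMount{{
			Name: "test-volume",
			// Only the named key is mounted, overlaying an existing file path.
			MountPath: "/etc/resolv.conf",
			SubPath:   "configmap-key",
		}},
	}
	out, _ := yaml.Marshal(c)
	fmt.Print(string(out))
}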
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:12:34.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  5 12:12:34.552: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7db44e3-2fb4-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-rpldf" to be "success or failure"
Jan  5 12:12:34.655: INFO: Pod "downwardapi-volume-a7db44e3-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 103.269973ms
Jan  5 12:12:36.750: INFO: Pod "downwardapi-volume-a7db44e3-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197887748s
Jan  5 12:12:38.775: INFO: Pod "downwardapi-volume-a7db44e3-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222566843s
Jan  5 12:12:41.492: INFO: Pod "downwardapi-volume-a7db44e3-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.939634254s
Jan  5 12:12:43.507: INFO: Pod "downwardapi-volume-a7db44e3-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.954879956s
Jan  5 12:12:45.520: INFO: Pod "downwardapi-volume-a7db44e3-2fb4-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.968293605s
STEP: Saw pod success
Jan  5 12:12:45.520: INFO: Pod "downwardapi-volume-a7db44e3-2fb4-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:12:45.530: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a7db44e3-2fb4-11ea-910c-0242ac110004 container client-container: 
STEP: delete the pod
Jan  5 12:12:46.820: INFO: Waiting for pod downwardapi-volume-a7db44e3-2fb4-11ea-910c-0242ac110004 to disappear
Jan  5 12:12:46.839: INFO: Pod downwardapi-volume-a7db44e3-2fb4-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:12:46.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rpldf" for this suite.
Jan  5 12:12:52.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:12:53.092: INFO: namespace: e2e-tests-projected-rpldf, resource: bindings, ignored listing per whitelist
Jan  5 12:12:53.182: INFO: namespace e2e-tests-projected-rpldf deletion completed in 6.325094189s

• [SLOW TEST:18.883 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:12:53.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan  5 12:12:53.636: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:12:53.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2l7wl" for this suite.
Jan  5 12:12:59.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:13:00.116: INFO: namespace: e2e-tests-kubectl-2l7wl, resource: bindings, ignored listing per whitelist
Jan  5 12:13:00.118: INFO: namespace e2e-tests-kubectl-2l7wl deletion completed in 6.327821785s

• [SLOW TEST:6.936 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:13:00.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  5 12:13:00.309: INFO: Waiting up to 5m0s for pod "downward-api-b739cbc0-2fb4-11ea-910c-0242ac110004" in namespace "e2e-tests-downward-api-qmv8d" to be "success or failure"
Jan  5 12:13:00.327: INFO: Pod "downward-api-b739cbc0-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 18.829992ms
Jan  5 12:13:02.912: INFO: Pod "downward-api-b739cbc0-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.603520919s
Jan  5 12:13:04.946: INFO: Pod "downward-api-b739cbc0-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.637639433s
Jan  5 12:13:07.071: INFO: Pod "downward-api-b739cbc0-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.762771639s
Jan  5 12:13:09.469: INFO: Pod "downward-api-b739cbc0-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.160789139s
Jan  5 12:13:11.485: INFO: Pod "downward-api-b739cbc0-2fb4-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.176351227s
STEP: Saw pod success
Jan  5 12:13:11.485: INFO: Pod "downward-api-b739cbc0-2fb4-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:13:11.492: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-b739cbc0-2fb4-11ea-910c-0242ac110004 container dapi-container: 
STEP: delete the pod
Jan  5 12:13:11.918: INFO: Waiting for pod downward-api-b739cbc0-2fb4-11ea-910c-0242ac110004 to disappear
Jan  5 12:13:11.941: INFO: Pod downward-api-b739cbc0-2fb4-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:13:11.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qmv8d" for this suite.
Jan  5 12:13:18.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:13:18.282: INFO: namespace: e2e-tests-downward-api-qmv8d, resource: bindings, ignored listing per whitelist
Jan  5 12:13:18.299: INFO: namespace e2e-tests-downward-api-qmv8d deletion completed in 6.289224547s

• [SLOW TEST:18.181 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
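For illustration only: the downward-API environment wiring this test checks, assuming k8s.io/api and sigs.k8s.io/yaml; variable names are made up. Pod name, namespace and IP are injected as env vars via fieldRef.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		// Each variable is filled in by the kubelet from pod metadata/status.
		Env: []corev1.EnvVar{
			fieldEnv("POD_NAME", "metadata.name"),
			fieldEnv("POD_NAMESPACE", "metadata.namespace"),
			fieldEnv("POD_IP", "status.podIP"),
		},
	}
	out, _ := yaml.Marshal(c)
	fmt.Print(string(out))
}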
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:13:18.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-c20f6fe1-2fb4-11ea-910c-0242ac110004
STEP: Creating a pod to test consume secrets
Jan  5 12:13:18.552: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c211e14f-2fb4-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-tdmh8" to be "success or failure"
Jan  5 12:13:18.584: INFO: Pod "pod-projected-secrets-c211e14f-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 31.516444ms
Jan  5 12:13:20.607: INFO: Pod "pod-projected-secrets-c211e14f-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054391856s
Jan  5 12:13:22.634: INFO: Pod "pod-projected-secrets-c211e14f-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081136992s
Jan  5 12:13:24.821: INFO: Pod "pod-projected-secrets-c211e14f-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.268050399s
Jan  5 12:13:26.934: INFO: Pod "pod-projected-secrets-c211e14f-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.381698261s
Jan  5 12:13:28.951: INFO: Pod "pod-projected-secrets-c211e14f-2fb4-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.398235277s
STEP: Saw pod success
Jan  5 12:13:28.951: INFO: Pod "pod-projected-secrets-c211e14f-2fb4-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:13:28.963: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-c211e14f-2fb4-11ea-910c-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Jan  5 12:13:29.531: INFO: Waiting for pod pod-projected-secrets-c211e14f-2fb4-11ea-910c-0242ac110004 to disappear
Jan  5 12:13:29.543: INFO: Pod pod-projected-secrets-c211e14f-2fb4-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:13:29.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tdmh8" for this suite.
Jan  5 12:13:36.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:13:36.151: INFO: namespace: e2e-tests-projected-tdmh8, resource: bindings, ignored listing per whitelist
Jan  5 12:13:36.204: INFO: namespace e2e-tests-projected-tdmh8 deletion completed in 6.64858212s

• [SLOW TEST:17.905 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:13:36.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-ccbc2ba0-2fb4-11ea-910c-0242ac110004
STEP: Creating a pod to test consume secrets
Jan  5 12:13:36.393: INFO: Waiting up to 5m0s for pod "pod-secrets-ccbd4c22-2fb4-11ea-910c-0242ac110004" in namespace "e2e-tests-secrets-rq8t7" to be "success or failure"
Jan  5 12:13:36.422: INFO: Pod "pod-secrets-ccbd4c22-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 28.245993ms
Jan  5 12:13:38.453: INFO: Pod "pod-secrets-ccbd4c22-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059323915s
Jan  5 12:13:40.473: INFO: Pod "pod-secrets-ccbd4c22-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079050523s
Jan  5 12:13:43.360: INFO: Pod "pod-secrets-ccbd4c22-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.96610231s
Jan  5 12:13:45.378: INFO: Pod "pod-secrets-ccbd4c22-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.984868204s
Jan  5 12:13:47.396: INFO: Pod "pod-secrets-ccbd4c22-2fb4-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.002294018s
STEP: Saw pod success
Jan  5 12:13:47.396: INFO: Pod "pod-secrets-ccbd4c22-2fb4-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:13:47.402: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-ccbd4c22-2fb4-11ea-910c-0242ac110004 container secret-env-test: 
STEP: delete the pod
Jan  5 12:13:47.578: INFO: Waiting for pod pod-secrets-ccbd4c22-2fb4-11ea-910c-0242ac110004 to disappear
Jan  5 12:13:47.589: INFO: Pod pod-secrets-ccbd4c22-2fb4-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:13:47.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rq8t7" for this suite.
Jan  5 12:13:53.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:13:53.837: INFO: namespace: e2e-tests-secrets-rq8t7, resource: bindings, ignored listing per whitelist
Jan  5 12:13:54.035: INFO: namespace e2e-tests-secrets-rq8t7 deletion completed in 6.430887861s

• [SLOW TEST:17.830 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:13:54.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-d7737b8d-2fb4-11ea-910c-0242ac110004
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:14:08.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-d2hhr" for this suite.
Jan  5 12:14:32.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:14:32.773: INFO: namespace: e2e-tests-configmap-d2hhr, resource: bindings, ignored listing per whitelist
Jan  5 12:14:33.083: INFO: namespace e2e-tests-configmap-d2hhr deletion completed in 24.525816018s

• [SLOW TEST:39.048 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
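For illustration only: a ConfigMap carrying both text and binary data, as exercised by the test above; names and bytes are illustrative, assuming k8s.io/api and sigs.k8s.io/yaml.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-example"},
		// data holds UTF-8 text; binaryData holds arbitrary bytes (base64 on the wire).
		// Both appear as files when the ConfigMap is mounted as a volume.
		Data:       map[string]string{"data": "value"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}},
	}
	out, _ := yaml.Marshal(cm)
	fmt.Print(string(out))
}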
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:14:33.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan  5 12:14:33.263: INFO: Waiting up to 5m0s for pod "var-expansion-eea3489a-2fb4-11ea-910c-0242ac110004" in namespace "e2e-tests-var-expansion-gc7wx" to be "success or failure"
Jan  5 12:14:33.268: INFO: Pod "var-expansion-eea3489a-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.305369ms
Jan  5 12:14:35.522: INFO: Pod "var-expansion-eea3489a-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258712902s
Jan  5 12:14:37.543: INFO: Pod "var-expansion-eea3489a-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280061803s
Jan  5 12:14:39.565: INFO: Pod "var-expansion-eea3489a-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.302033655s
Jan  5 12:14:41.652: INFO: Pod "var-expansion-eea3489a-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.388689497s
Jan  5 12:14:43.887: INFO: Pod "var-expansion-eea3489a-2fb4-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.624319413s
Jan  5 12:14:45.913: INFO: Pod "var-expansion-eea3489a-2fb4-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.649667794s
STEP: Saw pod success
Jan  5 12:14:45.913: INFO: Pod "var-expansion-eea3489a-2fb4-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:14:45.921: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-eea3489a-2fb4-11ea-910c-0242ac110004 container dapi-container: 
STEP: delete the pod
Jan  5 12:14:46.595: INFO: Waiting for pod var-expansion-eea3489a-2fb4-11ea-910c-0242ac110004 to disappear
Jan  5 12:14:46.621: INFO: Pod var-expansion-eea3489a-2fb4-11ea-910c-0242ac110004 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:14:46.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-gc7wx" for this suite.
Jan  5 12:14:52.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:14:52.970: INFO: namespace: e2e-tests-var-expansion-gc7wx, resource: bindings, ignored listing per whitelist
Jan  5 12:14:52.976: INFO: namespace e2e-tests-var-expansion-gc7wx deletion completed in 6.331990485s

• [SLOW TEST:19.892 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
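A minimal sketch (assumed, not the test's actual source; image and names are placeholders) of the substitution being verified above: the container's args reference $(TEST_VAR), which the kubelet expands from the container's env list before the container starts.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c"},
				// $(TEST_VAR) is substituted by the kubelet, not by the shell.
				Args: []string{"echo expanded value: $(TEST_VAR)"},
				Env:  []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```

The test then reads the container's log and checks that the expanded value, not the literal $(TEST_VAR), was printed.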
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:14:52.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan  5 12:14:53.176: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix786677973/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:14:53.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8tbsr" for this suite.
Jan  5 12:14:59.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:14:59.473: INFO: namespace: e2e-tests-kubectl-8tbsr, resource: bindings, ignored listing per whitelist
Jan  5 12:14:59.521: INFO: namespace e2e-tests-kubectl-8tbsr deletion completed in 6.234780345s

• [SLOW TEST:6.545 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
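The step above starts kubectl proxy with --unix-socket and then reads /api/ through that socket. A small stdlib-only sketch of that second half, assuming a proxy is already listening on a placeholder socket path:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Assumes a proxy was started separately, e.g.:
	//   kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock
	const socketPath = "/tmp/kubectl-proxy.sock" // placeholder path

	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the host in the URL and dial the unix socket instead.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", socketPath)
			},
		},
	}

	resp, err := client.Get("http://unix/api/") // the host part is arbitrary
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // expect the APIVersions object relayed from the apiserver
}
```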
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:14:59.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  5 12:15:10.608: INFO: Successfully updated pod "pod-update-fe848590-2fb4-11ea-910c-0242ac110004"
STEP: verifying the updated pod is in kubernetes
Jan  5 12:15:10.639: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:15:10.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-628tq" for this suite.
Jan  5 12:15:34.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:15:34.999: INFO: namespace: e2e-tests-pods-628tq, resource: bindings, ignored listing per whitelist
Jan  5 12:15:35.046: INFO: namespace e2e-tests-pods-628tq deletion completed in 24.397081796s

• [SLOW TEST:35.524 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
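A hedged sketch of the update flow this test exercises (fetch the live pod, mutate a label, write it back, then re-read it), using a recent client-go; the namespace and pod name are placeholders and the e2e framework's own client calls differ in detail.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // kubeconfig path from the log
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	ns, name := "default", "pod-update-demo" // placeholders

	// Fetch the live object, mutate a label, and write it back.
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated"

	updated, err := cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
	if err != nil {
		panic(err) // a Conflict here means the object changed since Get; re-fetch and retry
	}
	fmt.Println("updated pod", updated.Name, "labels:", updated.Labels)
}
```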
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:15:35.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-2wlt
STEP: Creating a pod to test atomic-volume-subpath
Jan  5 12:15:35.257: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-2wlt" in namespace "e2e-tests-subpath-l88bs" to be "success or failure"
Jan  5 12:15:35.268: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Pending", Reason="", readiness=false. Elapsed: 11.350956ms
Jan  5 12:15:37.294: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036750653s
Jan  5 12:15:39.323: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066319334s
Jan  5 12:15:41.776: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.519136253s
Jan  5 12:15:43.796: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.539674745s
Jan  5 12:15:45.866: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.609445775s
Jan  5 12:15:47.897: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.640257698s
Jan  5 12:15:49.926: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.669077712s
Jan  5 12:15:51.946: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Pending", Reason="", readiness=false. Elapsed: 16.689298565s
Jan  5 12:15:54.012: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Running", Reason="", readiness=false. Elapsed: 18.755482437s
Jan  5 12:15:56.031: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Running", Reason="", readiness=false. Elapsed: 20.774231734s
Jan  5 12:15:58.052: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Running", Reason="", readiness=false. Elapsed: 22.794864581s
Jan  5 12:16:00.071: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Running", Reason="", readiness=false. Elapsed: 24.814428135s
Jan  5 12:16:02.087: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Running", Reason="", readiness=false. Elapsed: 26.830333017s
Jan  5 12:16:04.099: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Running", Reason="", readiness=false. Elapsed: 28.842449218s
Jan  5 12:16:06.116: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Running", Reason="", readiness=false. Elapsed: 30.859391579s
Jan  5 12:16:08.173: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Running", Reason="", readiness=false. Elapsed: 32.916130372s
Jan  5 12:16:10.191: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Running", Reason="", readiness=false. Elapsed: 34.933835549s
Jan  5 12:16:12.216: INFO: Pod "pod-subpath-test-downwardapi-2wlt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.959365748s
STEP: Saw pod success
Jan  5 12:16:12.216: INFO: Pod "pod-subpath-test-downwardapi-2wlt" satisfied condition "success or failure"
Jan  5 12:16:12.221: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-2wlt container test-container-subpath-downwardapi-2wlt: 
STEP: delete the pod
Jan  5 12:16:13.052: INFO: Waiting for pod pod-subpath-test-downwardapi-2wlt to disappear
Jan  5 12:16:13.068: INFO: Pod pod-subpath-test-downwardapi-2wlt no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-2wlt
Jan  5 12:16:13.068: INFO: Deleting pod "pod-subpath-test-downwardapi-2wlt" in namespace "e2e-tests-subpath-l88bs"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:16:13.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-l88bs" for this suite.
Jan  5 12:16:19.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:16:19.457: INFO: namespace: e2e-tests-subpath-l88bs, resource: bindings, ignored listing per whitelist
Jan  5 12:16:19.496: INFO: namespace e2e-tests-subpath-l88bs deletion completed in 6.416529425s

• [SLOW TEST:44.449 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
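A rough sketch (not the test's source; names and image are placeholders) of a pod shaped like the one above: a downwardAPI volume whose single file is mounted through volumeMounts[].subPath, which is the atomic-writer subpath behavior this test exercises.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-downwardapi-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "downward-vol",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "for i in $(seq 1 30); do cat /probe/podname; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "downward-vol",
					MountPath: "/probe/podname",
					SubPath:   "podname", // mount a single file out of the volume
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```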
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:16:19.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  5 12:16:19.924: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan  5 12:16:19.933: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4hbt4/daemonsets","resourceVersion":"17252802"},"items":null}

Jan  5 12:16:19.938: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4hbt4/pods","resourceVersion":"17252802"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:16:20.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-4hbt4" for this suite.
Jan  5 12:16:26.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:16:26.177: INFO: namespace: e2e-tests-daemonsets-4hbt4, resource: bindings, ignored listing per whitelist
Jan  5 12:16:26.206: INFO: namespace e2e-tests-daemonsets-4hbt4 deletion completed in 6.19859426s

S [SKIPPING] [6.709 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan  5 12:16:19.924: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:16:26.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  5 12:16:26.367: INFO: Waiting up to 5m0s for pod "downwardapi-volume-320d313d-2fb5-11ea-910c-0242ac110004" in namespace "e2e-tests-downward-api-8dghn" to be "success or failure"
Jan  5 12:16:26.391: INFO: Pod "downwardapi-volume-320d313d-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 24.249374ms
Jan  5 12:16:28.494: INFO: Pod "downwardapi-volume-320d313d-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127013817s
Jan  5 12:16:30.522: INFO: Pod "downwardapi-volume-320d313d-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154629366s
Jan  5 12:16:32.711: INFO: Pod "downwardapi-volume-320d313d-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343950799s
Jan  5 12:16:34.724: INFO: Pod "downwardapi-volume-320d313d-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.357202249s
Jan  5 12:16:36.748: INFO: Pod "downwardapi-volume-320d313d-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.381254728s
Jan  5 12:16:38.766: INFO: Pod "downwardapi-volume-320d313d-2fb5-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.398650664s
STEP: Saw pod success
Jan  5 12:16:38.766: INFO: Pod "downwardapi-volume-320d313d-2fb5-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:16:38.772: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-320d313d-2fb5-11ea-910c-0242ac110004 container client-container: 
STEP: delete the pod
Jan  5 12:16:39.004: INFO: Waiting for pod downwardapi-volume-320d313d-2fb5-11ea-910c-0242ac110004 to disappear
Jan  5 12:16:39.136: INFO: Pod downwardapi-volume-320d313d-2fb5-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:16:39.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8dghn" for this suite.
Jan  5 12:16:45.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:16:45.263: INFO: namespace: e2e-tests-downward-api-8dghn, resource: bindings, ignored listing per whitelist
Jan  5 12:16:45.424: INFO: namespace e2e-tests-downward-api-8dghn deletion completed in 6.272005053s

• [SLOW TEST:19.219 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
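A minimal sketch (placeholders throughout) of the downward API volume plugin usage this test checks: the container's own limits.memory is projected into a file via resourceFieldRef, and the container reads it back.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```

The test then compares the file's contents against the limit declared in the pod spec.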
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:16:45.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-bgg84
Jan  5 12:16:55.642: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-bgg84
STEP: checking the pod's current state and verifying that restartCount is present
Jan  5 12:16:55.648: INFO: Initial restart count of pod liveness-http is 0
Jan  5 12:17:15.952: INFO: Restart count of pod e2e-tests-container-probe-bgg84/liveness-http is now 1 (20.304023937s elapsed)
Jan  5 12:17:34.628: INFO: Restart count of pod e2e-tests-container-probe-bgg84/liveness-http is now 2 (38.980033118s elapsed)
Jan  5 12:17:53.246: INFO: Restart count of pod e2e-tests-container-probe-bgg84/liveness-http is now 3 (57.598398989s elapsed)
Jan  5 12:18:15.532: INFO: Restart count of pod e2e-tests-container-probe-bgg84/liveness-http is now 4 (1m19.883988142s elapsed)
Jan  5 12:19:24.424: INFO: Restart count of pod e2e-tests-container-probe-bgg84/liveness-http is now 5 (2m28.775756618s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:19:24.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-bgg84" for this suite.
Jan  5 12:19:30.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:19:30.734: INFO: namespace: e2e-tests-container-probe-bgg84, resource: bindings, ignored listing per whitelist
Jan  5 12:19:30.773: INFO: namespace e2e-tests-container-probe-bgg84 deletion completed in 6.186132375s

• [SLOW TEST:165.347 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
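A hedged sketch of the kind of pod behind this test: an HTTP liveness probe that keeps failing, so the kubelet restarts the container and the pod's restartCount grows monotonically, which is what the log above records. Image, port, and path are placeholders, not the probe image the e2e suite actually uses.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	liveness := corev1.Probe{
		InitialDelaySeconds: 15,
		FailureThreshold:    1,
	}
	// HTTPGet is promoted from the embedded handler struct, so this assignment
	// works across k8s.io/api versions.
	liveness.HTTPGet = &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:          "liveness",
				Image:         "example.test/liveness:latest", // placeholder image that fails /healthz after a while
				LivenessProbe: &liveness,
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```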
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:19:30.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  5 12:19:30.970: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:19:53.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-z9bjz" for this suite.
Jan  5 12:20:17.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:20:18.044: INFO: namespace: e2e-tests-init-container-z9bjz, resource: bindings, ignored listing per whitelist
Jan  5 12:20:18.107: INFO: namespace e2e-tests-init-container-z9bjz deletion completed in 24.298839355s

• [SLOW TEST:47.334 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
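A minimal sketch (names and image are placeholders) of a RestartAlways pod with init containers: both init containers must run to completion, in order, before the app container is started, which is the invocation order this test asserts on.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```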
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:20:18.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  5 12:20:29.215: INFO: Successfully updated pod "annotationupdatebc5f07e2-2fb5-11ea-910c-0242ac110004"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:20:31.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8ggjx" for this suite.
Jan  5 12:20:55.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:20:55.581: INFO: namespace: e2e-tests-projected-8ggjx, resource: bindings, ignored listing per whitelist
Jan  5 12:20:55.745: INFO: namespace e2e-tests-projected-8ggjx deletion completed in 24.307972144s

• [SLOW TEST:37.638 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:20:55.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  5 12:20:55.985: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2c24bc7-2fb5-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-tplvk" to be "success or failure"
Jan  5 12:20:56.012: INFO: Pod "downwardapi-volume-d2c24bc7-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 26.483031ms
Jan  5 12:20:58.138: INFO: Pod "downwardapi-volume-d2c24bc7-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153226699s
Jan  5 12:21:00.165: INFO: Pod "downwardapi-volume-d2c24bc7-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179467896s
Jan  5 12:21:02.200: INFO: Pod "downwardapi-volume-d2c24bc7-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.214547387s
Jan  5 12:21:04.502: INFO: Pod "downwardapi-volume-d2c24bc7-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.517273138s
Jan  5 12:21:06.532: INFO: Pod "downwardapi-volume-d2c24bc7-2fb5-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.546594063s
STEP: Saw pod success
Jan  5 12:21:06.532: INFO: Pod "downwardapi-volume-d2c24bc7-2fb5-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:21:06.538: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d2c24bc7-2fb5-11ea-910c-0242ac110004 container client-container: 
STEP: delete the pod
Jan  5 12:21:06.681: INFO: Waiting for pod downwardapi-volume-d2c24bc7-2fb5-11ea-910c-0242ac110004 to disappear
Jan  5 12:21:06.706: INFO: Pod downwardapi-volume-d2c24bc7-2fb5-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:21:06.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tplvk" for this suite.
Jan  5 12:21:12.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:21:12.976: INFO: namespace: e2e-tests-projected-tplvk, resource: bindings, ignored listing per whitelist
Jan  5 12:21:13.074: INFO: namespace e2e-tests-projected-tplvk deletion completed in 6.313961221s

• [SLOW TEST:17.327 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:21:13.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-dd10be30-2fb5-11ea-910c-0242ac110004
STEP: Creating configMap with name cm-test-opt-upd-dd10be92-2fb5-11ea-910c-0242ac110004
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-dd10be30-2fb5-11ea-910c-0242ac110004
STEP: Updating configmap cm-test-opt-upd-dd10be92-2fb5-11ea-910c-0242ac110004
STEP: Creating configMap with name cm-test-opt-create-dd10beb8-2fb5-11ea-910c-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:21:31.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-v675t" for this suite.
Jan  5 12:21:57.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:21:57.761: INFO: namespace: e2e-tests-configmap-v675t, resource: bindings, ignored listing per whitelist
Jan  5 12:21:57.796: INFO: namespace e2e-tests-configmap-v675t deletion completed in 26.144943721s

• [SLOW TEST:44.722 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
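A rough sketch (placeholders throughout) of the shape of the pod above: configMap volumes marked optional, so the pod keeps running while one referenced ConfigMap is deleted and another is created later, and the mounted files track those changes.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-optional-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{
					Name: "delete-cm-volume",
					VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             &optional, // pod starts even if this ConfigMap is gone
					}},
				},
				{
					Name: "create-cm-volume",
					VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
						Optional:             &optional, // ConfigMap may be created after the pod
					}},
				},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do ls /etc/cm-del /etc/cm-create; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "delete-cm-volume", MountPath: "/etc/cm-del"},
					{Name: "create-cm-volume", MountPath: "/etc/cm-create"},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```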
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:21:57.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f7bcd965-2fb5-11ea-910c-0242ac110004
STEP: Creating a pod to test consume configMaps
Jan  5 12:21:58.055: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f7bdd773-2fb5-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-wqvw7" to be "success or failure"
Jan  5 12:21:58.154: INFO: Pod "pod-projected-configmaps-f7bdd773-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 98.507652ms
Jan  5 12:22:00.277: INFO: Pod "pod-projected-configmaps-f7bdd773-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22149317s
Jan  5 12:22:02.309: INFO: Pod "pod-projected-configmaps-f7bdd773-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253833634s
Jan  5 12:22:04.333: INFO: Pod "pod-projected-configmaps-f7bdd773-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.277840165s
Jan  5 12:22:06.347: INFO: Pod "pod-projected-configmaps-f7bdd773-2fb5-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.291692045s
Jan  5 12:22:08.361: INFO: Pod "pod-projected-configmaps-f7bdd773-2fb5-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.305277714s
STEP: Saw pod success
Jan  5 12:22:08.361: INFO: Pod "pod-projected-configmaps-f7bdd773-2fb5-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:22:08.383: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f7bdd773-2fb5-11ea-910c-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  5 12:22:09.046: INFO: Waiting for pod pod-projected-configmaps-f7bdd773-2fb5-11ea-910c-0242ac110004 to disappear
Jan  5 12:22:09.378: INFO: Pod pod-projected-configmaps-f7bdd773-2fb5-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:22:09.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wqvw7" for this suite.
Jan  5 12:22:15.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:22:15.936: INFO: namespace: e2e-tests-projected-wqvw7, resource: bindings, ignored listing per whitelist
Jan  5 12:22:16.010: INFO: namespace e2e-tests-projected-wqvw7 deletion completed in 6.615558215s

• [SLOW TEST:18.214 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:22:16.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  5 12:22:16.255: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 48.002141ms)
Jan  5 12:22:16.262: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.924011ms)
Jan  5 12:22:16.268: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.470905ms)
Jan  5 12:22:16.274: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.805533ms)
Jan  5 12:22:16.280: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.812166ms)
Jan  5 12:22:16.285: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.555177ms)
Jan  5 12:22:16.290: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.688769ms)
Jan  5 12:22:16.295: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.068086ms)
Jan  5 12:22:16.300: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.429811ms)
Jan  5 12:22:16.306: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.525218ms)
Jan  5 12:22:16.311: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.386341ms)
Jan  5 12:22:16.319: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 7.832641ms)
Jan  5 12:22:16.323: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.261967ms)
Jan  5 12:22:16.327: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.203311ms)
Jan  5 12:22:16.332: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.637018ms)
Jan  5 12:22:16.337: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.499858ms)
Jan  5 12:22:16.341: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.729107ms)
Jan  5 12:22:16.354: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 12.452413ms)
Jan  5 12:22:16.368: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 13.63211ms)
Jan  5 12:22:16.377: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 8.964144ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:22:16.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-29cjt" for this suite.
Jan  5 12:22:22.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:22:22.677: INFO: namespace: e2e-tests-proxy-29cjt, resource: bindings, ignored listing per whitelist
Jan  5 12:22:22.695: INFO: namespace e2e-tests-proxy-29cjt deletion completed in 6.312262959s

• [SLOW TEST:6.683 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
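The requests above go through the node's proxy subresource: the apiserver relays GET /api/v1/nodes/<node>/proxy/logs/ to the kubelet, which serves a listing of /var/log on that node. A hedged sketch of issuing the same request with a recent client-go (the node name is taken from the log; the kubeconfig path and client calls are assumptions, not the e2e framework's code):

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // kubeconfig path from the log
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node := "hunter-server-hu5at5svl7ps" // node name from the log; substitute your own

	// GET /api/v1/nodes/<node>/proxy/logs/ via the node proxy subresource.
	body, err := cs.CoreV1().RESTClient().
		Get().
		Resource("nodes").
		Name(node).
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // the kubelet's /var/log listing, e.g. alternatives.log, ...
}
```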
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:22:22.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-0689a9d4-2fb6-11ea-910c-0242ac110004
STEP: Creating a pod to test consume configMaps
Jan  5 12:22:22.927: INFO: Waiting up to 5m0s for pod "pod-configmaps-068af541-2fb6-11ea-910c-0242ac110004" in namespace "e2e-tests-configmap-mglg8" to be "success or failure"
Jan  5 12:22:22.955: INFO: Pod "pod-configmaps-068af541-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 27.957068ms
Jan  5 12:22:24.980: INFO: Pod "pod-configmaps-068af541-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053161035s
Jan  5 12:22:26.998: INFO: Pod "pod-configmaps-068af541-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071019701s
Jan  5 12:22:29.560: INFO: Pod "pod-configmaps-068af541-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.632378646s
Jan  5 12:22:31.572: INFO: Pod "pod-configmaps-068af541-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.644740167s
Jan  5 12:22:33.593: INFO: Pod "pod-configmaps-068af541-2fb6-11ea-910c-0242ac110004": Phase="Running", Reason="", readiness=true. Elapsed: 10.666023972s
Jan  5 12:22:35.618: INFO: Pod "pod-configmaps-068af541-2fb6-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.690930698s
STEP: Saw pod success
Jan  5 12:22:35.618: INFO: Pod "pod-configmaps-068af541-2fb6-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:22:35.626: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-068af541-2fb6-11ea-910c-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Jan  5 12:22:35.944: INFO: Waiting for pod pod-configmaps-068af541-2fb6-11ea-910c-0242ac110004 to disappear
Jan  5 12:22:35.964: INFO: Pod pod-configmaps-068af541-2fb6-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:22:35.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mglg8" for this suite.
Jan  5 12:22:42.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:22:42.314: INFO: namespace: e2e-tests-configmap-mglg8, resource: bindings, ignored listing per whitelist
Jan  5 12:22:42.401: INFO: namespace e2e-tests-configmap-mglg8 deletion completed in 6.42546205s

• [SLOW TEST:19.706 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:22:42.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-8q9w
STEP: Creating a pod to test atomic-volume-subpath
Jan  5 12:22:42.832: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8q9w" in namespace "e2e-tests-subpath-m2lwx" to be "success or failure"
Jan  5 12:22:42.923: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Pending", Reason="", readiness=false. Elapsed: 91.230494ms
Jan  5 12:22:44.940: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108861955s
Jan  5 12:22:47.042: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209938443s
Jan  5 12:22:49.535: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.702919264s
Jan  5 12:22:51.566: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.734181412s
Jan  5 12:22:53.577: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.745597165s
Jan  5 12:22:55.589: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Pending", Reason="", readiness=false. Elapsed: 12.757812803s
Jan  5 12:22:57.635: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Pending", Reason="", readiness=false. Elapsed: 14.803477354s
Jan  5 12:22:59.675: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Running", Reason="", readiness=false. Elapsed: 16.843123436s
Jan  5 12:23:01.703: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Running", Reason="", readiness=false. Elapsed: 18.870885662s
Jan  5 12:23:03.724: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Running", Reason="", readiness=false. Elapsed: 20.892035825s
Jan  5 12:23:05.740: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Running", Reason="", readiness=false. Elapsed: 22.90805657s
Jan  5 12:23:07.752: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Running", Reason="", readiness=false. Elapsed: 24.920842434s
Jan  5 12:23:09.776: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Running", Reason="", readiness=false. Elapsed: 26.944661744s
Jan  5 12:23:11.825: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Running", Reason="", readiness=false. Elapsed: 28.993794596s
Jan  5 12:23:13.868: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Running", Reason="", readiness=false. Elapsed: 31.036079069s
Jan  5 12:23:15.910: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Running", Reason="", readiness=false. Elapsed: 33.077864935s
Jan  5 12:23:17.929: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Running", Reason="", readiness=false. Elapsed: 35.097688305s
Jan  5 12:23:20.387: INFO: Pod "pod-subpath-test-projected-8q9w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.555618791s
STEP: Saw pod success
Jan  5 12:23:20.387: INFO: Pod "pod-subpath-test-projected-8q9w" satisfied condition "success or failure"
Jan  5 12:23:20.627: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-8q9w container test-container-subpath-projected-8q9w: 
STEP: delete the pod
Jan  5 12:23:20.929: INFO: Waiting for pod pod-subpath-test-projected-8q9w to disappear
Jan  5 12:23:20.959: INFO: Pod pod-subpath-test-projected-8q9w no longer exists
STEP: Deleting pod pod-subpath-test-projected-8q9w
Jan  5 12:23:20.959: INFO: Deleting pod "pod-subpath-test-projected-8q9w" in namespace "e2e-tests-subpath-m2lwx"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:23:20.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-m2lwx" for this suite.
Jan  5 12:23:27.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:23:27.179: INFO: namespace: e2e-tests-subpath-m2lwx, resource: bindings, ignored listing per whitelist
Jan  5 12:23:27.209: INFO: namespace e2e-tests-subpath-m2lwx deletion completed in 6.235965279s

• [SLOW TEST:44.807 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:23:27.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  5 12:23:27.431: INFO: Waiting up to 5m0s for pod "downward-api-2d04d605-2fb6-11ea-910c-0242ac110004" in namespace "e2e-tests-downward-api-2nwn2" to be "success or failure"
Jan  5 12:23:27.531: INFO: Pod "downward-api-2d04d605-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 100.266844ms
Jan  5 12:23:29.548: INFO: Pod "downward-api-2d04d605-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117420957s
Jan  5 12:23:31.567: INFO: Pod "downward-api-2d04d605-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136076658s
Jan  5 12:23:33.587: INFO: Pod "downward-api-2d04d605-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156014362s
Jan  5 12:23:35.617: INFO: Pod "downward-api-2d04d605-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186027223s
Jan  5 12:23:38.186: INFO: Pod "downward-api-2d04d605-2fb6-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.75463607s
STEP: Saw pod success
Jan  5 12:23:38.186: INFO: Pod "downward-api-2d04d605-2fb6-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:23:38.194: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-2d04d605-2fb6-11ea-910c-0242ac110004 container dapi-container: 
STEP: delete the pod
Jan  5 12:23:38.600: INFO: Waiting for pod downward-api-2d04d605-2fb6-11ea-910c-0242ac110004 to disappear
Jan  5 12:23:38.650: INFO: Pod downward-api-2d04d605-2fb6-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:23:38.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2nwn2" for this suite.
Jan  5 12:23:44.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:23:44.841: INFO: namespace: e2e-tests-downward-api-2nwn2, resource: bindings, ignored listing per whitelist
Jan  5 12:23:44.927: INFO: namespace e2e-tests-downward-api-2nwn2 deletion completed in 6.265093374s

• [SLOW TEST:17.717 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
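A minimal sketch (image and names are placeholders) of the downward API env wiring this test verifies: status.hostIP and status.podIP injected into the container as environment variables via fieldRef.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP POD_IP=$POD_IP"},
				Env: []corev1.EnvVar{
					{
						Name: "HOST_IP",
						ValueFrom: &corev1.EnvVarSource{
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
						},
					},
					{
						Name: "POD_IP",
						ValueFrom: &corev1.EnvVarSource{
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"},
						},
					},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```

The test then checks the container's log for an IP-shaped HOST_IP value, which is what "success or failure" of the pod above hinges on.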
SS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:23:44.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-f6f5t A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-f6f5t;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-f6f5t A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-f6f5t;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-f6f5t.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-f6f5t.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-f6f5t.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-f6f5t.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-f6f5t.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 229.125.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.125.229_udp@PTR;check="$$(dig +tcp +noall +answer +search 229.125.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.125.229_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-f6f5t A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-f6f5t;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-f6f5t A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-f6f5t;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-f6f5t.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-f6f5t.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-f6f5t.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-f6f5t.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-f6f5t.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 229.125.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.125.229_udp@PTR;check="$$(dig +tcp +noall +answer +search 229.125.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.125.229_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  5 12:24:02.023: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.035: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.043: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-f6f5t from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.052: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-f6f5t from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.059: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-f6f5t.svc from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.064: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-f6f5t.svc from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.070: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.075: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.080: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.084: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.089: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.094: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.098: INFO: Unable to read 10.99.125.229_udp@PTR from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.103: INFO: Unable to read 10.99.125.229_tcp@PTR from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.109: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.113: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.117: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-f6f5t from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.121: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-f6f5t from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.127: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-f6f5t.svc from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.134: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-f6f5t.svc from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.141: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.147: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.153: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.160: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.165: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.171: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.175: INFO: Unable to read 10.99.125.229_udp@PTR from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.179: INFO: Unable to read 10.99.125.229_tcp@PTR from pod e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004: the server could not find the requested resource (get pods dns-test-37d5923e-2fb6-11ea-910c-0242ac110004)
Jan  5 12:24:02.179: INFO: Lookups using e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-f6f5t wheezy_tcp@dns-test-service.e2e-tests-dns-f6f5t wheezy_udp@dns-test-service.e2e-tests-dns-f6f5t.svc wheezy_tcp@dns-test-service.e2e-tests-dns-f6f5t.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.99.125.229_udp@PTR 10.99.125.229_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-f6f5t jessie_tcp@dns-test-service.e2e-tests-dns-f6f5t jessie_udp@dns-test-service.e2e-tests-dns-f6f5t.svc jessie_tcp@dns-test-service.e2e-tests-dns-f6f5t.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-f6f5t.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-f6f5t.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.99.125.229_udp@PTR 10.99.125.229_tcp@PTR]

Jan  5 12:24:07.462: INFO: DNS probes using e2e-tests-dns-f6f5t/dns-test-37d5923e-2fb6-11ea-910c-0242ac110004 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:24:09.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-f6f5t" for this suite.
Jan  5 12:24:16.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:24:16.309: INFO: namespace: e2e-tests-dns-f6f5t, resource: bindings, ignored listing per whitelist
Jan  5 12:24:16.394: INFO: namespace e2e-tests-dns-f6f5t deletion completed in 6.3141031s

• [SLOW TEST:31.466 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:24:16.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-4dwfh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-4dwfh to expose endpoints map[]
Jan  5 12:24:16.775: INFO: Get endpoints failed (24.15125ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan  5 12:24:17.791: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-4dwfh exposes endpoints map[] (1.04033621s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-4dwfh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-4dwfh to expose endpoints map[pod1:[80]]
Jan  5 12:24:22.082: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.256913273s elapsed, will retry)
Jan  5 12:24:26.167: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-4dwfh exposes endpoints map[pod1:[80]] (8.341689696s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-4dwfh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-4dwfh to expose endpoints map[pod1:[80] pod2:[80]]
Jan  5 12:24:31.011: INFO: Unexpected endpoints: found map[4b0e64e5-2fb6-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.835423206s elapsed, will retry)
Jan  5 12:24:36.698: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-4dwfh exposes endpoints map[pod1:[80] pod2:[80]] (10.522379839s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-4dwfh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-4dwfh to expose endpoints map[pod2:[80]]
Jan  5 12:24:37.841: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-4dwfh exposes endpoints map[pod2:[80]] (1.110356519s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-4dwfh
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-4dwfh to expose endpoints map[]
Jan  5 12:24:38.930: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-4dwfh exposes endpoints map[] (1.058524641s elapsed)
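
The maps being validated above are just the service's Endpoints object keyed by backing pod. A rough way to reproduce the same check by hand, using the names from this log:

    # List the pod IPs and ports currently backing the service.
    kubectl get endpoints endpoint-test2 -n e2e-tests-services-4dwfh -o wide
    # Print the names of the pods behind each address (empty once pod2 is deleted).
    kubectl get endpoints endpoint-test2 -n e2e-tests-services-4dwfh \
      -o jsonpath='{.subsets[*].addresses[*].targetRef.name}'
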
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:24:39.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-4dwfh" for this suite.
Jan  5 12:25:03.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:25:03.246: INFO: namespace: e2e-tests-services-4dwfh, resource: bindings, ignored listing per whitelist
Jan  5 12:25:03.315: INFO: namespace e2e-tests-services-4dwfh deletion completed in 24.19982055s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:46.921 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:25:03.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  5 12:25:03.535: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66404e36-2fb6-11ea-910c-0242ac110004" in namespace "e2e-tests-downward-api-znss9" to be "success or failure"
Jan  5 12:25:03.799: INFO: Pod "downwardapi-volume-66404e36-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 264.334562ms
Jan  5 12:25:05.831: INFO: Pod "downwardapi-volume-66404e36-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296500903s
Jan  5 12:25:07.865: INFO: Pod "downwardapi-volume-66404e36-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330374668s
Jan  5 12:25:10.331: INFO: Pod "downwardapi-volume-66404e36-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.795939288s
Jan  5 12:25:12.361: INFO: Pod "downwardapi-volume-66404e36-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.825939012s
Jan  5 12:25:14.374: INFO: Pod "downwardapi-volume-66404e36-2fb6-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.838899373s
STEP: Saw pod success
Jan  5 12:25:14.374: INFO: Pod "downwardapi-volume-66404e36-2fb6-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:25:14.385: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-66404e36-2fb6-11ea-910c-0242ac110004 container client-container: 
STEP: delete the pod
Jan  5 12:25:14.643: INFO: Waiting for pod downwardapi-volume-66404e36-2fb6-11ea-910c-0242ac110004 to disappear
Jan  5 12:25:14.658: INFO: Pod downwardapi-volume-66404e36-2fb6-11ea-910c-0242ac110004 no longer exists
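
The pod under test mounts a downwardAPI volume whose file is filled from the container's memory request via a resourceFieldRef. A hand-written equivalent is sketched below; the image, request size and file path are illustrative assumptions, not values taken from this log:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-memory-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        # Print the projected memory request, then exit so the pod reaches Succeeded.
        command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
        resources:
          requests:
            memory: "32Mi"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi
    EOF
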
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:25:14.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-znss9" for this suite.
Jan  5 12:25:20.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:25:21.255: INFO: namespace: e2e-tests-downward-api-znss9, resource: bindings, ignored listing per whitelist
Jan  5 12:25:21.283: INFO: namespace e2e-tests-downward-api-znss9 deletion completed in 6.615512624s

• [SLOW TEST:17.968 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:25:21.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  5 12:25:21.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-k6n28'
Jan  5 12:25:23.858: INFO: stderr: ""
Jan  5 12:25:23.859: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
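
The verification step that follows only needs the pod to exist with the expected restart policy; a quick manual equivalent, using the namespace from this log:

    # Confirm the pod was created and that its restartPolicy is Never.
    kubectl get pod e2e-test-nginx-pod -n e2e-tests-kubectl-k6n28 \
      -o jsonpath='{.spec.restartPolicy}'
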
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan  5 12:25:23.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-k6n28'
Jan  5 12:25:32.785: INFO: stderr: ""
Jan  5 12:25:32.785: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:25:32.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k6n28" for this suite.
Jan  5 12:25:38.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:25:39.006: INFO: namespace: e2e-tests-kubectl-k6n28, resource: bindings, ignored listing per whitelist
Jan  5 12:25:39.057: INFO: namespace e2e-tests-kubectl-k6n28 deletion completed in 6.165891977s

• [SLOW TEST:17.774 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:25:39.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  5 12:25:39.194: INFO: >>> kubeConfig: /root/.kube/config
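
The test exercises this through the API client, but the same create/delete cycle can be sketched with kubectl against an apiextensions.k8s.io/v1beta1 definition (the group and kind below are illustrative, not the randomized names the test generates):

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com
    spec:
      group: example.com
      version: v1
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
    EOF
    # Deleting the definition removes the custom resource type again.
    kubectl delete crd foos.example.com
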
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:25:40.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-rv946" for this suite.
Jan  5 12:25:46.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:25:47.035: INFO: namespace: e2e-tests-custom-resource-definition-rv946, resource: bindings, ignored listing per whitelist
Jan  5 12:25:47.107: INFO: namespace e2e-tests-custom-resource-definition-rv946 deletion completed in 6.468958776s

• [SLOW TEST:8.049 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:25:47.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  5 12:25:47.298: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan  5 12:25:47.414: INFO: Number of nodes with available pods: 0
Jan  5 12:25:47.415: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:25:48.982: INFO: Number of nodes with available pods: 0
Jan  5 12:25:48.982: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:25:49.444: INFO: Number of nodes with available pods: 0
Jan  5 12:25:49.444: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:25:50.447: INFO: Number of nodes with available pods: 0
Jan  5 12:25:50.447: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:25:51.460: INFO: Number of nodes with available pods: 0
Jan  5 12:25:51.461: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:25:52.451: INFO: Number of nodes with available pods: 0
Jan  5 12:25:52.452: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:25:54.800: INFO: Number of nodes with available pods: 0
Jan  5 12:25:54.800: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:25:55.438: INFO: Number of nodes with available pods: 0
Jan  5 12:25:55.438: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:25:56.475: INFO: Number of nodes with available pods: 0
Jan  5 12:25:56.475: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:25:57.441: INFO: Number of nodes with available pods: 1
Jan  5 12:25:57.441: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan  5 12:25:57.510: INFO: Wrong image for pod: daemon-set-rk9qs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 12:25:58.556: INFO: Wrong image for pod: daemon-set-rk9qs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 12:25:59.537: INFO: Wrong image for pod: daemon-set-rk9qs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 12:26:00.581: INFO: Wrong image for pod: daemon-set-rk9qs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 12:26:02.223: INFO: Wrong image for pod: daemon-set-rk9qs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 12:26:02.539: INFO: Wrong image for pod: daemon-set-rk9qs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 12:26:03.537: INFO: Wrong image for pod: daemon-set-rk9qs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 12:26:04.574: INFO: Wrong image for pod: daemon-set-rk9qs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 12:26:04.575: INFO: Pod daemon-set-rk9qs is not available
Jan  5 12:26:05.531: INFO: Pod daemon-set-6wrsz is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan  5 12:26:05.543: INFO: Number of nodes with available pods: 0
Jan  5 12:26:05.543: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:26:06.585: INFO: Number of nodes with available pods: 0
Jan  5 12:26:06.585: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:26:07.580: INFO: Number of nodes with available pods: 0
Jan  5 12:26:07.580: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:26:08.581: INFO: Number of nodes with available pods: 0
Jan  5 12:26:08.581: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:26:10.181: INFO: Number of nodes with available pods: 0
Jan  5 12:26:10.181: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:26:11.072: INFO: Number of nodes with available pods: 0
Jan  5 12:26:11.072: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:26:11.651: INFO: Number of nodes with available pods: 0
Jan  5 12:26:11.651: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:26:12.611: INFO: Number of nodes with available pods: 0
Jan  5 12:26:12.611: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan  5 12:26:13.565: INFO: Number of nodes with available pods: 1
Jan  5 12:26:13.565: INFO: Number of running nodes: 1, number of available pods: 1
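
Expressed as plain kubectl, the update above swaps the container image on the DaemonSet and lets the RollingUpdate strategy replace the pod, then waits for the new pod to become available. A sketch with the names from this log; the container name "app" is an assumption, not something the log records:

    # Swap the image; "app" is an assumed container name.
    kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0 \
      -n e2e-tests-daemonsets-7jzvj
    # Watch the RollingUpdate delete the old pod and bring the replacement to Ready.
    kubectl rollout status daemonset/daemon-set -n e2e-tests-daemonsets-7jzvj
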
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-7jzvj, will wait for the garbage collector to delete the pods
Jan  5 12:26:13.902: INFO: Deleting DaemonSet.extensions daemon-set took: 108.226633ms
Jan  5 12:26:14.103: INFO: Terminating DaemonSet.extensions daemon-set pods took: 201.111027ms
Jan  5 12:26:21.017: INFO: Number of nodes with available pods: 0
Jan  5 12:26:21.017: INFO: Number of running nodes: 0, number of available pods: 0
Jan  5 12:26:21.022: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-7jzvj/daemonsets","resourceVersion":"17254052"},"items":null}

Jan  5 12:26:21.026: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-7jzvj/pods","resourceVersion":"17254052"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:26:21.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-7jzvj" for this suite.
Jan  5 12:26:27.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:26:27.227: INFO: namespace: e2e-tests-daemonsets-7jzvj, resource: bindings, ignored listing per whitelist
Jan  5 12:26:27.233: INFO: namespace e2e-tests-daemonsets-7jzvj deletion completed in 6.189733402s

• [SLOW TEST:40.126 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:26:27.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-dt96s
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-dt96s
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-dt96s
Jan  5 12:26:27.524: INFO: Found 0 stateful pods, waiting for 1
Jan  5 12:26:37.539: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan  5 12:26:37.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt96s ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 12:26:38.192: INFO: stderr: "I0105 12:26:37.764130    2587 log.go:172] (0xc0006dc2c0) (0xc00071c640) Create stream\nI0105 12:26:37.764553    2587 log.go:172] (0xc0006dc2c0) (0xc00071c640) Stream added, broadcasting: 1\nI0105 12:26:37.769140    2587 log.go:172] (0xc0006dc2c0) Reply frame received for 1\nI0105 12:26:37.769186    2587 log.go:172] (0xc0006dc2c0) (0xc0007a6d20) Create stream\nI0105 12:26:37.769196    2587 log.go:172] (0xc0006dc2c0) (0xc0007a6d20) Stream added, broadcasting: 3\nI0105 12:26:37.771546    2587 log.go:172] (0xc0006dc2c0) Reply frame received for 3\nI0105 12:26:37.771587    2587 log.go:172] (0xc0006dc2c0) (0xc00058e000) Create stream\nI0105 12:26:37.771601    2587 log.go:172] (0xc0006dc2c0) (0xc00058e000) Stream added, broadcasting: 5\nI0105 12:26:37.773349    2587 log.go:172] (0xc0006dc2c0) Reply frame received for 5\nI0105 12:26:38.040805    2587 log.go:172] (0xc0006dc2c0) Data frame received for 3\nI0105 12:26:38.040852    2587 log.go:172] (0xc0007a6d20) (3) Data frame handling\nI0105 12:26:38.040886    2587 log.go:172] (0xc0007a6d20) (3) Data frame sent\nI0105 12:26:38.185310    2587 log.go:172] (0xc0006dc2c0) Data frame received for 1\nI0105 12:26:38.185403    2587 log.go:172] (0xc0006dc2c0) (0xc0007a6d20) Stream removed, broadcasting: 3\nI0105 12:26:38.185454    2587 log.go:172] (0xc00071c640) (1) Data frame handling\nI0105 12:26:38.185520    2587 log.go:172] (0xc0006dc2c0) (0xc00058e000) Stream removed, broadcasting: 5\nI0105 12:26:38.185545    2587 log.go:172] (0xc00071c640) (1) Data frame sent\nI0105 12:26:38.185557    2587 log.go:172] (0xc0006dc2c0) (0xc00071c640) Stream removed, broadcasting: 1\nI0105 12:26:38.185601    2587 log.go:172] (0xc0006dc2c0) Go away received\nI0105 12:26:38.185847    2587 log.go:172] (0xc0006dc2c0) (0xc00071c640) Stream removed, broadcasting: 1\nI0105 12:26:38.185859    2587 log.go:172] (0xc0006dc2c0) (0xc0007a6d20) Stream removed, broadcasting: 3\nI0105 12:26:38.185871    2587 log.go:172] (0xc0006dc2c0) (0xc00058e000) Stream removed, broadcasting: 5\n"
Jan  5 12:26:38.193: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 12:26:38.193: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
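
Moving index.html out of the nginx web root makes the pod's readiness probe fail, so ss-0 drops to Ready=false without being restarted; the StatefulSet controller then refuses to create the next ordinal, which is the halt being verified below. The gating condition can be watched directly:

    # Show the Ready condition that the scale-up is blocked on.
    kubectl get pod ss-0 -n e2e-tests-statefulset-dt96s \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'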

Jan  5 12:26:38.208: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  5 12:26:48.247: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 12:26:48.247: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 12:26:48.331: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998537s
Jan  5 12:26:49.353: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.973165776s
Jan  5 12:26:50.379: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.95071163s
Jan  5 12:26:51.434: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.924689023s
Jan  5 12:26:52.457: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.869713346s
Jan  5 12:26:53.484: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.846650697s
Jan  5 12:26:54.507: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.819430823s
Jan  5 12:26:55.520: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.796291074s
Jan  5 12:26:56.550: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.783025077s
Jan  5 12:26:57.570: INFO: Verifying statefulset ss doesn't scale past 1 for another 753.605164ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-dt96s
Jan  5 12:26:58.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt96s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:26:59.130: INFO: stderr: "I0105 12:26:58.796988    2608 log.go:172] (0xc00013a840) (0xc00061d4a0) Create stream\nI0105 12:26:58.797164    2608 log.go:172] (0xc00013a840) (0xc00061d4a0) Stream added, broadcasting: 1\nI0105 12:26:58.802363    2608 log.go:172] (0xc00013a840) Reply frame received for 1\nI0105 12:26:58.802398    2608 log.go:172] (0xc00013a840) (0xc00061d540) Create stream\nI0105 12:26:58.802410    2608 log.go:172] (0xc00013a840) (0xc00061d540) Stream added, broadcasting: 3\nI0105 12:26:58.804064    2608 log.go:172] (0xc00013a840) Reply frame received for 3\nI0105 12:26:58.804106    2608 log.go:172] (0xc00013a840) (0xc0006f8000) Create stream\nI0105 12:26:58.804121    2608 log.go:172] (0xc00013a840) (0xc0006f8000) Stream added, broadcasting: 5\nI0105 12:26:58.805141    2608 log.go:172] (0xc00013a840) Reply frame received for 5\nI0105 12:26:58.988212    2608 log.go:172] (0xc00013a840) Data frame received for 3\nI0105 12:26:58.988291    2608 log.go:172] (0xc00061d540) (3) Data frame handling\nI0105 12:26:58.988326    2608 log.go:172] (0xc00061d540) (3) Data frame sent\nI0105 12:26:59.123326    2608 log.go:172] (0xc00013a840) Data frame received for 1\nI0105 12:26:59.123463    2608 log.go:172] (0xc00013a840) (0xc00061d540) Stream removed, broadcasting: 3\nI0105 12:26:59.123524    2608 log.go:172] (0xc00061d4a0) (1) Data frame handling\nI0105 12:26:59.123551    2608 log.go:172] (0xc00061d4a0) (1) Data frame sent\nI0105 12:26:59.123568    2608 log.go:172] (0xc00013a840) (0xc0006f8000) Stream removed, broadcasting: 5\nI0105 12:26:59.123601    2608 log.go:172] (0xc00013a840) (0xc00061d4a0) Stream removed, broadcasting: 1\nI0105 12:26:59.123622    2608 log.go:172] (0xc00013a840) Go away received\nI0105 12:26:59.123793    2608 log.go:172] (0xc00013a840) (0xc00061d4a0) Stream removed, broadcasting: 1\nI0105 12:26:59.123806    2608 log.go:172] (0xc00013a840) (0xc00061d540) Stream removed, broadcasting: 3\nI0105 12:26:59.123810    2608 log.go:172] (0xc00013a840) (0xc0006f8000) Stream removed, broadcasting: 5\n"
Jan  5 12:26:59.130: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 12:26:59.130: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 12:26:59.149: INFO: Found 1 stateful pods, waiting for 3
Jan  5 12:27:09.162: INFO: Found 2 stateful pods, waiting for 3
Jan  5 12:27:19.158: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 12:27:19.158: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 12:27:19.158: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  5 12:27:29.172: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 12:27:29.172: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 12:27:29.172: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan  5 12:27:29.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt96s ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 12:27:29.910: INFO: stderr: "I0105 12:27:29.435667    2631 log.go:172] (0xc000720370) (0xc000746640) Create stream\nI0105 12:27:29.436007    2631 log.go:172] (0xc000720370) (0xc000746640) Stream added, broadcasting: 1\nI0105 12:27:29.445852    2631 log.go:172] (0xc000720370) Reply frame received for 1\nI0105 12:27:29.445983    2631 log.go:172] (0xc000720370) (0xc0007466e0) Create stream\nI0105 12:27:29.446012    2631 log.go:172] (0xc000720370) (0xc0007466e0) Stream added, broadcasting: 3\nI0105 12:27:29.447832    2631 log.go:172] (0xc000720370) Reply frame received for 3\nI0105 12:27:29.447865    2631 log.go:172] (0xc000720370) (0xc0000ecd20) Create stream\nI0105 12:27:29.447878    2631 log.go:172] (0xc000720370) (0xc0000ecd20) Stream added, broadcasting: 5\nI0105 12:27:29.449244    2631 log.go:172] (0xc000720370) Reply frame received for 5\nI0105 12:27:29.606418    2631 log.go:172] (0xc000720370) Data frame received for 3\nI0105 12:27:29.606487    2631 log.go:172] (0xc0007466e0) (3) Data frame handling\nI0105 12:27:29.606504    2631 log.go:172] (0xc0007466e0) (3) Data frame sent\nI0105 12:27:29.899606    2631 log.go:172] (0xc000720370) (0xc0007466e0) Stream removed, broadcasting: 3\nI0105 12:27:29.900050    2631 log.go:172] (0xc000720370) Data frame received for 1\nI0105 12:27:29.900118    2631 log.go:172] (0xc000720370) (0xc0000ecd20) Stream removed, broadcasting: 5\nI0105 12:27:29.900148    2631 log.go:172] (0xc000746640) (1) Data frame handling\nI0105 12:27:29.900171    2631 log.go:172] (0xc000746640) (1) Data frame sent\nI0105 12:27:29.900231    2631 log.go:172] (0xc000720370) (0xc000746640) Stream removed, broadcasting: 1\nI0105 12:27:29.900282    2631 log.go:172] (0xc000720370) Go away received\nI0105 12:27:29.900751    2631 log.go:172] (0xc000720370) (0xc000746640) Stream removed, broadcasting: 1\nI0105 12:27:29.900830    2631 log.go:172] (0xc000720370) (0xc0007466e0) Stream removed, broadcasting: 3\nI0105 12:27:29.900874    2631 log.go:172] (0xc000720370) (0xc0000ecd20) Stream removed, broadcasting: 5\n"
Jan  5 12:27:29.910: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 12:27:29.910: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 12:27:29.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt96s ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 12:27:30.565: INFO: stderr: "I0105 12:27:30.206818    2654 log.go:172] (0xc0006e00b0) (0xc0007025a0) Create stream\nI0105 12:27:30.207135    2654 log.go:172] (0xc0006e00b0) (0xc0007025a0) Stream added, broadcasting: 1\nI0105 12:27:30.211467    2654 log.go:172] (0xc0006e00b0) Reply frame received for 1\nI0105 12:27:30.211492    2654 log.go:172] (0xc0006e00b0) (0xc0007a6dc0) Create stream\nI0105 12:27:30.211499    2654 log.go:172] (0xc0006e00b0) (0xc0007a6dc0) Stream added, broadcasting: 3\nI0105 12:27:30.212398    2654 log.go:172] (0xc0006e00b0) Reply frame received for 3\nI0105 12:27:30.212436    2654 log.go:172] (0xc0006e00b0) (0xc000552000) Create stream\nI0105 12:27:30.212449    2654 log.go:172] (0xc0006e00b0) (0xc000552000) Stream added, broadcasting: 5\nI0105 12:27:30.213348    2654 log.go:172] (0xc0006e00b0) Reply frame received for 5\nI0105 12:27:30.346123    2654 log.go:172] (0xc0006e00b0) Data frame received for 3\nI0105 12:27:30.346175    2654 log.go:172] (0xc0007a6dc0) (3) Data frame handling\nI0105 12:27:30.346193    2654 log.go:172] (0xc0007a6dc0) (3) Data frame sent\nI0105 12:27:30.549193    2654 log.go:172] (0xc0006e00b0) (0xc0007a6dc0) Stream removed, broadcasting: 3\nI0105 12:27:30.549402    2654 log.go:172] (0xc0006e00b0) Data frame received for 1\nI0105 12:27:30.549654    2654 log.go:172] (0xc0006e00b0) (0xc000552000) Stream removed, broadcasting: 5\nI0105 12:27:30.549809    2654 log.go:172] (0xc0007025a0) (1) Data frame handling\nI0105 12:27:30.549836    2654 log.go:172] (0xc0007025a0) (1) Data frame sent\nI0105 12:27:30.549866    2654 log.go:172] (0xc0006e00b0) (0xc0007025a0) Stream removed, broadcasting: 1\nI0105 12:27:30.549882    2654 log.go:172] (0xc0006e00b0) Go away received\nI0105 12:27:30.551575    2654 log.go:172] (0xc0006e00b0) (0xc0007025a0) Stream removed, broadcasting: 1\nI0105 12:27:30.551680    2654 log.go:172] (0xc0006e00b0) (0xc0007a6dc0) Stream removed, broadcasting: 3\nI0105 12:27:30.551696    2654 log.go:172] (0xc0006e00b0) (0xc000552000) Stream removed, broadcasting: 5\n"
Jan  5 12:27:30.566: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 12:27:30.566: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 12:27:30.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt96s ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 12:27:31.087: INFO: stderr: "I0105 12:27:30.716803    2676 log.go:172] (0xc0001440b0) (0xc0002d0e60) Create stream\nI0105 12:27:30.717527    2676 log.go:172] (0xc0001440b0) (0xc0002d0e60) Stream added, broadcasting: 1\nI0105 12:27:30.723003    2676 log.go:172] (0xc0001440b0) Reply frame received for 1\nI0105 12:27:30.723033    2676 log.go:172] (0xc0001440b0) (0xc0002d0f00) Create stream\nI0105 12:27:30.723040    2676 log.go:172] (0xc0001440b0) (0xc0002d0f00) Stream added, broadcasting: 3\nI0105 12:27:30.724515    2676 log.go:172] (0xc0001440b0) Reply frame received for 3\nI0105 12:27:30.724543    2676 log.go:172] (0xc0001440b0) (0xc000596000) Create stream\nI0105 12:27:30.724557    2676 log.go:172] (0xc0001440b0) (0xc000596000) Stream added, broadcasting: 5\nI0105 12:27:30.725535    2676 log.go:172] (0xc0001440b0) Reply frame received for 5\nI0105 12:27:30.911181    2676 log.go:172] (0xc0001440b0) Data frame received for 3\nI0105 12:27:30.911548    2676 log.go:172] (0xc0002d0f00) (3) Data frame handling\nI0105 12:27:30.911618    2676 log.go:172] (0xc0002d0f00) (3) Data frame sent\nI0105 12:27:31.081688    2676 log.go:172] (0xc0001440b0) (0xc0002d0f00) Stream removed, broadcasting: 3\nI0105 12:27:31.081788    2676 log.go:172] (0xc0001440b0) Data frame received for 1\nI0105 12:27:31.081808    2676 log.go:172] (0xc0002d0e60) (1) Data frame handling\nI0105 12:27:31.081819    2676 log.go:172] (0xc0002d0e60) (1) Data frame sent\nI0105 12:27:31.081833    2676 log.go:172] (0xc0001440b0) (0xc0002d0e60) Stream removed, broadcasting: 1\nI0105 12:27:31.081869    2676 log.go:172] (0xc0001440b0) (0xc000596000) Stream removed, broadcasting: 5\nI0105 12:27:31.081997    2676 log.go:172] (0xc0001440b0) Go away received\nI0105 12:27:31.082074    2676 log.go:172] (0xc0001440b0) (0xc0002d0e60) Stream removed, broadcasting: 1\nI0105 12:27:31.082094    2676 log.go:172] (0xc0001440b0) (0xc0002d0f00) Stream removed, broadcasting: 3\nI0105 12:27:31.082101    2676 log.go:172] (0xc0001440b0) (0xc000596000) Stream removed, broadcasting: 5\n"
Jan  5 12:27:31.087: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 12:27:31.088: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 12:27:31.088: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 12:27:31.116: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan  5 12:27:41.148: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 12:27:41.148: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 12:27:41.148: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 12:27:41.246: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999996655s
Jan  5 12:27:42.306: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.969569837s
Jan  5 12:27:43.335: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.909094676s
Jan  5 12:27:44.353: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.880116805s
Jan  5 12:27:45.462: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.862137876s
Jan  5 12:27:46.489: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.752934591s
Jan  5 12:27:47.503: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.725917314s
Jan  5 12:27:49.074: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.712526418s
Jan  5 12:27:50.101: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.141672458s
Jan  5 12:27:51.127: INFO: Verifying statefulset ss doesn't scale past 3 for another 114.593551ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-dt96s
Jan  5 12:27:52.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt96s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:27:53.036: INFO: stderr: "I0105 12:27:52.395175    2696 log.go:172] (0xc00074c4d0) (0xc0005df680) Create stream\nI0105 12:27:52.395363    2696 log.go:172] (0xc00074c4d0) (0xc0005df680) Stream added, broadcasting: 1\nI0105 12:27:52.400324    2696 log.go:172] (0xc00074c4d0) Reply frame received for 1\nI0105 12:27:52.400345    2696 log.go:172] (0xc00074c4d0) (0xc0005df720) Create stream\nI0105 12:27:52.400352    2696 log.go:172] (0xc00074c4d0) (0xc0005df720) Stream added, broadcasting: 3\nI0105 12:27:52.401430    2696 log.go:172] (0xc00074c4d0) Reply frame received for 3\nI0105 12:27:52.401470    2696 log.go:172] (0xc00074c4d0) (0xc0008da000) Create stream\nI0105 12:27:52.401480    2696 log.go:172] (0xc00074c4d0) (0xc0008da000) Stream added, broadcasting: 5\nI0105 12:27:52.402731    2696 log.go:172] (0xc00074c4d0) Reply frame received for 5\nI0105 12:27:52.697036    2696 log.go:172] (0xc00074c4d0) Data frame received for 3\nI0105 12:27:52.697194    2696 log.go:172] (0xc0005df720) (3) Data frame handling\nI0105 12:27:52.697292    2696 log.go:172] (0xc0005df720) (3) Data frame sent\nI0105 12:27:53.026924    2696 log.go:172] (0xc00074c4d0) Data frame received for 1\nI0105 12:27:53.027038    2696 log.go:172] (0xc00074c4d0) (0xc0005df720) Stream removed, broadcasting: 3\nI0105 12:27:53.027137    2696 log.go:172] (0xc0005df680) (1) Data frame handling\nI0105 12:27:53.027163    2696 log.go:172] (0xc0005df680) (1) Data frame sent\nI0105 12:27:53.027178    2696 log.go:172] (0xc00074c4d0) (0xc0005df680) Stream removed, broadcasting: 1\nI0105 12:27:53.027586    2696 log.go:172] (0xc00074c4d0) (0xc0008da000) Stream removed, broadcasting: 5\nI0105 12:27:53.028057    2696 log.go:172] (0xc00074c4d0) Go away received\nI0105 12:27:53.028585    2696 log.go:172] (0xc00074c4d0) (0xc0005df680) Stream removed, broadcasting: 1\nI0105 12:27:53.028622    2696 log.go:172] (0xc00074c4d0) (0xc0005df720) Stream removed, broadcasting: 3\nI0105 12:27:53.028649    2696 log.go:172] (0xc00074c4d0) (0xc0008da000) Stream removed, broadcasting: 5\n"
Jan  5 12:27:53.036: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 12:27:53.036: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 12:27:53.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt96s ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:27:53.534: INFO: stderr: "I0105 12:27:53.214630    2718 log.go:172] (0xc0006d2370) (0xc00071a640) Create stream\nI0105 12:27:53.214837    2718 log.go:172] (0xc0006d2370) (0xc00071a640) Stream added, broadcasting: 1\nI0105 12:27:53.220008    2718 log.go:172] (0xc0006d2370) Reply frame received for 1\nI0105 12:27:53.220049    2718 log.go:172] (0xc0006d2370) (0xc00049ec80) Create stream\nI0105 12:27:53.220058    2718 log.go:172] (0xc0006d2370) (0xc00049ec80) Stream added, broadcasting: 3\nI0105 12:27:53.223591    2718 log.go:172] (0xc0006d2370) Reply frame received for 3\nI0105 12:27:53.223636    2718 log.go:172] (0xc0006d2370) (0xc000554000) Create stream\nI0105 12:27:53.223647    2718 log.go:172] (0xc0006d2370) (0xc000554000) Stream added, broadcasting: 5\nI0105 12:27:53.224623    2718 log.go:172] (0xc0006d2370) Reply frame received for 5\nI0105 12:27:53.326285    2718 log.go:172] (0xc0006d2370) Data frame received for 3\nI0105 12:27:53.326748    2718 log.go:172] (0xc00049ec80) (3) Data frame handling\nI0105 12:27:53.326779    2718 log.go:172] (0xc00049ec80) (3) Data frame sent\nI0105 12:27:53.522913    2718 log.go:172] (0xc0006d2370) (0xc00049ec80) Stream removed, broadcasting: 3\nI0105 12:27:53.523111    2718 log.go:172] (0xc0006d2370) (0xc000554000) Stream removed, broadcasting: 5\nI0105 12:27:53.523167    2718 log.go:172] (0xc0006d2370) Data frame received for 1\nI0105 12:27:53.523187    2718 log.go:172] (0xc00071a640) (1) Data frame handling\nI0105 12:27:53.523208    2718 log.go:172] (0xc00071a640) (1) Data frame sent\nI0105 12:27:53.523223    2718 log.go:172] (0xc0006d2370) (0xc00071a640) Stream removed, broadcasting: 1\nI0105 12:27:53.523529    2718 log.go:172] (0xc0006d2370) (0xc00071a640) Stream removed, broadcasting: 1\nI0105 12:27:53.523544    2718 log.go:172] (0xc0006d2370) (0xc00049ec80) Stream removed, broadcasting: 3\nI0105 12:27:53.523558    2718 log.go:172] (0xc0006d2370) (0xc000554000) Stream removed, broadcasting: 5\n"
Jan  5 12:27:53.535: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 12:27:53.535: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 12:27:53.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dt96s ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:27:54.247: INFO: stderr: "I0105 12:27:53.750720    2740 log.go:172] (0xc0005d0370) (0xc0001a1540) Create stream\nI0105 12:27:53.751086    2740 log.go:172] (0xc0005d0370) (0xc0001a1540) Stream added, broadcasting: 1\nI0105 12:27:53.758782    2740 log.go:172] (0xc0005d0370) Reply frame received for 1\nI0105 12:27:53.758830    2740 log.go:172] (0xc0005d0370) (0xc000322000) Create stream\nI0105 12:27:53.758840    2740 log.go:172] (0xc0005d0370) (0xc000322000) Stream added, broadcasting: 3\nI0105 12:27:53.759728    2740 log.go:172] (0xc0005d0370) Reply frame received for 3\nI0105 12:27:53.759806    2740 log.go:172] (0xc0005d0370) (0xc0006ba000) Create stream\nI0105 12:27:53.759839    2740 log.go:172] (0xc0005d0370) (0xc0006ba000) Stream added, broadcasting: 5\nI0105 12:27:53.761842    2740 log.go:172] (0xc0005d0370) Reply frame received for 5\nI0105 12:27:53.929861    2740 log.go:172] (0xc0005d0370) Data frame received for 3\nI0105 12:27:53.929977    2740 log.go:172] (0xc000322000) (3) Data frame handling\nI0105 12:27:53.929991    2740 log.go:172] (0xc000322000) (3) Data frame sent\nI0105 12:27:54.236403    2740 log.go:172] (0xc0005d0370) Data frame received for 1\nI0105 12:27:54.236987    2740 log.go:172] (0xc0005d0370) (0xc000322000) Stream removed, broadcasting: 3\nI0105 12:27:54.237071    2740 log.go:172] (0xc0001a1540) (1) Data frame handling\nI0105 12:27:54.237105    2740 log.go:172] (0xc0001a1540) (1) Data frame sent\nI0105 12:27:54.237149    2740 log.go:172] (0xc0005d0370) (0xc0001a1540) Stream removed, broadcasting: 1\nI0105 12:27:54.239595    2740 log.go:172] (0xc0005d0370) (0xc0006ba000) Stream removed, broadcasting: 5\nI0105 12:27:54.239652    2740 log.go:172] (0xc0005d0370) (0xc0001a1540) Stream removed, broadcasting: 1\nI0105 12:27:54.239663    2740 log.go:172] (0xc0005d0370) (0xc000322000) Stream removed, broadcasting: 3\nI0105 12:27:54.239671    2740 log.go:172] (0xc0005d0370) (0xc0006ba000) Stream removed, broadcasting: 5\n"
Jan  5 12:27:54.248: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 12:27:54.248: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 12:27:54.248: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
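
Scaling to zero tears the pods down from the highest ordinal first (ss-2, then ss-1, then ss-0), which is what the reverse-order check confirms. A manual sketch using the selector this test set up:

    # Scale the set to zero replicas.
    kubectl scale statefulset ss --replicas=0 -n e2e-tests-statefulset-dt96s
    # Watch the pods terminate in reverse ordinal order.
    kubectl get pods -l baz=blah,foo=bar -n e2e-tests-statefulset-dt96s -w
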
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  5 12:28:24.401: INFO: Deleting all statefulset in ns e2e-tests-statefulset-dt96s
Jan  5 12:28:24.464: INFO: Scaling statefulset ss to 0
Jan  5 12:28:24.491: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 12:28:24.497: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:28:24.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-dt96s" for this suite.
Jan  5 12:28:32.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:28:32.743: INFO: namespace: e2e-tests-statefulset-dt96s, resource: bindings, ignored listing per whitelist
Jan  5 12:28:32.773: INFO: namespace e2e-tests-statefulset-dt96s deletion completed in 8.221883647s

• [SLOW TEST:125.539 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:28:32.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  5 12:28:33.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-9s7z6'
Jan  5 12:28:33.154: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  5 12:28:33.155: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan  5 12:28:35.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-9s7z6'
Jan  5 12:28:36.318: INFO: stderr: ""
Jan  5 12:28:36.318: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:28:36.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9s7z6" for this suite.
Jan  5 12:28:42.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:28:42.567: INFO: namespace: e2e-tests-kubectl-9s7z6, resource: bindings, ignored listing per whitelist
Jan  5 12:28:42.680: INFO: namespace e2e-tests-kubectl-9s7z6 deletion completed in 6.349566369s

• [SLOW TEST:9.907 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:28:42.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-e91e424d-2fb6-11ea-910c-0242ac110004
STEP: Creating a pod to test consume configMaps
Jan  5 12:28:43.096: INFO: Waiting up to 5m0s for pod "pod-configmaps-e92d0536-2fb6-11ea-910c-0242ac110004" in namespace "e2e-tests-configmap-xq6cz" to be "success or failure"
Jan  5 12:28:43.136: INFO: Pod "pod-configmaps-e92d0536-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 39.264059ms
Jan  5 12:28:45.158: INFO: Pod "pod-configmaps-e92d0536-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061979684s
Jan  5 12:28:47.177: INFO: Pod "pod-configmaps-e92d0536-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080114812s
Jan  5 12:28:49.293: INFO: Pod "pod-configmaps-e92d0536-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196558387s
Jan  5 12:28:51.306: INFO: Pod "pod-configmaps-e92d0536-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.209074046s
Jan  5 12:28:53.322: INFO: Pod "pod-configmaps-e92d0536-2fb6-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.225390772s
STEP: Saw pod success
Jan  5 12:28:53.322: INFO: Pod "pod-configmaps-e92d0536-2fb6-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:28:53.327: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e92d0536-2fb6-11ea-910c-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Jan  5 12:28:54.387: INFO: Waiting for pod pod-configmaps-e92d0536-2fb6-11ea-910c-0242ac110004 to disappear
Jan  5 12:28:54.736: INFO: Pod pod-configmaps-e92d0536-2fb6-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:28:54.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xq6cz" for this suite.
Jan  5 12:29:00.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:29:01.049: INFO: namespace: e2e-tests-configmap-xq6cz, resource: bindings, ignored listing per whitelist
Jan  5 12:29:01.049: INFO: namespace e2e-tests-configmap-xq6cz deletion completed in 6.298432177s

• [SLOW TEST:18.368 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:29:01.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  5 12:29:01.255: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3ff0c8e-2fb6-11ea-910c-0242ac110004" in namespace "e2e-tests-downward-api-nsdfw" to be "success or failure"
Jan  5 12:29:01.282: INFO: Pod "downwardapi-volume-f3ff0c8e-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 26.852087ms
Jan  5 12:29:03.292: INFO: Pod "downwardapi-volume-f3ff0c8e-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03687198s
Jan  5 12:29:05.301: INFO: Pod "downwardapi-volume-f3ff0c8e-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045852184s
Jan  5 12:29:07.782: INFO: Pod "downwardapi-volume-f3ff0c8e-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.526160877s
Jan  5 12:29:10.448: INFO: Pod "downwardapi-volume-f3ff0c8e-2fb6-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.192587267s
Jan  5 12:29:12.480: INFO: Pod "downwardapi-volume-f3ff0c8e-2fb6-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.224877161s
STEP: Saw pod success
Jan  5 12:29:12.481: INFO: Pod "downwardapi-volume-f3ff0c8e-2fb6-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:29:12.504: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f3ff0c8e-2fb6-11ea-910c-0242ac110004 container client-container: 
STEP: delete the pod
Jan  5 12:29:12.774: INFO: Waiting for pod downwardapi-volume-f3ff0c8e-2fb6-11ea-910c-0242ac110004 to disappear
Jan  5 12:29:12.793: INFO: Pod downwardapi-volume-f3ff0c8e-2fb6-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:29:12.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nsdfw" for this suite.
Jan  5 12:29:18.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:29:18.958: INFO: namespace: e2e-tests-downward-api-nsdfw, resource: bindings, ignored listing per whitelist
Jan  5 12:29:19.056: INFO: namespace e2e-tests-downward-api-nsdfw deletion completed in 6.257049719s

• [SLOW TEST:18.006 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:29:19.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan  5 12:29:19.453: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-kf7kv" to be "success or failure"
Jan  5 12:29:19.463: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.736312ms
Jan  5 12:29:21.741: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287990246s
Jan  5 12:29:23.771: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318358379s
Jan  5 12:29:25.867: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414545303s
Jan  5 12:29:27.909: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.456812042s
Jan  5 12:29:29.961: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.508396773s
Jan  5 12:29:31.998: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.545064505s
STEP: Saw pod success
Jan  5 12:29:31.998: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  5 12:29:32.019: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  5 12:29:32.206: INFO: Waiting for pod pod-host-path-test to disappear
Jan  5 12:29:32.220: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:29:32.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-kf7kv" for this suite.
Jan  5 12:29:40.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:29:40.593: INFO: namespace: e2e-tests-hostpath-kf7kv, resource: bindings, ignored listing per whitelist
Jan  5 12:29:40.643: INFO: namespace e2e-tests-hostpath-kf7kv deletion completed in 8.414379168s

• [SLOW TEST:21.587 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:29:40.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  5 12:29:40.974: INFO: Waiting up to 5m0s for pod "pod-0ba76fa1-2fb7-11ea-910c-0242ac110004" in namespace "e2e-tests-emptydir-jm8lw" to be "success or failure"
Jan  5 12:29:41.003: INFO: Pod "pod-0ba76fa1-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 28.742335ms
Jan  5 12:29:43.020: INFO: Pod "pod-0ba76fa1-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045893503s
Jan  5 12:29:45.035: INFO: Pod "pod-0ba76fa1-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060765823s
Jan  5 12:29:47.094: INFO: Pod "pod-0ba76fa1-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119076144s
Jan  5 12:29:49.107: INFO: Pod "pod-0ba76fa1-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132938579s
Jan  5 12:29:51.120: INFO: Pod "pod-0ba76fa1-2fb7-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.145152674s
STEP: Saw pod success
Jan  5 12:29:51.120: INFO: Pod "pod-0ba76fa1-2fb7-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:29:51.125: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0ba76fa1-2fb7-11ea-910c-0242ac110004 container test-container: 
STEP: delete the pod
Jan  5 12:29:52.662: INFO: Waiting for pod pod-0ba76fa1-2fb7-11ea-910c-0242ac110004 to disappear
Jan  5 12:29:52.685: INFO: Pod pod-0ba76fa1-2fb7-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:29:52.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jm8lw" for this suite.
Jan  5 12:29:58.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:29:58.805: INFO: namespace: e2e-tests-emptydir-jm8lw, resource: bindings, ignored listing per whitelist
Jan  5 12:29:58.981: INFO: namespace e2e-tests-emptydir-jm8lw deletion completed in 6.285330238s

• [SLOW TEST:18.337 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:29:58.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan  5 12:29:59.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r7ktm'
Jan  5 12:29:59.634: INFO: stderr: ""
Jan  5 12:29:59.634: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan  5 12:30:01.088: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 12:30:01.088: INFO: Found 0 / 1
Jan  5 12:30:01.645: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 12:30:01.645: INFO: Found 0 / 1
Jan  5 12:30:02.648: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 12:30:02.648: INFO: Found 0 / 1
Jan  5 12:30:03.652: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 12:30:03.652: INFO: Found 0 / 1
Jan  5 12:30:05.084: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 12:30:05.084: INFO: Found 0 / 1
Jan  5 12:30:05.865: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 12:30:05.865: INFO: Found 0 / 1
Jan  5 12:30:06.653: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 12:30:06.653: INFO: Found 0 / 1
Jan  5 12:30:07.946: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 12:30:07.946: INFO: Found 0 / 1
Jan  5 12:30:08.665: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 12:30:08.665: INFO: Found 0 / 1
Jan  5 12:30:09.653: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 12:30:09.653: INFO: Found 0 / 1
Jan  5 12:30:10.647: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 12:30:10.647: INFO: Found 1 / 1
Jan  5 12:30:10.647: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  5 12:30:10.652: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 12:30:10.652: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan  5 12:30:10.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wql9d redis-master --namespace=e2e-tests-kubectl-r7ktm'
Jan  5 12:30:10.918: INFO: stderr: ""
Jan  5 12:30:10.918: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 05 Jan 12:30:08.509 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Jan 12:30:08.509 # Server started, Redis version 3.2.12\n1:M 05 Jan 12:30:08.509 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Jan 12:30:08.510 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan  5 12:30:10.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-wql9d redis-master --namespace=e2e-tests-kubectl-r7ktm --tail=1'
Jan  5 12:30:11.046: INFO: stderr: ""
Jan  5 12:30:11.046: INFO: stdout: "1:M 05 Jan 12:30:08.510 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan  5 12:30:11.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-wql9d redis-master --namespace=e2e-tests-kubectl-r7ktm --limit-bytes=1'
Jan  5 12:30:11.264: INFO: stderr: ""
Jan  5 12:30:11.264: INFO: stdout: " "
STEP: exposing timestamps
Jan  5 12:30:11.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-wql9d redis-master --namespace=e2e-tests-kubectl-r7ktm --tail=1 --timestamps'
Jan  5 12:30:11.403: INFO: stderr: ""
Jan  5 12:30:11.403: INFO: stdout: "2020-01-05T12:30:08.518953855Z 1:M 05 Jan 12:30:08.510 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan  5 12:30:13.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-wql9d redis-master --namespace=e2e-tests-kubectl-r7ktm --since=1s'
Jan  5 12:30:14.062: INFO: stderr: ""
Jan  5 12:30:14.062: INFO: stdout: ""
Jan  5 12:30:14.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-wql9d redis-master --namespace=e2e-tests-kubectl-r7ktm --since=24h'
Jan  5 12:30:14.234: INFO: stderr: ""
Jan  5 12:30:14.234: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 05 Jan 12:30:08.509 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Jan 12:30:08.509 # Server started, Redis version 3.2.12\n1:M 05 Jan 12:30:08.509 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Jan 12:30:08.510 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan  5 12:30:14.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-r7ktm'
Jan  5 12:30:14.388: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  5 12:30:14.388: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan  5 12:30:14.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-r7ktm'
Jan  5 12:30:14.580: INFO: stderr: "No resources found.\n"
Jan  5 12:30:14.581: INFO: stdout: ""
Jan  5 12:30:14.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-r7ktm -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  5 12:30:14.697: INFO: stderr: ""
Jan  5 12:30:14.697: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:30:14.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r7ktm" for this suite.
Jan  5 12:30:36.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:30:37.029: INFO: namespace: e2e-tests-kubectl-r7ktm, resource: bindings, ignored listing per whitelist
Jan  5 12:30:37.057: INFO: namespace e2e-tests-kubectl-r7ktm deletion completed in 22.345406624s

• [SLOW TEST:38.076 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:30:37.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  5 12:30:37.295: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d3e565c-2fb7-11ea-910c-0242ac110004" in namespace "e2e-tests-downward-api-nw2vq" to be "success or failure"
Jan  5 12:30:37.299: INFO: Pod "downwardapi-volume-2d3e565c-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 3.80022ms
Jan  5 12:30:39.329: INFO: Pod "downwardapi-volume-2d3e565c-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034238873s
Jan  5 12:30:41.347: INFO: Pod "downwardapi-volume-2d3e565c-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051638553s
Jan  5 12:30:43.719: INFO: Pod "downwardapi-volume-2d3e565c-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423318263s
Jan  5 12:30:45.761: INFO: Pod "downwardapi-volume-2d3e565c-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.466196355s
Jan  5 12:30:47.808: INFO: Pod "downwardapi-volume-2d3e565c-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.513016521s
Jan  5 12:30:49.831: INFO: Pod "downwardapi-volume-2d3e565c-2fb7-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.536227196s
STEP: Saw pod success
Jan  5 12:30:49.832: INFO: Pod "downwardapi-volume-2d3e565c-2fb7-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:30:49.838: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2d3e565c-2fb7-11ea-910c-0242ac110004 container client-container: 
STEP: delete the pod
Jan  5 12:30:50.023: INFO: Waiting for pod downwardapi-volume-2d3e565c-2fb7-11ea-910c-0242ac110004 to disappear
Jan  5 12:30:50.044: INFO: Pod downwardapi-volume-2d3e565c-2fb7-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:30:50.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nw2vq" for this suite.
Jan  5 12:30:56.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:30:56.351: INFO: namespace: e2e-tests-downward-api-nw2vq, resource: bindings, ignored listing per whitelist
Jan  5 12:30:56.384: INFO: namespace e2e-tests-downward-api-nw2vq deletion completed in 6.320098819s

• [SLOW TEST:19.326 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:30:56.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-38d8a342-2fb7-11ea-910c-0242ac110004
STEP: Creating a pod to test consume secrets
Jan  5 12:30:56.781: INFO: Waiting up to 5m0s for pod "pod-secrets-38db5385-2fb7-11ea-910c-0242ac110004" in namespace "e2e-tests-secrets-2tnqm" to be "success or failure"
Jan  5 12:30:56.946: INFO: Pod "pod-secrets-38db5385-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 165.080581ms
Jan  5 12:30:58.972: INFO: Pod "pod-secrets-38db5385-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190752241s
Jan  5 12:31:00.995: INFO: Pod "pod-secrets-38db5385-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213865545s
Jan  5 12:31:03.288: INFO: Pod "pod-secrets-38db5385-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.506360996s
Jan  5 12:31:05.792: INFO: Pod "pod-secrets-38db5385-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.010491578s
Jan  5 12:31:07.817: INFO: Pod "pod-secrets-38db5385-2fb7-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.035128951s
STEP: Saw pod success
Jan  5 12:31:07.817: INFO: Pod "pod-secrets-38db5385-2fb7-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:31:07.829: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-38db5385-2fb7-11ea-910c-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Jan  5 12:31:08.391: INFO: Waiting for pod pod-secrets-38db5385-2fb7-11ea-910c-0242ac110004 to disappear
Jan  5 12:31:08.417: INFO: Pod pod-secrets-38db5385-2fb7-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:31:08.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2tnqm" for this suite.
Jan  5 12:31:14.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:31:14.691: INFO: namespace: e2e-tests-secrets-2tnqm, resource: bindings, ignored listing per whitelist
Jan  5 12:31:14.793: INFO: namespace e2e-tests-secrets-2tnqm deletion completed in 6.364148387s

• [SLOW TEST:18.408 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:31:14.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan  5 12:31:14.988: INFO: Waiting up to 5m0s for pod "var-expansion-43b62ee7-2fb7-11ea-910c-0242ac110004" in namespace "e2e-tests-var-expansion-bvf2v" to be "success or failure"
Jan  5 12:31:14.999: INFO: Pod "var-expansion-43b62ee7-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.900439ms
Jan  5 12:31:17.016: INFO: Pod "var-expansion-43b62ee7-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02766653s
Jan  5 12:31:19.031: INFO: Pod "var-expansion-43b62ee7-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042973962s
Jan  5 12:31:21.046: INFO: Pod "var-expansion-43b62ee7-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05746099s
Jan  5 12:31:23.063: INFO: Pod "var-expansion-43b62ee7-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074787776s
Jan  5 12:31:25.076: INFO: Pod "var-expansion-43b62ee7-2fb7-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088243533s
STEP: Saw pod success
Jan  5 12:31:25.077: INFO: Pod "var-expansion-43b62ee7-2fb7-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:31:25.081: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-43b62ee7-2fb7-11ea-910c-0242ac110004 container dapi-container: 
STEP: delete the pod
Jan  5 12:31:26.004: INFO: Waiting for pod var-expansion-43b62ee7-2fb7-11ea-910c-0242ac110004 to disappear
Jan  5 12:31:26.022: INFO: Pod var-expansion-43b62ee7-2fb7-11ea-910c-0242ac110004 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:31:26.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-bvf2v" for this suite.
Jan  5 12:31:32.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:31:32.243: INFO: namespace: e2e-tests-var-expansion-bvf2v, resource: bindings, ignored listing per whitelist
Jan  5 12:31:32.333: INFO: namespace e2e-tests-var-expansion-bvf2v deletion completed in 6.297757719s

• [SLOW TEST:17.540 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:31:32.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan  5 12:31:33.196: INFO: created pod pod-service-account-defaultsa
Jan  5 12:31:33.197: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan  5 12:31:33.214: INFO: created pod pod-service-account-mountsa
Jan  5 12:31:33.214: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan  5 12:31:33.242: INFO: created pod pod-service-account-nomountsa
Jan  5 12:31:33.242: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan  5 12:31:33.270: INFO: created pod pod-service-account-defaultsa-mountspec
Jan  5 12:31:33.270: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan  5 12:31:33.380: INFO: created pod pod-service-account-mountsa-mountspec
Jan  5 12:31:33.380: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan  5 12:31:33.516: INFO: created pod pod-service-account-nomountsa-mountspec
Jan  5 12:31:33.517: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan  5 12:31:33.576: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan  5 12:31:33.576: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan  5 12:31:33.599: INFO: created pod pod-service-account-mountsa-nomountspec
Jan  5 12:31:33.599: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan  5 12:31:34.385: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan  5 12:31:34.385: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:31:34.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-zkllv" for this suite.
Jan  5 12:32:00.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:32:01.031: INFO: namespace: e2e-tests-svcaccounts-zkllv, resource: bindings, ignored listing per whitelist
Jan  5 12:32:01.079: INFO: namespace e2e-tests-svcaccounts-zkllv deletion completed in 26.67737703s

• [SLOW TEST:28.745 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:32:01.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0105 12:32:11.556310       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  5 12:32:11.556: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:32:11.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-s4cvx" for this suite.
Jan  5 12:32:17.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:32:17.944: INFO: namespace: e2e-tests-gc-s4cvx, resource: bindings, ignored listing per whitelist
Jan  5 12:32:17.944: INFO: namespace e2e-tests-gc-s4cvx deletion completed in 6.365711452s

• [SLOW TEST:16.865 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:32:17.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  5 12:32:26.871: INFO: Successfully updated pod "annotationupdate6964e1a5-2fb7-11ea-910c-0242ac110004"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:32:31.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-w2cdx" for this suite.
Jan  5 12:32:55.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:32:55.261: INFO: namespace: e2e-tests-downward-api-w2cdx, resource: bindings, ignored listing per whitelist
Jan  5 12:32:55.367: INFO: namespace e2e-tests-downward-api-w2cdx deletion completed in 24.284870753s

• [SLOW TEST:37.422 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:32:55.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:33:02.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-zkbhd" for this suite.
Jan  5 12:33:08.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:33:08.739: INFO: namespace: e2e-tests-namespaces-zkbhd, resource: bindings, ignored listing per whitelist
Jan  5 12:33:08.777: INFO: namespace e2e-tests-namespaces-zkbhd deletion completed in 6.269737874s
STEP: Destroying namespace "e2e-tests-nsdeletetest-vnkqs" for this suite.
Jan  5 12:33:08.789: INFO: Namespace e2e-tests-nsdeletetest-vnkqs was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-plhc6" for this suite.
Jan  5 12:33:14.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:33:14.991: INFO: namespace: e2e-tests-nsdeletetest-plhc6, resource: bindings, ignored listing per whitelist
Jan  5 12:33:15.050: INFO: namespace e2e-tests-nsdeletetest-plhc6 deletion completed in 6.260952338s

• [SLOW TEST:19.683 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:33:15.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-8b60b793-2fb7-11ea-910c-0242ac110004
STEP: Creating a pod to test consume secrets
Jan  5 12:33:15.229: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8b6198cc-2fb7-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-487zz" to be "success or failure"
Jan  5 12:33:15.248: INFO: Pod "pod-projected-secrets-8b6198cc-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 19.079469ms
Jan  5 12:33:17.609: INFO: Pod "pod-projected-secrets-8b6198cc-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379685591s
Jan  5 12:33:19.624: INFO: Pod "pod-projected-secrets-8b6198cc-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.394982333s
Jan  5 12:33:21.646: INFO: Pod "pod-projected-secrets-8b6198cc-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.416352893s
Jan  5 12:33:23.659: INFO: Pod "pod-projected-secrets-8b6198cc-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.429665426s
Jan  5 12:33:25.669: INFO: Pod "pod-projected-secrets-8b6198cc-2fb7-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.439859393s
STEP: Saw pod success
Jan  5 12:33:25.669: INFO: Pod "pod-projected-secrets-8b6198cc-2fb7-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:33:25.673: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-8b6198cc-2fb7-11ea-910c-0242ac110004 container projected-secret-volume-test: 
STEP: delete the pod
Jan  5 12:33:26.503: INFO: Waiting for pod pod-projected-secrets-8b6198cc-2fb7-11ea-910c-0242ac110004 to disappear
Jan  5 12:33:26.635: INFO: Pod pod-projected-secrets-8b6198cc-2fb7-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:33:26.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-487zz" for this suite.
Jan  5 12:33:33.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:33:33.150: INFO: namespace: e2e-tests-projected-487zz, resource: bindings, ignored listing per whitelist
Jan  5 12:33:33.233: INFO: namespace e2e-tests-projected-487zz deletion completed in 6.578260005s

• [SLOW TEST:18.183 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:33:33.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-9641710d-2fb7-11ea-910c-0242ac110004
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-9641710d-2fb7-11ea-910c-0242ac110004
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:33:45.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-q8w27" for this suite.
Jan  5 12:34:09.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:34:09.948: INFO: namespace: e2e-tests-configmap-q8w27, resource: bindings, ignored listing per whitelist
Jan  5 12:34:10.035: INFO: namespace e2e-tests-configmap-q8w27 deletion completed in 24.305390389s

• [SLOW TEST:36.802 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
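The "updates should be reflected in volume" flow above is: create a ConfigMap, mount it as a volume in a long-running pod, update the ConfigMap, and wait for the kubelet to sync the new data into the mounted files. A rough sketch of that sequence, with clientset, namespace, and names assumed for illustration and written against current client-go signatures rather than the 1.13-vintage ones used in this run:

package example

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updateConfigMapSeenByVolume creates a ConfigMap and later updates it; a pod
// that mounts it as a volume keeps running and eventually sees the new file
// contents, because the kubelet refreshes ConfigMap volumes on its sync loop.
func updateConfigMapSeenByVolume(ctx context.Context, cs kubernetes.Interface, ns string) error {
	cm := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-example"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		return err
	}
	// ... create a long-running pod whose volume references the ConfigMap ...
	cm.Data["data-1"] = "value-2"
	if _, err := cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		return err
	}
	// The caller then polls the mounted file until it reads "value-2", which is
	// the "waiting to observe update in volume" step in the log above.
	return nil
}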
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:34:10.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-ac2be9b5-2fb7-11ea-910c-0242ac110004
STEP: Creating a pod to test consume configMaps
Jan  5 12:34:10.280: INFO: Waiting up to 5m0s for pod "pod-configmaps-ac2cffaf-2fb7-11ea-910c-0242ac110004" in namespace "e2e-tests-configmap-jjnv2" to be "success or failure"
Jan  5 12:34:10.348: INFO: Pod "pod-configmaps-ac2cffaf-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 67.190638ms
Jan  5 12:34:12.469: INFO: Pod "pod-configmaps-ac2cffaf-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188169113s
Jan  5 12:34:14.495: INFO: Pod "pod-configmaps-ac2cffaf-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21461571s
Jan  5 12:34:17.136: INFO: Pod "pod-configmaps-ac2cffaf-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.855903305s
Jan  5 12:34:19.148: INFO: Pod "pod-configmaps-ac2cffaf-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.867577909s
Jan  5 12:34:21.166: INFO: Pod "pod-configmaps-ac2cffaf-2fb7-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.885956783s
STEP: Saw pod success
Jan  5 12:34:21.167: INFO: Pod "pod-configmaps-ac2cffaf-2fb7-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:34:21.177: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ac2cffaf-2fb7-11ea-910c-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Jan  5 12:34:21.359: INFO: Waiting for pod pod-configmaps-ac2cffaf-2fb7-11ea-910c-0242ac110004 to disappear
Jan  5 12:34:21.366: INFO: Pod pod-configmaps-ac2cffaf-2fb7-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:34:21.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jjnv2" for this suite.
Jan  5 12:34:27.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:34:27.481: INFO: namespace: e2e-tests-configmap-jjnv2, resource: bindings, ignored listing per whitelist
Jan  5 12:34:27.560: INFO: namespace e2e-tests-configmap-jjnv2 deletion completed in 6.187421676s

• [SLOW TEST:17.524 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
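For the defaultMode variant above, the interesting part is the volume source: the spec sets ConfigMapVolumeSource.DefaultMode and then checks the permission bits of the mounted file from inside the container, which is what the "Trying to get logs" step scrapes. A sketch of that volume (mode value and names are illustrative):

package example

import v1 "k8s.io/api/core/v1"

// configMapVolumeWithMode returns a ConfigMap-backed volume whose files are
// created with mode 0400 instead of the default 0644.
func configMapVolumeWithMode() v1.Volume {
	mode := int32(0400)
	return v1.Volume{
		Name: "configmap-volume",
		VolumeSource: v1.VolumeSource{
			ConfigMap: &v1.ConfigMapVolumeSource{
				LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test-volume-example"},
				DefaultMode:          &mode,
			},
		},
	}
}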
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:34:27.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  5 12:34:27.692: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b6918858-2fb7-11ea-910c-0242ac110004" in namespace "e2e-tests-downward-api-m5wlh" to be "success or failure"
Jan  5 12:34:27.776: INFO: Pod "downwardapi-volume-b6918858-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 84.343385ms
Jan  5 12:34:29.795: INFO: Pod "downwardapi-volume-b6918858-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103027785s
Jan  5 12:34:31.824: INFO: Pod "downwardapi-volume-b6918858-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132223193s
Jan  5 12:34:34.032: INFO: Pod "downwardapi-volume-b6918858-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.340517767s
Jan  5 12:34:36.196: INFO: Pod "downwardapi-volume-b6918858-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.503997474s
Jan  5 12:34:38.224: INFO: Pod "downwardapi-volume-b6918858-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.532357222s
Jan  5 12:34:40.257: INFO: Pod "downwardapi-volume-b6918858-2fb7-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.56539123s
STEP: Saw pod success
Jan  5 12:34:40.257: INFO: Pod "downwardapi-volume-b6918858-2fb7-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:34:40.279: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b6918858-2fb7-11ea-910c-0242ac110004 container client-container: 
STEP: delete the pod
Jan  5 12:34:40.623: INFO: Waiting for pod downwardapi-volume-b6918858-2fb7-11ea-910c-0242ac110004 to disappear
Jan  5 12:34:40.634: INFO: Pod downwardapi-volume-b6918858-2fb7-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:34:40.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-m5wlh" for this suite.
Jan  5 12:34:46.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:34:46.933: INFO: namespace: e2e-tests-downward-api-m5wlh, resource: bindings, ignored listing per whitelist
Jan  5 12:34:46.933: INFO: namespace e2e-tests-downward-api-m5wlh deletion completed in 6.291269079s

• [SLOW TEST:19.373 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
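The downward API spec above relies on resourceFieldRef falling back to the node's allocatable memory when the container declares no memory limit. A sketch of the volume item involved (path, container name, and divisor are illustrative):

package example

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// memoryLimitFile exposes the container's effective memory limit as a file.
// With no limit declared on the container, the kubelet substitutes node
// allocatable memory, which is exactly what this conformance spec verifies.
func memoryLimitFile() v1.DownwardAPIVolumeFile {
	return v1.DownwardAPIVolumeFile{
		Path: "memory_limit",
		ResourceFieldRef: &v1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "limits.memory",
			Divisor:       resource.MustParse("1Mi"),
		},
	}
}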
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:34:46.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  5 12:34:47.214: INFO: Waiting up to 5m0s for pod "pod-c2342df6-2fb7-11ea-910c-0242ac110004" in namespace "e2e-tests-emptydir-mfnbs" to be "success or failure"
Jan  5 12:34:47.234: INFO: Pod "pod-c2342df6-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 19.999153ms
Jan  5 12:34:49.253: INFO: Pod "pod-c2342df6-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039413744s
Jan  5 12:34:51.279: INFO: Pod "pod-c2342df6-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065088027s
Jan  5 12:34:53.911: INFO: Pod "pod-c2342df6-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.696973368s
Jan  5 12:34:55.937: INFO: Pod "pod-c2342df6-2fb7-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.723605674s
Jan  5 12:34:57.954: INFO: Pod "pod-c2342df6-2fb7-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.739917182s
STEP: Saw pod success
Jan  5 12:34:57.954: INFO: Pod "pod-c2342df6-2fb7-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:34:57.960: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c2342df6-2fb7-11ea-910c-0242ac110004 container test-container: 
STEP: delete the pod
Jan  5 12:34:58.064: INFO: Waiting for pod pod-c2342df6-2fb7-11ea-910c-0242ac110004 to disappear
Jan  5 12:34:58.074: INFO: Pod pod-c2342df6-2fb7-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:34:58.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mfnbs" for this suite.
Jan  5 12:35:04.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:35:04.221: INFO: namespace: e2e-tests-emptydir-mfnbs, resource: bindings, ignored listing per whitelist
Jan  5 12:35:04.234: INFO: namespace e2e-tests-emptydir-mfnbs deletion completed in 6.152389171s

• [SLOW TEST:17.301 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
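The (root,0777,tmpfs) case maps onto an emptyDir with medium Memory plus a container that writes into the mount and verifies ownership and the 0777 mode bits. A sketch of the volume and mount, with the verification command paraphrased rather than taken from the real test image:

package example

import v1 "k8s.io/api/core/v1"

// tmpfsEmptyDirPodSpec mounts a memory-backed emptyDir and checks the mode of
// the mount point from inside the container, mirroring the 0777-on-tmpfs case.
func tmpfsEmptyDirPodSpec() v1.PodSpec {
	return v1.PodSpec{
		RestartPolicy: v1.RestartPolicyNever,
		Containers: []v1.Container{{
			Name:    "test-container",
			Image:   "docker.io/library/busybox:1.29",
			Command: []string{"sh", "-c", "stat -c %a /test-volume && mount | grep /test-volume"},
			VolumeMounts: []v1.VolumeMount{{
				Name:      "test-volume",
				MountPath: "/test-volume",
			}},
		}},
		Volumes: []v1.Volume{{
			Name: "test-volume",
			VolumeSource: v1.VolumeSource{
				EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
			},
		}},
	}
}

The (root,0666,default) spec later in this log differs only in the mode it checks and in leaving Medium unset, so the data lands on the node's default storage instead of tmpfs.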
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:35:04.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan  5 12:35:05.321: INFO: Pod name wrapped-volume-race-ccf8b297-2fb7-11ea-910c-0242ac110004: Found 0 pods out of 5
Jan  5 12:35:10.355: INFO: Pod name wrapped-volume-race-ccf8b297-2fb7-11ea-910c-0242ac110004: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ccf8b297-2fb7-11ea-910c-0242ac110004 in namespace e2e-tests-emptydir-wrapper-w4q5t, will wait for the garbage collector to delete the pods
Jan  5 12:37:44.496: INFO: Deleting ReplicationController wrapped-volume-race-ccf8b297-2fb7-11ea-910c-0242ac110004 took: 24.237317ms
Jan  5 12:37:44.797: INFO: Terminating ReplicationController wrapped-volume-race-ccf8b297-2fb7-11ea-910c-0242ac110004 pods took: 300.594728ms
STEP: Creating RC which spawns configmap-volume pods
Jan  5 12:38:30.282: INFO: Pod name wrapped-volume-race-471fdfa3-2fb8-11ea-910c-0242ac110004: Found 0 pods out of 5
Jan  5 12:38:35.318: INFO: Pod name wrapped-volume-race-471fdfa3-2fb8-11ea-910c-0242ac110004: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-471fdfa3-2fb8-11ea-910c-0242ac110004 in namespace e2e-tests-emptydir-wrapper-w4q5t, will wait for the garbage collector to delete the pods
Jan  5 12:40:39.533: INFO: Deleting ReplicationController wrapped-volume-race-471fdfa3-2fb8-11ea-910c-0242ac110004 took: 16.448973ms
Jan  5 12:40:39.934: INFO: Terminating ReplicationController wrapped-volume-race-471fdfa3-2fb8-11ea-910c-0242ac110004 pods took: 401.199643ms
STEP: Creating RC which spawns configmap-volume pods
Jan  5 12:41:24.403: INFO: Pod name wrapped-volume-race-aeeb1bec-2fb8-11ea-910c-0242ac110004: Found 0 pods out of 5
Jan  5 12:41:29.475: INFO: Pod name wrapped-volume-race-aeeb1bec-2fb8-11ea-910c-0242ac110004: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-aeeb1bec-2fb8-11ea-910c-0242ac110004 in namespace e2e-tests-emptydir-wrapper-w4q5t, will wait for the garbage collector to delete the pods
Jan  5 12:43:35.643: INFO: Deleting ReplicationController wrapped-volume-race-aeeb1bec-2fb8-11ea-910c-0242ac110004 took: 22.771344ms
Jan  5 12:43:36.144: INFO: Terminating ReplicationController wrapped-volume-race-aeeb1bec-2fb8-11ea-910c-0242ac110004 pods took: 501.234851ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:44:24.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-w4q5t" for this suite.
Jan  5 12:44:34.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:44:35.000: INFO: namespace: e2e-tests-emptydir-wrapper-w4q5t, resource: bindings, ignored listing per whitelist
Jan  5 12:44:35.064: INFO: namespace e2e-tests-emptydir-wrapper-w4q5t deletion completed in 10.199330694s

• [SLOW TEST:570.830 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:44:35.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-lkl8v/secret-test-20b06f51-2fb9-11ea-910c-0242ac110004
STEP: Creating a pod to test consume secrets
Jan  5 12:44:35.318: INFO: Waiting up to 5m0s for pod "pod-configmaps-20be218e-2fb9-11ea-910c-0242ac110004" in namespace "e2e-tests-secrets-lkl8v" to be "success or failure"
Jan  5 12:44:35.568: INFO: Pod "pod-configmaps-20be218e-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 250.435058ms
Jan  5 12:44:38.391: INFO: Pod "pod-configmaps-20be218e-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 3.073475828s
Jan  5 12:44:40.576: INFO: Pod "pod-configmaps-20be218e-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.25791766s
Jan  5 12:44:42.619: INFO: Pod "pod-configmaps-20be218e-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.301258966s
Jan  5 12:44:44.662: INFO: Pod "pod-configmaps-20be218e-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.343961477s
Jan  5 12:44:46.695: INFO: Pod "pod-configmaps-20be218e-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.377315014s
Jan  5 12:44:48.711: INFO: Pod "pod-configmaps-20be218e-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.392946273s
Jan  5 12:44:50.720: INFO: Pod "pod-configmaps-20be218e-2fb9-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.401694383s
STEP: Saw pod success
Jan  5 12:44:50.720: INFO: Pod "pod-configmaps-20be218e-2fb9-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:44:50.725: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-20be218e-2fb9-11ea-910c-0242ac110004 container env-test: 
STEP: delete the pod
Jan  5 12:44:52.132: INFO: Waiting for pod pod-configmaps-20be218e-2fb9-11ea-910c-0242ac110004 to disappear
Jan  5 12:44:52.190: INFO: Pod pod-configmaps-20be218e-2fb9-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:44:52.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-lkl8v" for this suite.
Jan  5 12:44:58.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:44:59.245: INFO: namespace: e2e-tests-secrets-lkl8v, resource: bindings, ignored listing per whitelist
Jan  5 12:44:59.582: INFO: namespace e2e-tests-secrets-lkl8v deletion completed in 7.363213698s

• [SLOW TEST:24.517 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
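"Consumable via the environment" means the secret value is injected through env valueFrom rather than a volume; the "pod-configmaps-..." pod name in the log appears to be a reused helper name, not a ConfigMap. A sketch of the env wiring (variable, secret, and key names are illustrative):

package example

import v1 "k8s.io/api/core/v1"

// secretEnvVar injects key "data-1" of Secret "secret-test-example" into the
// container environment; the container echoes it so the test can compare the
// pod logs against the expected value.
func secretEnvVar() v1.EnvVar {
	return v1.EnvVar{
		Name: "SECRET_DATA",
		ValueFrom: &v1.EnvVarSource{
			SecretKeyRef: &v1.SecretKeySelector{
				LocalObjectReference: v1.LocalObjectReference{Name: "secret-test-example"},
				Key:                  "data-1",
			},
		},
	}
}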
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:44:59.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan  5 12:44:59.753: INFO: PodSpec: initContainers in spec.initContainers
Jan  5 12:46:10.644: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-2f513d7f-2fb9-11ea-910c-0242ac110004", GenerateName:"", Namespace:"e2e-tests-init-container-md8bv", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-md8bv/pods/pod-init-2f513d7f-2fb9-11ea-910c-0242ac110004", UID:"2f5b36b6-2fb9-11ea-a994-fa163e34d433", ResourceVersion:"17256705", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713825099, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"753011242", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-fdsmr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0029d2000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fdsmr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fdsmr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fdsmr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0020401e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00172e660), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002040260)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002040280)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002040288), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00204028c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713825099, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713825099, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713825099, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713825099, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000aca060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f5ecb0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f5fea0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://2cd6b974bebcc4fbb1bc9520beff413092c09474e4a7e82f3a71bc65e64d1944"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000aca0c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000aca0a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:46:10.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-md8bv" for this suite.
Jan  5 12:46:34.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:46:34.813: INFO: namespace: e2e-tests-init-container-md8bv, resource: bindings, ignored listing per whitelist
Jan  5 12:46:34.845: INFO: namespace e2e-tests-init-container-md8bv deletion completed in 24.17854679s

• [SLOW TEST:95.262 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:46:34.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan  5 12:46:47.754: INFO: Successfully updated pod "labelsupdate68275a7c-2fb9-11ea-910c-0242ac110004"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:46:49.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b96v9" for this suite.
Jan  5 12:47:15.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:47:15.989: INFO: namespace: e2e-tests-projected-b96v9, resource: bindings, ignored listing per whitelist
Jan  5 12:47:16.083: INFO: namespace e2e-tests-projected-b96v9 deletion completed in 26.245625809s

• [SLOW TEST:41.238 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
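The labels-update spec mounts metadata.labels through a projected downwardAPI volume, changes a label on the live pod, and waits for the mounted file to catch up (downward API volume contents, like ConfigMap volumes, are refreshed by the kubelet sync loop rather than instantly). The volume half of that, sketched with an illustrative name:

package example

import v1 "k8s.io/api/core/v1"

// labelsDownwardVolume exposes the pod's own labels as a file that the kubelet
// rewrites when the labels change on the API server.
func labelsDownwardVolume() v1.Volume {
	return v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &v1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.labels"},
						}},
					},
				}},
			},
		},
	}
}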
SSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:47:16.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:48:15.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-dpgz2" for this suite.
Jan  5 12:48:23.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:48:23.309: INFO: namespace: e2e-tests-container-runtime-dpgz2, resource: bindings, ignored listing per whitelist
Jan  5 12:48:23.570: INFO: namespace e2e-tests-container-runtime-dpgz2 deletion completed in 8.440820297s

• [SLOW TEST:67.487 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
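The three containers in this runtime blackbox spec differ in restart policy, which is what the rpa/rpof/rpn suffixes encode: RestartPolicy Always, OnFailure, Never. Each runs a command that exits, and the spec compares RestartCount, Phase, Ready, and State against the matrix expected for that policy. A hypothetical helper for one cell of that matrix (names and image are illustrative):

package example

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminateCmdPod builds a pod whose only container runs a command that exits
// with the given code under the given restart policy. For RestartPolicyNever
// and a non-zero exit code, for example, the pod should settle in Phase Failed
// with RestartCount 0; Always keeps restarting it and the count climbs.
func terminateCmdPod(name string, policy v1.RestartPolicy, exitCode int) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			RestartPolicy: policy,
			Containers: []v1.Container{{
				Name:    name,
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", fmt.Sprintf("exit %d", exitCode)},
			}},
		},
	}
}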
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:48:23.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-fb828 in namespace e2e-tests-proxy-kdq96
I0105 12:48:24.111967       8 runners.go:184] Created replication controller with name: proxy-service-fb828, namespace: e2e-tests-proxy-kdq96, replica count: 1
I0105 12:48:25.163320       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 12:48:26.163925       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 12:48:27.164437       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 12:48:28.165080       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 12:48:29.165670       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 12:48:30.166262       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 12:48:31.166961       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 12:48:32.168554       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 12:48:33.169773       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 12:48:34.170638       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0105 12:48:35.171475       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0105 12:48:36.172114       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0105 12:48:37.172775       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0105 12:48:38.173452       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0105 12:48:39.174099       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0105 12:48:40.174770       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0105 12:48:41.175632       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0105 12:48:42.176400       8 runners.go:184] proxy-service-fb828 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  5 12:48:42.194: INFO: setup took 18.297112149s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan  5 12:48:42.244: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-kdq96/pods/proxy-service-fb828-b8gdx:1080/proxy/: ... (remaining proxy attempt output and test summary truncated)
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  5 12:48:55.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbd76946-2fb9-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-r67rz" to be "success or failure"
Jan  5 12:48:55.607: INFO: Pod "downwardapi-volume-bbd76946-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 82.015793ms
Jan  5 12:48:58.199: INFO: Pod "downwardapi-volume-bbd76946-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.673693505s
Jan  5 12:49:00.213: INFO: Pod "downwardapi-volume-bbd76946-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.687628601s
Jan  5 12:49:02.588: INFO: Pod "downwardapi-volume-bbd76946-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.062502287s
Jan  5 12:49:04.615: INFO: Pod "downwardapi-volume-bbd76946-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.089750393s
Jan  5 12:49:06.628: INFO: Pod "downwardapi-volume-bbd76946-2fb9-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.102491091s
STEP: Saw pod success
Jan  5 12:49:06.628: INFO: Pod "downwardapi-volume-bbd76946-2fb9-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:49:06.632: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-bbd76946-2fb9-11ea-910c-0242ac110004 container client-container: 
STEP: delete the pod
Jan  5 12:49:07.502: INFO: Waiting for pod downwardapi-volume-bbd76946-2fb9-11ea-910c-0242ac110004 to disappear
Jan  5 12:49:07.510: INFO: Pod downwardapi-volume-bbd76946-2fb9-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:49:07.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r67rz" for this suite.
Jan  5 12:49:13.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:49:14.089: INFO: namespace: e2e-tests-projected-r67rz, resource: bindings, ignored listing per whitelist
Jan  5 12:49:14.125: INFO: namespace e2e-tests-projected-r67rz deletion completed in 6.601326034s

• [SLOW TEST:18.804 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:49:14.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan  5 12:49:14.397: INFO: Waiting up to 5m0s for pod "downward-api-c6ffb4bf-2fb9-11ea-910c-0242ac110004" in namespace "e2e-tests-downward-api-nv9sv" to be "success or failure"
Jan  5 12:49:14.424: INFO: Pod "downward-api-c6ffb4bf-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 26.902768ms
Jan  5 12:49:16.434: INFO: Pod "downward-api-c6ffb4bf-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037458126s
Jan  5 12:49:18.459: INFO: Pod "downward-api-c6ffb4bf-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062231725s
Jan  5 12:49:20.483: INFO: Pod "downward-api-c6ffb4bf-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086400205s
Jan  5 12:49:22.533: INFO: Pod "downward-api-c6ffb4bf-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13550689s
Jan  5 12:49:24.587: INFO: Pod "downward-api-c6ffb4bf-2fb9-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.190158943s
STEP: Saw pod success
Jan  5 12:49:24.587: INFO: Pod "downward-api-c6ffb4bf-2fb9-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:49:24.619: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c6ffb4bf-2fb9-11ea-910c-0242ac110004 container dapi-container: 
STEP: delete the pod
Jan  5 12:49:25.946: INFO: Waiting for pod downward-api-c6ffb4bf-2fb9-11ea-910c-0242ac110004 to disappear
Jan  5 12:49:25.963: INFO: Pod downward-api-c6ffb4bf-2fb9-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:49:25.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nv9sv" for this suite.
Jan  5 12:49:32.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:49:32.216: INFO: namespace: e2e-tests-downward-api-nv9sv, resource: bindings, ignored listing per whitelist
Jan  5 12:49:32.299: INFO: namespace e2e-tests-downward-api-nv9sv deletion completed in 6.320816698s

• [SLOW TEST:18.174 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
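This is the env-var counterpart of the downward API volume specs earlier in the log: each of limits.cpu, limits.memory, requests.cpu, and requests.memory is surfaced through valueFrom.resourceFieldRef and echoed by the container. One of the four, sketched with an illustrative variable name and divisor:

package example

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cpuLimitEnvVar exposes the container's CPU limit as CPU_LIMIT, scaled by the
// divisor (here whole cores); the other three variables follow the same shape
// with limits.memory, requests.cpu, and requests.memory.
func cpuLimitEnvVar() v1.EnvVar {
	return v1.EnvVar{
		Name: "CPU_LIMIT",
		ValueFrom: &v1.EnvVarSource{
			ResourceFieldRef: &v1.ResourceFieldSelector{
				Resource: "limits.cpu",
				Divisor:  resource.MustParse("1"),
			},
		},
	}
}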
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:49:32.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  5 12:49:32.755: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1f80aeb-2fb9-11ea-910c-0242ac110004" in namespace "e2e-tests-downward-api-9wjt8" to be "success or failure"
Jan  5 12:49:32.805: INFO: Pod "downwardapi-volume-d1f80aeb-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 49.721968ms
Jan  5 12:49:34.819: INFO: Pod "downwardapi-volume-d1f80aeb-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063388778s
Jan  5 12:49:36.835: INFO: Pod "downwardapi-volume-d1f80aeb-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07947943s
Jan  5 12:49:39.175: INFO: Pod "downwardapi-volume-d1f80aeb-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.419510876s
Jan  5 12:49:41.188: INFO: Pod "downwardapi-volume-d1f80aeb-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.432818793s
Jan  5 12:49:43.201: INFO: Pod "downwardapi-volume-d1f80aeb-2fb9-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.445672164s
STEP: Saw pod success
Jan  5 12:49:43.201: INFO: Pod "downwardapi-volume-d1f80aeb-2fb9-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:49:43.207: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d1f80aeb-2fb9-11ea-910c-0242ac110004 container client-container: 
STEP: delete the pod
Jan  5 12:49:44.206: INFO: Waiting for pod downwardapi-volume-d1f80aeb-2fb9-11ea-910c-0242ac110004 to disappear
Jan  5 12:49:44.233: INFO: Pod downwardapi-volume-d1f80aeb-2fb9-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:49:44.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9wjt8" for this suite.
Jan  5 12:49:52.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:49:52.675: INFO: namespace: e2e-tests-downward-api-9wjt8, resource: bindings, ignored listing per whitelist
Jan  5 12:49:52.716: INFO: namespace e2e-tests-downward-api-9wjt8 deletion completed in 8.469829324s

• [SLOW TEST:20.416 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:49:52.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan  5 12:49:52.908: INFO: Waiting up to 5m0s for pod "client-containers-de0b561d-2fb9-11ea-910c-0242ac110004" in namespace "e2e-tests-containers-8bcrg" to be "success or failure"
Jan  5 12:49:52.920: INFO: Pod "client-containers-de0b561d-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.911386ms
Jan  5 12:49:55.314: INFO: Pod "client-containers-de0b561d-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.405453727s
Jan  5 12:49:57.329: INFO: Pod "client-containers-de0b561d-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.420604704s
Jan  5 12:49:59.477: INFO: Pod "client-containers-de0b561d-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.568820601s
Jan  5 12:50:01.503: INFO: Pod "client-containers-de0b561d-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.594051716s
Jan  5 12:50:03.513: INFO: Pod "client-containers-de0b561d-2fb9-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.604662377s
STEP: Saw pod success
Jan  5 12:50:03.513: INFO: Pod "client-containers-de0b561d-2fb9-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:50:03.517: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-de0b561d-2fb9-11ea-910c-0242ac110004 container test-container: 
STEP: delete the pod
Jan  5 12:50:04.202: INFO: Waiting for pod client-containers-de0b561d-2fb9-11ea-910c-0242ac110004 to disappear
Jan  5 12:50:04.570: INFO: Pod client-containers-de0b561d-2fb9-11ea-910c-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:50:04.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-8bcrg" for this suite.
Jan  5 12:50:10.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:50:10.957: INFO: namespace: e2e-tests-containers-8bcrg, resource: bindings, ignored listing per whitelist
Jan  5 12:50:10.981: INFO: namespace e2e-tests-containers-8bcrg deletion completed in 6.373577148s

• [SLOW TEST:18.264 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
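Overriding "the image's default command and arguments" amounts to setting both Command (the image ENTRYPOINT) and Args (the image CMD) on the container; the sibling spec that closes this capture overrides Args alone. A sketch, with command, args, and image chosen for illustration rather than taken from the test:

package example

import v1 "k8s.io/api/core/v1"

// overrideAllContainer replaces both the image ENTRYPOINT (Command) and CMD
// (Args). Leaving Command nil and setting only Args would override just the
// image's default arguments, which is the other Docker Containers spec here.
func overrideAllContainer() v1.Container {
	return v1.Container{
		Name:    "test-container",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"/bin/sh", "-c"},
		Args:    []string{"echo override-all"},
	}
}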
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:50:10.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  5 12:50:11.167: INFO: Waiting up to 5m0s for pod "pod-e8ed4bc2-2fb9-11ea-910c-0242ac110004" in namespace "e2e-tests-emptydir-b9rlh" to be "success or failure"
Jan  5 12:50:11.191: INFO: Pod "pod-e8ed4bc2-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 23.673041ms
Jan  5 12:50:13.205: INFO: Pod "pod-e8ed4bc2-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038188801s
Jan  5 12:50:15.241: INFO: Pod "pod-e8ed4bc2-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074035299s
Jan  5 12:50:17.854: INFO: Pod "pod-e8ed4bc2-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.686578974s
Jan  5 12:50:19.942: INFO: Pod "pod-e8ed4bc2-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.775282194s
Jan  5 12:50:21.957: INFO: Pod "pod-e8ed4bc2-2fb9-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.789464341s
STEP: Saw pod success
Jan  5 12:50:21.957: INFO: Pod "pod-e8ed4bc2-2fb9-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:50:21.963: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e8ed4bc2-2fb9-11ea-910c-0242ac110004 container test-container: 
STEP: delete the pod
Jan  5 12:50:22.620: INFO: Waiting for pod pod-e8ed4bc2-2fb9-11ea-910c-0242ac110004 to disappear
Jan  5 12:50:22.904: INFO: Pod pod-e8ed4bc2-2fb9-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:50:22.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-b9rlh" for this suite.
Jan  5 12:50:29.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:50:29.185: INFO: namespace: e2e-tests-emptydir-b9rlh, resource: bindings, ignored listing per whitelist
Jan  5 12:50:29.394: INFO: namespace e2e-tests-emptydir-b9rlh deletion completed in 6.474401792s

• [SLOW TEST:18.413 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
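The EmptyDir (root,0666,default) spec above checks that a file created in an emptyDir volume on the default (disk-backed) medium carries the expected 0666 mode. A minimal Go sketch of a comparable pod, assuming a busybox container that writes the file and prints its mode; the actual suite uses its own test image and flags, so everything below is illustrative.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative pod: an emptyDir volume on the default medium, with a
	// container that creates a 0666 file in it and prints the resulting mode.
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name:         "test-volume",
				VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}}, // default medium
			}},
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"/bin/sh", "-c",
					"touch /test-volume/file && chmod 0666 /test-volume/file && stat -c %a /test-volume/file"},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, err := json.MarshalIndent(&pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
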
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:50:29.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan  5 12:50:29.700: INFO: Waiting up to 5m0s for pod "client-containers-f3f3d41d-2fb9-11ea-910c-0242ac110004" in namespace "e2e-tests-containers-4cj6n" to be "success or failure"
Jan  5 12:50:29.767: INFO: Pod "client-containers-f3f3d41d-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 66.730912ms
Jan  5 12:50:31.784: INFO: Pod "client-containers-f3f3d41d-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084372256s
Jan  5 12:50:33.810: INFO: Pod "client-containers-f3f3d41d-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109885095s
Jan  5 12:50:36.130: INFO: Pod "client-containers-f3f3d41d-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429692894s
Jan  5 12:50:38.142: INFO: Pod "client-containers-f3f3d41d-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.442355635s
Jan  5 12:50:40.164: INFO: Pod "client-containers-f3f3d41d-2fb9-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.463646655s
STEP: Saw pod success
Jan  5 12:50:40.164: INFO: Pod "client-containers-f3f3d41d-2fb9-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:50:40.282: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-f3f3d41d-2fb9-11ea-910c-0242ac110004 container test-container: 
STEP: delete the pod
Jan  5 12:50:40.421: INFO: Waiting for pod client-containers-f3f3d41d-2fb9-11ea-910c-0242ac110004 to disappear
Jan  5 12:50:40.462: INFO: Pod client-containers-f3f3d41d-2fb9-11ea-910c-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:50:40.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-4cj6n" for this suite.
Jan  5 12:50:46.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:50:46.659: INFO: namespace: e2e-tests-containers-4cj6n, resource: bindings, ignored listing per whitelist
Jan  5 12:50:46.702: INFO: namespace e2e-tests-containers-4cj6n deletion completed in 6.226895866s

• [SLOW TEST:17.307 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
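This second Docker Containers spec is the args-only variant: with no `command` set, the image's ENTRYPOINT stays in effect and only its CMD is replaced by `args`. A compact sketch of such a container, again with illustrative values rather than the suite's fixtures.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Args-only override: the ENTRYPOINT from the image is kept, CMD is replaced.
	c := v1.Container{
		Name:  "test-container",
		Image: "busybox",
		Args:  []string{"override", "arguments"}, // no Command set
	}
	out, err := json.MarshalIndent(&c, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
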
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:50:46.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  5 12:50:46.945: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fe4002c2-2fb9-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-vjbxm" to be "success or failure"
Jan  5 12:50:46.976: INFO: Pod "downwardapi-volume-fe4002c2-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 30.905656ms
Jan  5 12:50:48.991: INFO: Pod "downwardapi-volume-fe4002c2-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045651236s
Jan  5 12:50:51.014: INFO: Pod "downwardapi-volume-fe4002c2-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068354326s
Jan  5 12:50:53.315: INFO: Pod "downwardapi-volume-fe4002c2-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.369370981s
Jan  5 12:50:55.540: INFO: Pod "downwardapi-volume-fe4002c2-2fb9-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.594641937s
Jan  5 12:50:57.555: INFO: Pod "downwardapi-volume-fe4002c2-2fb9-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.610000354s
STEP: Saw pod success
Jan  5 12:50:57.556: INFO: Pod "downwardapi-volume-fe4002c2-2fb9-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:50:57.560: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fe4002c2-2fb9-11ea-910c-0242ac110004 container client-container: 
STEP: delete the pod
Jan  5 12:50:58.164: INFO: Waiting for pod downwardapi-volume-fe4002c2-2fb9-11ea-910c-0242ac110004 to disappear
Jan  5 12:50:58.576: INFO: Pod downwardapi-volume-fe4002c2-2fb9-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:50:58.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vjbxm" for this suite.
Jan  5 12:51:04.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:51:04.687: INFO: namespace: e2e-tests-projected-vjbxm, resource: bindings, ignored listing per whitelist
Jan  5 12:51:04.792: INFO: namespace e2e-tests-projected-vjbxm deletion completed in 6.196238222s

• [SLOW TEST:18.090 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
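The Projected downwardAPI spec above mounts a projected volume that exposes the container's own CPU limit through a resourceFieldRef, then reads it back from the mounted file. A Go sketch of such a pod follows; the limit value, file path, and names are assumptions for illustration, not the suite's exact fixture.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative pod: a projected downwardAPI volume exposing limits.cpu
	// of the container as a file the container can read.
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							DownwardAPI: &v1.DownwardAPIProjection{
								Items: []v1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &v1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{v1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, err := json.MarshalIndent(&pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
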
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:51:04.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-tmhzx
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  5 12:51:05.142: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  5 12:51:39.561: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-tmhzx PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 12:51:39.562: INFO: >>> kubeConfig: /root/.kube/config
I0105 12:51:39.695883       8 log.go:172] (0xc00184c2c0) (0xc001e91d60) Create stream
I0105 12:51:39.696117       8 log.go:172] (0xc00184c2c0) (0xc001e91d60) Stream added, broadcasting: 1
I0105 12:51:39.710635       8 log.go:172] (0xc00184c2c0) Reply frame received for 1
I0105 12:51:39.710833       8 log.go:172] (0xc00184c2c0) (0xc0024b0f00) Create stream
I0105 12:51:39.710871       8 log.go:172] (0xc00184c2c0) (0xc0024b0f00) Stream added, broadcasting: 3
I0105 12:51:39.714336       8 log.go:172] (0xc00184c2c0) Reply frame received for 3
I0105 12:51:39.714391       8 log.go:172] (0xc00184c2c0) (0xc001e91e00) Create stream
I0105 12:51:39.714417       8 log.go:172] (0xc00184c2c0) (0xc001e91e00) Stream added, broadcasting: 5
I0105 12:51:39.716957       8 log.go:172] (0xc00184c2c0) Reply frame received for 5
I0105 12:51:39.896759       8 log.go:172] (0xc00184c2c0) Data frame received for 3
I0105 12:51:39.896899       8 log.go:172] (0xc0024b0f00) (3) Data frame handling
I0105 12:51:39.896955       8 log.go:172] (0xc0024b0f00) (3) Data frame sent
I0105 12:51:40.152616       8 log.go:172] (0xc00184c2c0) Data frame received for 1
I0105 12:51:40.152723       8 log.go:172] (0xc001e91d60) (1) Data frame handling
I0105 12:51:40.152759       8 log.go:172] (0xc001e91d60) (1) Data frame sent
I0105 12:51:40.152800       8 log.go:172] (0xc00184c2c0) (0xc001e91d60) Stream removed, broadcasting: 1
I0105 12:51:40.153492       8 log.go:172] (0xc00184c2c0) (0xc0024b0f00) Stream removed, broadcasting: 3
I0105 12:51:40.153655       8 log.go:172] (0xc00184c2c0) (0xc001e91e00) Stream removed, broadcasting: 5
I0105 12:51:40.153783       8 log.go:172] (0xc00184c2c0) (0xc001e91d60) Stream removed, broadcasting: 1
I0105 12:51:40.153823       8 log.go:172] (0xc00184c2c0) (0xc0024b0f00) Stream removed, broadcasting: 3
I0105 12:51:40.153848       8 log.go:172] (0xc00184c2c0) (0xc001e91e00) Stream removed, broadcasting: 5
I0105 12:51:40.155057       8 log.go:172] (0xc00184c2c0) Go away received
Jan  5 12:51:40.155: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:51:40.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-tmhzx" for this suite.
Jan  5 12:52:04.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:52:04.302: INFO: namespace: e2e-tests-pod-network-test-tmhzx, resource: bindings, ignored listing per whitelist
Jan  5 12:52:04.330: INFO: namespace e2e-tests-pod-network-test-tmhzx deletion completed in 24.151110663s

• [SLOW TEST:59.537 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
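The intra-pod check above (12:51:39) curls the "dial" endpoint of one test pod, asking it to reach the other pod's hostName handler over HTTP and report who answered. The same probe can be issued with a plain HTTP client; the sketch below assumes the pod IPs and port taken from this run's log (10.32.0.5 dialing 10.32.0.4:8080) and is only reachable from inside the cluster network while those pods exist.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Same request the framework issues via curl in the log above.
	url := "http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1"

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("dial request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("reading response failed:", err)
		return
	}
	// The dial endpoint responds with a JSON document listing the hostnames that answered.
	fmt.Println(string(body))
}
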
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:52:04.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-gfqnt
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-gfqnt
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-gfqnt
Jan  5 12:52:04.779: INFO: Found 0 stateful pods, waiting for 1
Jan  5 12:52:14.798: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan  5 12:52:14.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 12:52:15.374: INFO: stderr: "I0105 12:52:15.083038    3019 log.go:172] (0xc000774160) (0xc0005ce000) Create stream\nI0105 12:52:15.083309    3019 log.go:172] (0xc000774160) (0xc0005ce000) Stream added, broadcasting: 1\nI0105 12:52:15.095517    3019 log.go:172] (0xc000774160) Reply frame received for 1\nI0105 12:52:15.095556    3019 log.go:172] (0xc000774160) (0xc0008ac500) Create stream\nI0105 12:52:15.095568    3019 log.go:172] (0xc000774160) (0xc0008ac500) Stream added, broadcasting: 3\nI0105 12:52:15.097495    3019 log.go:172] (0xc000774160) Reply frame received for 3\nI0105 12:52:15.097521    3019 log.go:172] (0xc000774160) (0xc000588dc0) Create stream\nI0105 12:52:15.097530    3019 log.go:172] (0xc000774160) (0xc000588dc0) Stream added, broadcasting: 5\nI0105 12:52:15.100129    3019 log.go:172] (0xc000774160) Reply frame received for 5\nI0105 12:52:15.244746    3019 log.go:172] (0xc000774160) Data frame received for 3\nI0105 12:52:15.244790    3019 log.go:172] (0xc0008ac500) (3) Data frame handling\nI0105 12:52:15.244810    3019 log.go:172] (0xc0008ac500) (3) Data frame sent\nI0105 12:52:15.360509    3019 log.go:172] (0xc000774160) Data frame received for 1\nI0105 12:52:15.360753    3019 log.go:172] (0xc000774160) (0xc0008ac500) Stream removed, broadcasting: 3\nI0105 12:52:15.360845    3019 log.go:172] (0xc0005ce000) (1) Data frame handling\nI0105 12:52:15.360879    3019 log.go:172] (0xc0005ce000) (1) Data frame sent\nI0105 12:52:15.361006    3019 log.go:172] (0xc000774160) (0xc000588dc0) Stream removed, broadcasting: 5\nI0105 12:52:15.361133    3019 log.go:172] (0xc000774160) (0xc0005ce000) Stream removed, broadcasting: 1\nI0105 12:52:15.361182    3019 log.go:172] (0xc000774160) Go away received\nI0105 12:52:15.361527    3019 log.go:172] (0xc000774160) (0xc0005ce000) Stream removed, broadcasting: 1\nI0105 12:52:15.361547    3019 log.go:172] (0xc000774160) (0xc0008ac500) Stream removed, broadcasting: 3\nI0105 12:52:15.361557    3019 log.go:172] (0xc000774160) (0xc000588dc0) Stream removed, broadcasting: 5\n"
Jan  5 12:52:15.375: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 12:52:15.375: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

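The `mv` at 12:52:15 above is how the test toggles pod health: the stateful pods serve /usr/share/nginx/html/index.html and their readiness check depends on it, so moving the file to /tmp flips Ready to false (as the transition that follows shows) without restarting the container, and moving it back later restores readiness. The exact probe wiring is not visible in this log; assuming an HTTP readiness probe against that file, it would look roughly like the sketch below, with field names following recent k8s.io/api releases (the v1.13-era API embedded the handler under a different field name).

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Assumed readiness probe: succeeds only while index.html is servable,
	// so mv-ing the file away marks the pod unready without killing it.
	probe := v1.Probe{
		ProbeHandler: v1.ProbeHandler{
			HTTPGet: &v1.HTTPGetAction{
				Path: "/index.html",
				Port: intstr.FromInt(80),
			},
		},
		PeriodSeconds:    1,
		SuccessThreshold: 1,
		FailureThreshold: 1,
	}
	out, err := json.MarshalIndent(&probe, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
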
Jan  5 12:52:15.396: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  5 12:52:25.421: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 12:52:25.421: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 12:52:25.501: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  5 12:52:25.501: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  }]
Jan  5 12:52:25.501: INFO: 
Jan  5 12:52:25.501: INFO: StatefulSet ss has not reached scale 3, at 1
Jan  5 12:52:27.318: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.977446783s
Jan  5 12:52:28.421: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.160242938s
Jan  5 12:52:29.518: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.057404074s
Jan  5 12:52:30.539: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.96022883s
Jan  5 12:52:32.715: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.939465199s
Jan  5 12:52:34.489: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.762916903s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-gfqnt
Jan  5 12:52:35.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:52:36.228: INFO: stderr: "I0105 12:52:35.788848    3042 log.go:172] (0xc00014c0b0) (0xc0006fa640) Create stream\nI0105 12:52:35.789054    3042 log.go:172] (0xc00014c0b0) (0xc0006fa640) Stream added, broadcasting: 1\nI0105 12:52:35.798980    3042 log.go:172] (0xc00014c0b0) Reply frame received for 1\nI0105 12:52:35.799098    3042 log.go:172] (0xc00014c0b0) (0xc0004ccc80) Create stream\nI0105 12:52:35.799115    3042 log.go:172] (0xc00014c0b0) (0xc0004ccc80) Stream added, broadcasting: 3\nI0105 12:52:35.800029    3042 log.go:172] (0xc00014c0b0) Reply frame received for 3\nI0105 12:52:35.800083    3042 log.go:172] (0xc00014c0b0) (0xc00067a000) Create stream\nI0105 12:52:35.800096    3042 log.go:172] (0xc00014c0b0) (0xc00067a000) Stream added, broadcasting: 5\nI0105 12:52:35.801572    3042 log.go:172] (0xc00014c0b0) Reply frame received for 5\nI0105 12:52:36.002860    3042 log.go:172] (0xc00014c0b0) Data frame received for 3\nI0105 12:52:36.003759    3042 log.go:172] (0xc0004ccc80) (3) Data frame handling\nI0105 12:52:36.003855    3042 log.go:172] (0xc0004ccc80) (3) Data frame sent\nI0105 12:52:36.220904    3042 log.go:172] (0xc00014c0b0) (0xc0004ccc80) Stream removed, broadcasting: 3\nI0105 12:52:36.221026    3042 log.go:172] (0xc00014c0b0) Data frame received for 1\nI0105 12:52:36.221056    3042 log.go:172] (0xc0006fa640) (1) Data frame handling\nI0105 12:52:36.221073    3042 log.go:172] (0xc0006fa640) (1) Data frame sent\nI0105 12:52:36.221080    3042 log.go:172] (0xc00014c0b0) (0xc0006fa640) Stream removed, broadcasting: 1\nI0105 12:52:36.221114    3042 log.go:172] (0xc00014c0b0) (0xc00067a000) Stream removed, broadcasting: 5\nI0105 12:52:36.221165    3042 log.go:172] (0xc00014c0b0) Go away received\nI0105 12:52:36.221348    3042 log.go:172] (0xc00014c0b0) (0xc0006fa640) Stream removed, broadcasting: 1\nI0105 12:52:36.221364    3042 log.go:172] (0xc00014c0b0) (0xc0004ccc80) Stream removed, broadcasting: 3\nI0105 12:52:36.221372    3042 log.go:172] (0xc00014c0b0) (0xc00067a000) Stream removed, broadcasting: 5\n"
Jan  5 12:52:36.229: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 12:52:36.229: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 12:52:36.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:52:37.131: INFO: stderr: "I0105 12:52:36.506183    3063 log.go:172] (0xc0006a02c0) (0xc0006ec5a0) Create stream\nI0105 12:52:36.506371    3063 log.go:172] (0xc0006a02c0) (0xc0006ec5a0) Stream added, broadcasting: 1\nI0105 12:52:36.514850    3063 log.go:172] (0xc0006a02c0) Reply frame received for 1\nI0105 12:52:36.514904    3063 log.go:172] (0xc0006a02c0) (0xc000646c80) Create stream\nI0105 12:52:36.514931    3063 log.go:172] (0xc0006a02c0) (0xc000646c80) Stream added, broadcasting: 3\nI0105 12:52:36.520317    3063 log.go:172] (0xc0006a02c0) Reply frame received for 3\nI0105 12:52:36.520346    3063 log.go:172] (0xc0006a02c0) (0xc00020e000) Create stream\nI0105 12:52:36.520365    3063 log.go:172] (0xc0006a02c0) (0xc00020e000) Stream added, broadcasting: 5\nI0105 12:52:36.523025    3063 log.go:172] (0xc0006a02c0) Reply frame received for 5\nI0105 12:52:36.902779    3063 log.go:172] (0xc0006a02c0) Data frame received for 5\nI0105 12:52:36.902890    3063 log.go:172] (0xc00020e000) (5) Data frame handling\nI0105 12:52:36.902901    3063 log.go:172] (0xc00020e000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0105 12:52:36.902917    3063 log.go:172] (0xc0006a02c0) Data frame received for 3\nI0105 12:52:36.902922    3063 log.go:172] (0xc000646c80) (3) Data frame handling\nI0105 12:52:36.902929    3063 log.go:172] (0xc000646c80) (3) Data frame sent\nI0105 12:52:37.124279    3063 log.go:172] (0xc0006a02c0) (0xc000646c80) Stream removed, broadcasting: 3\nI0105 12:52:37.124373    3063 log.go:172] (0xc0006a02c0) Data frame received for 1\nI0105 12:52:37.124397    3063 log.go:172] (0xc0006ec5a0) (1) Data frame handling\nI0105 12:52:37.124414    3063 log.go:172] (0xc0006ec5a0) (1) Data frame sent\nI0105 12:52:37.124458    3063 log.go:172] (0xc0006a02c0) (0xc0006ec5a0) Stream removed, broadcasting: 1\nI0105 12:52:37.124466    3063 log.go:172] (0xc0006a02c0) (0xc00020e000) Stream removed, broadcasting: 5\nI0105 12:52:37.124502    3063 log.go:172] (0xc0006a02c0) Go away received\nI0105 12:52:37.124752    3063 log.go:172] (0xc0006a02c0) (0xc0006ec5a0) Stream removed, broadcasting: 1\nI0105 12:52:37.124779    3063 log.go:172] (0xc0006a02c0) (0xc000646c80) Stream removed, broadcasting: 3\nI0105 12:52:37.124805    3063 log.go:172] (0xc0006a02c0) (0xc00020e000) Stream removed, broadcasting: 5\n"
Jan  5 12:52:37.132: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 12:52:37.132: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 12:52:37.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:52:37.642: INFO: stderr: "I0105 12:52:37.402408    3085 log.go:172] (0xc000718370) (0xc000740640) Create stream\nI0105 12:52:37.402638    3085 log.go:172] (0xc000718370) (0xc000740640) Stream added, broadcasting: 1\nI0105 12:52:37.409327    3085 log.go:172] (0xc000718370) Reply frame received for 1\nI0105 12:52:37.409378    3085 log.go:172] (0xc000718370) (0xc000662b40) Create stream\nI0105 12:52:37.409388    3085 log.go:172] (0xc000718370) (0xc000662b40) Stream added, broadcasting: 3\nI0105 12:52:37.410391    3085 log.go:172] (0xc000718370) Reply frame received for 3\nI0105 12:52:37.410435    3085 log.go:172] (0xc000718370) (0xc0006ec000) Create stream\nI0105 12:52:37.410446    3085 log.go:172] (0xc000718370) (0xc0006ec000) Stream added, broadcasting: 5\nI0105 12:52:37.411410    3085 log.go:172] (0xc000718370) Reply frame received for 5\nI0105 12:52:37.511006    3085 log.go:172] (0xc000718370) Data frame received for 3\nI0105 12:52:37.511068    3085 log.go:172] (0xc000662b40) (3) Data frame handling\nI0105 12:52:37.511085    3085 log.go:172] (0xc000662b40) (3) Data frame sent\nI0105 12:52:37.511140    3085 log.go:172] (0xc000718370) Data frame received for 5\nI0105 12:52:37.511153    3085 log.go:172] (0xc0006ec000) (5) Data frame handling\nI0105 12:52:37.511166    3085 log.go:172] (0xc0006ec000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0105 12:52:37.633613    3085 log.go:172] (0xc000718370) Data frame received for 1\nI0105 12:52:37.634006    3085 log.go:172] (0xc000740640) (1) Data frame handling\nI0105 12:52:37.634074    3085 log.go:172] (0xc000740640) (1) Data frame sent\nI0105 12:52:37.634685    3085 log.go:172] (0xc000718370) (0xc000740640) Stream removed, broadcasting: 1\nI0105 12:52:37.634728    3085 log.go:172] (0xc000718370) (0xc000662b40) Stream removed, broadcasting: 3\nI0105 12:52:37.634819    3085 log.go:172] (0xc000718370) (0xc0006ec000) Stream removed, broadcasting: 5\nI0105 12:52:37.634925    3085 log.go:172] (0xc000718370) (0xc000740640) Stream removed, broadcasting: 1\nI0105 12:52:37.634942    3085 log.go:172] (0xc000718370) (0xc000662b40) Stream removed, broadcasting: 3\nI0105 12:52:37.634950    3085 log.go:172] (0xc000718370) (0xc0006ec000) Stream removed, broadcasting: 5\nI0105 12:52:37.635375    3085 log.go:172] (0xc000718370) Go away received\n"
Jan  5 12:52:37.643: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 12:52:37.643: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 12:52:37.660: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 12:52:37.660: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 12:52:37.660: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan  5 12:52:37.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 12:52:38.102: INFO: stderr: "I0105 12:52:37.841480    3107 log.go:172] (0xc00013a6e0) (0xc000667400) Create stream\nI0105 12:52:37.841715    3107 log.go:172] (0xc00013a6e0) (0xc000667400) Stream added, broadcasting: 1\nI0105 12:52:37.848247    3107 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0105 12:52:37.848303    3107 log.go:172] (0xc00013a6e0) (0xc0005b4000) Create stream\nI0105 12:52:37.848320    3107 log.go:172] (0xc00013a6e0) (0xc0005b4000) Stream added, broadcasting: 3\nI0105 12:52:37.849849    3107 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0105 12:52:37.849937    3107 log.go:172] (0xc00013a6e0) (0xc000790000) Create stream\nI0105 12:52:37.849956    3107 log.go:172] (0xc00013a6e0) (0xc000790000) Stream added, broadcasting: 5\nI0105 12:52:37.851324    3107 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0105 12:52:37.979590    3107 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0105 12:52:37.979672    3107 log.go:172] (0xc0005b4000) (3) Data frame handling\nI0105 12:52:37.979718    3107 log.go:172] (0xc0005b4000) (3) Data frame sent\nI0105 12:52:38.092356    3107 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0105 12:52:38.092685    3107 log.go:172] (0xc000667400) (1) Data frame handling\nI0105 12:52:38.092822    3107 log.go:172] (0xc000667400) (1) Data frame sent\nI0105 12:52:38.092942    3107 log.go:172] (0xc00013a6e0) (0xc000667400) Stream removed, broadcasting: 1\nI0105 12:52:38.093371    3107 log.go:172] (0xc00013a6e0) (0xc0005b4000) Stream removed, broadcasting: 3\nI0105 12:52:38.093447    3107 log.go:172] (0xc00013a6e0) (0xc000790000) Stream removed, broadcasting: 5\nI0105 12:52:38.093477    3107 log.go:172] (0xc00013a6e0) Go away received\nI0105 12:52:38.093755    3107 log.go:172] (0xc00013a6e0) (0xc000667400) Stream removed, broadcasting: 1\nI0105 12:52:38.093893    3107 log.go:172] (0xc00013a6e0) (0xc0005b4000) Stream removed, broadcasting: 3\nI0105 12:52:38.093918    3107 log.go:172] (0xc00013a6e0) (0xc000790000) Stream removed, broadcasting: 5\n"
Jan  5 12:52:38.102: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 12:52:38.102: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 12:52:38.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 12:52:38.713: INFO: stderr: "I0105 12:52:38.293598    3130 log.go:172] (0xc000708370) (0xc00072e640) Create stream\nI0105 12:52:38.293769    3130 log.go:172] (0xc000708370) (0xc00072e640) Stream added, broadcasting: 1\nI0105 12:52:38.298112    3130 log.go:172] (0xc000708370) Reply frame received for 1\nI0105 12:52:38.298143    3130 log.go:172] (0xc000708370) (0xc000494be0) Create stream\nI0105 12:52:38.298149    3130 log.go:172] (0xc000708370) (0xc000494be0) Stream added, broadcasting: 3\nI0105 12:52:38.299009    3130 log.go:172] (0xc000708370) Reply frame received for 3\nI0105 12:52:38.299037    3130 log.go:172] (0xc000708370) (0xc00023a000) Create stream\nI0105 12:52:38.299047    3130 log.go:172] (0xc000708370) (0xc00023a000) Stream added, broadcasting: 5\nI0105 12:52:38.301233    3130 log.go:172] (0xc000708370) Reply frame received for 5\nI0105 12:52:38.528225    3130 log.go:172] (0xc000708370) Data frame received for 3\nI0105 12:52:38.528353    3130 log.go:172] (0xc000494be0) (3) Data frame handling\nI0105 12:52:38.528374    3130 log.go:172] (0xc000494be0) (3) Data frame sent\nI0105 12:52:38.706484    3130 log.go:172] (0xc000708370) Data frame received for 1\nI0105 12:52:38.706536    3130 log.go:172] (0xc00072e640) (1) Data frame handling\nI0105 12:52:38.706574    3130 log.go:172] (0xc00072e640) (1) Data frame sent\nI0105 12:52:38.706583    3130 log.go:172] (0xc000708370) (0xc00072e640) Stream removed, broadcasting: 1\nI0105 12:52:38.706614    3130 log.go:172] (0xc000708370) (0xc000494be0) Stream removed, broadcasting: 3\nI0105 12:52:38.706793    3130 log.go:172] (0xc000708370) (0xc00023a000) Stream removed, broadcasting: 5\nI0105 12:52:38.706901    3130 log.go:172] (0xc000708370) Go away received\nI0105 12:52:38.707594    3130 log.go:172] (0xc000708370) (0xc00072e640) Stream removed, broadcasting: 1\nI0105 12:52:38.707603    3130 log.go:172] (0xc000708370) (0xc000494be0) Stream removed, broadcasting: 3\nI0105 12:52:38.707611    3130 log.go:172] (0xc000708370) (0xc00023a000) Stream removed, broadcasting: 5\n"
Jan  5 12:52:38.713: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 12:52:38.713: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 12:52:38.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 12:52:39.497: INFO: stderr: "I0105 12:52:38.887227    3151 log.go:172] (0xc0007f8a50) (0xc0003df540) Create stream\nI0105 12:52:38.887347    3151 log.go:172] (0xc0007f8a50) (0xc0003df540) Stream added, broadcasting: 1\nI0105 12:52:38.934929    3151 log.go:172] (0xc0007f8a50) Reply frame received for 1\nI0105 12:52:38.935295    3151 log.go:172] (0xc0007f8a50) (0xc000820140) Create stream\nI0105 12:52:38.935376    3151 log.go:172] (0xc0007f8a50) (0xc000820140) Stream added, broadcasting: 3\nI0105 12:52:38.937137    3151 log.go:172] (0xc0007f8a50) Reply frame received for 3\nI0105 12:52:38.937169    3151 log.go:172] (0xc0007f8a50) (0xc0003de820) Create stream\nI0105 12:52:38.937186    3151 log.go:172] (0xc0007f8a50) (0xc0003de820) Stream added, broadcasting: 5\nI0105 12:52:38.939100    3151 log.go:172] (0xc0007f8a50) Reply frame received for 5\nI0105 12:52:39.201078    3151 log.go:172] (0xc0007f8a50) Data frame received for 3\nI0105 12:52:39.201136    3151 log.go:172] (0xc000820140) (3) Data frame handling\nI0105 12:52:39.201167    3151 log.go:172] (0xc000820140) (3) Data frame sent\nI0105 12:52:39.489319    3151 log.go:172] (0xc0007f8a50) (0xc000820140) Stream removed, broadcasting: 3\nI0105 12:52:39.489701    3151 log.go:172] (0xc0007f8a50) Data frame received for 1\nI0105 12:52:39.489710    3151 log.go:172] (0xc0003df540) (1) Data frame handling\nI0105 12:52:39.489723    3151 log.go:172] (0xc0003df540) (1) Data frame sent\nI0105 12:52:39.489729    3151 log.go:172] (0xc0007f8a50) (0xc0003df540) Stream removed, broadcasting: 1\nI0105 12:52:39.490018    3151 log.go:172] (0xc0007f8a50) (0xc0003de820) Stream removed, broadcasting: 5\nI0105 12:52:39.490096    3151 log.go:172] (0xc0007f8a50) (0xc0003df540) Stream removed, broadcasting: 1\nI0105 12:52:39.490129    3151 log.go:172] (0xc0007f8a50) (0xc000820140) Stream removed, broadcasting: 3\nI0105 12:52:39.490136    3151 log.go:172] (0xc0007f8a50) (0xc0003de820) Stream removed, broadcasting: 5\nI0105 12:52:39.490510    3151 log.go:172] (0xc0007f8a50) Go away received\n"
Jan  5 12:52:39.498: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 12:52:39.498: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 12:52:39.498: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 12:52:39.508: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan  5 12:52:49.531: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 12:52:49.531: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 12:52:49.531: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 12:52:49.584: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  5 12:52:49.585: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  }]
Jan  5 12:52:49.585: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:49.585: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:49.585: INFO: 
Jan  5 12:52:49.585: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  5 12:52:50.607: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  5 12:52:50.607: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  }]
Jan  5 12:52:50.607: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:50.607: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:50.607: INFO: 
Jan  5 12:52:50.607: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  5 12:52:52.031: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  5 12:52:52.031: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  }]
Jan  5 12:52:52.031: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:52.031: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:52.031: INFO: 
Jan  5 12:52:52.031: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  5 12:52:53.051: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  5 12:52:53.052: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  }]
Jan  5 12:52:53.052: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:53.052: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:53.052: INFO: 
Jan  5 12:52:53.052: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  5 12:52:54.648: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  5 12:52:54.648: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  }]
Jan  5 12:52:54.648: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:54.648: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:54.648: INFO: 
Jan  5 12:52:54.648: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  5 12:52:55.664: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  5 12:52:55.664: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  }]
Jan  5 12:52:55.664: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:55.664: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:55.664: INFO: 
Jan  5 12:52:55.664: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  5 12:52:57.256: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  5 12:52:57.256: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  }]
Jan  5 12:52:57.256: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:57.256: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:57.256: INFO: 
Jan  5 12:52:57.256: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  5 12:52:58.276: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  5 12:52:58.276: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  }]
Jan  5 12:52:58.276: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:58.276: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:58.276: INFO: 
Jan  5 12:52:58.276: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  5 12:52:59.292: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan  5 12:52:59.292: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:04 +0000 UTC  }]
Jan  5 12:52:59.292: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 12:52:25 +0000 UTC  }]
Jan  5 12:52:59.293: INFO: 
Jan  5 12:52:59.293: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-gfqnt
Jan  5 12:53:00.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:53:00.622: INFO: rc: 1
Jan  5 12:53:00.623: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00209d230 exit status 1   true [0xc000353180 0xc000353198 0xc0003531b0] [0xc000353180 0xc000353198 0xc0003531b0] [0xc000353190 0xc0003531a8] [0x935700 0x935700] 0xc001425740 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan  5 12:53:10.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:53:10.766: INFO: rc: 1
Jan  5 12:53:10.766: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0014c2ff0 exit status 1   true [0xc0026f6430 0xc0026f6448 0xc0026f6460] [0xc0026f6430 0xc0026f6448 0xc0026f6460] [0xc0026f6440 0xc0026f6458] [0x935700 0x935700] 0xc001989ec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:53:20.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:53:20.908: INFO: rc: 1
Jan  5 12:53:20.908: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c18ba0 exit status 1   true [0xc0019b8eb0 0xc0019b8f00 0xc0019b8f70] [0xc0019b8eb0 0xc0019b8f00 0xc0019b8f70] [0xc0019b8ee0 0xc0019b8f48] [0x935700 0x935700] 0xc0017eb200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:53:30.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:53:31.030: INFO: rc: 1
Jan  5 12:53:31.030: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c18cf0 exit status 1   true [0xc0019b8f90 0xc0019b8fa8 0xc0019b8ff8] [0xc0019b8f90 0xc0019b8fa8 0xc0019b8ff8] [0xc0019b8fa0 0xc0019b8ff0] [0x935700 0x935700] 0xc0017eb500 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:53:41.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:53:41.205: INFO: rc: 1
Jan  5 12:53:41.206: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0014c3110 exit status 1   true [0xc0026f6468 0xc0026f6480 0xc0026f6498] [0xc0026f6468 0xc0026f6480 0xc0026f6498] [0xc0026f6478 0xc0026f6490] [0x935700 0x935700] 0xc00017e060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:53:51.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:53:51.346: INFO: rc: 1
Jan  5 12:53:51.347: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00209d3b0 exit status 1   true [0xc0003531b8 0xc0003531d0 0xc0003531e8] [0xc0003531b8 0xc0003531d0 0xc0003531e8] [0xc0003531c8 0xc0003531e0] [0x935700 0x935700] 0xc001486ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:54:01.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:54:01.457: INFO: rc: 1
Jan  5 12:54:01.458: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d88120 exit status 1   true [0xc00000ebe8 0xc00000ec50 0xc00000eca0] [0xc00000ebe8 0xc00000ec50 0xc00000eca0] [0xc00000ec48 0xc00000ec80] [0x935700 0x935700] 0xc0000b6000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:54:11.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:54:11.600: INFO: rc: 1
Jan  5 12:54:11.601: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d88480 exit status 1   true [0xc00000ecc0 0xc00000ed30 0xc00000ed88] [0xc00000ecc0 0xc00000ed30 0xc00000ed88] [0xc00000ece8 0xc00000ed68] [0x935700 0x935700] 0xc0013516e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:54:21.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:54:21.687: INFO: rc: 1
Jan  5 12:54:21.688: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000f40120 exit status 1   true [0xc000352140 0xc000352198 0xc000352240] [0xc000352140 0xc000352198 0xc000352240] [0xc000352170 0xc000352230] [0x935700 0x935700] 0xc001425740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:54:31.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:54:31.829: INFO: rc: 1
Jan  5 12:54:31.829: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d88720 exit status 1   true [0xc00000eda0 0xc00000ee88 0xc00000ef80] [0xc00000eda0 0xc00000ee88 0xc00000ef80] [0xc00000ee30 0xc00000eea0] [0x935700 0x935700] 0xc001804ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:54:41.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:54:41.975: INFO: rc: 1
Jan  5 12:54:41.975: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d88ab0 exit status 1   true [0xc00000ef98 0xc00000f040 0xc00000f0a8] [0xc00000ef98 0xc00000f040 0xc00000f0a8] [0xc00000f028 0xc00000f088] [0x935700 0x935700] 0xc001988900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:54:51.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:54:52.080: INFO: rc: 1
Jan  5 12:54:52.081: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d88e40 exit status 1   true [0xc00000f100 0xc00000f170 0xc00000f218] [0xc00000f100 0xc00000f170 0xc00000f218] [0xc00000f158 0xc00000f1f8] [0x935700 0x935700] 0xc001989a40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:55:02.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:55:02.210: INFO: rc: 1
Jan  5 12:55:02.211: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000f40270 exit status 1   true [0xc000352248 0xc000352278 0xc000352300] [0xc000352248 0xc000352278 0xc000352300] [0xc000352260 0xc0003522d0] [0x935700 0x935700] 0xc0019b2720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:55:12.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:55:12.337: INFO: rc: 1
Jan  5 12:55:12.338: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000bf41b0 exit status 1   true [0xc000e88000 0xc000e88030 0xc000e88048] [0xc000e88000 0xc000e88030 0xc000e88048] [0xc000e88028 0xc000e88040] [0x935700 0x935700] 0xc001ba6780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:55:22.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:55:22.468: INFO: rc: 1
Jan  5 12:55:22.469: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001750d80 exit status 1   true [0xc001d40000 0xc001d40048 0xc001d400b0] [0xc001d40000 0xc001d40048 0xc001d400b0] [0xc001d40030 0xc001d40068] [0x935700 0x935700] 0xc000cb44e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:55:32.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:55:32.632: INFO: rc: 1
Jan  5 12:55:32.633: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d89050 exit status 1   true [0xc00000f248 0xc00000f2c0 0xc00000f3c0] [0xc00000f248 0xc00000f2c0 0xc00000f3c0] [0xc00000f2b0 0xc00000f3a8] [0x935700 0x935700] 0xc001bfef00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:55:42.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:55:42.744: INFO: rc: 1
Jan  5 12:55:42.744: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000f40390 exit status 1   true [0xc000352328 0xc000352390 0xc0003523a8] [0xc000352328 0xc000352390 0xc0003523a8] [0xc000352378 0xc0003523a0] [0x935700 0x935700] 0xc0019b34a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:55:52.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:55:52.836: INFO: rc: 1
Jan  5 12:55:52.837: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001750ea0 exit status 1   true [0xc001d400c0 0xc001d400f8 0xc001d40138] [0xc001d400c0 0xc001d400f8 0xc001d40138] [0xc001d400e0 0xc001d40118] [0x935700 0x935700] 0xc000cb4a80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:56:02.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:56:02.989: INFO: rc: 1
Jan  5 12:56:02.989: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d882a0 exit status 1   true [0xc00000ebe8 0xc00000ec50 0xc00000eca0] [0xc00000ebe8 0xc00000ec50 0xc00000eca0] [0xc00000ec48 0xc00000ec80] [0x935700 0x935700] 0xc001988d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:56:12.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:56:13.111: INFO: rc: 1
Jan  5 12:56:13.111: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001750db0 exit status 1   true [0xc000352140 0xc000352198 0xc000352240] [0xc000352140 0xc000352198 0xc000352240] [0xc000352170 0xc000352230] [0x935700 0x935700] 0xc0017127e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:56:23.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:56:23.272: INFO: rc: 1
Jan  5 12:56:23.272: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000bf4120 exit status 1   true [0xc000e88000 0xc000e88030 0xc000e88048] [0xc000e88000 0xc000e88030 0xc000e88048] [0xc000e88028 0xc000e88040] [0x935700 0x935700] 0xc001425740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:56:33.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:56:33.562: INFO: rc: 1
Jan  5 12:56:33.563: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001750f00 exit status 1   true [0xc000352248 0xc000352278 0xc000352300] [0xc000352248 0xc000352278 0xc000352300] [0xc000352260 0xc0003522d0] [0x935700 0x935700] 0xc0013516e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:56:43.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:56:44.261: INFO: rc: 1
Jan  5 12:56:44.261: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000f401b0 exit status 1   true [0xc001d40000 0xc001d40048 0xc001d400b0] [0xc001d40000 0xc001d40048 0xc001d400b0] [0xc001d40030 0xc001d40068] [0x935700 0x935700] 0xc00129d020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:56:54.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:56:54.378: INFO: rc: 1
Jan  5 12:56:54.378: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001751050 exit status 1   true [0xc000352328 0xc000352390 0xc0003523a8] [0xc000352328 0xc000352390 0xc0003523a8] [0xc000352378 0xc0003523a0] [0x935700 0x935700] 0xc001bfefc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:57:04.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:57:04.515: INFO: rc: 1
Jan  5 12:57:04.515: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001751170 exit status 1   true [0xc0003523b0 0xc0003523d0 0xc000352448] [0xc0003523b0 0xc0003523d0 0xc000352448] [0xc0003523c0 0xc000352428] [0x935700 0x935700] 0xc001bff3e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:57:14.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:57:14.619: INFO: rc: 1
Jan  5 12:57:14.619: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000bf42d0 exit status 1   true [0xc000e88050 0xc000e88090 0xc000e880b8] [0xc000e88050 0xc000e88090 0xc000e880b8] [0xc000e88088 0xc000e880b0] [0x935700 0x935700] 0xc0019b2720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:57:24.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:57:24.752: INFO: rc: 1
Jan  5 12:57:24.753: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000bf43f0 exit status 1   true [0xc000e880c0 0xc000e880d8 0xc000e880f0] [0xc000e880c0 0xc000e880d8 0xc000e880f0] [0xc000e880d0 0xc000e880e8] [0x935700 0x935700] 0xc0019b34a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:57:34.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:57:34.928: INFO: rc: 1
Jan  5 12:57:34.929: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0017512f0 exit status 1   true [0xc000352498 0xc000352550 0xc000352578] [0xc000352498 0xc000352550 0xc000352578] [0xc000352548 0xc000352568] [0x935700 0x935700] 0xc001bff980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:57:44.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:57:45.637: INFO: rc: 1
Jan  5 12:57:45.637: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001751530 exit status 1   true [0xc000352580 0xc0003525a0 0xc0003525e0] [0xc000352580 0xc0003525a0 0xc0003525e0] [0xc000352590 0xc0003525c8] [0x935700 0x935700] 0xc000cb4480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:57:55.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:57:55.822: INFO: rc: 1
Jan  5 12:57:55.823: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000bf4510 exit status 1   true [0xc000e880f8 0xc000e88110 0xc000e88128] [0xc000e880f8 0xc000e88110 0xc000e88128] [0xc000e88108 0xc000e88120] [0x935700 0x935700] 0xc001ba61e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan  5 12:58:05.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 12:58:06.016: INFO: rc: 1
Jan  5 12:58:06.016: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Jan  5 12:58:06.016: INFO: Scaling statefulset ss to 0
Jan  5 12:58:06.034: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan  5 12:58:06.037: INFO: Deleting all statefulset in ns e2e-tests-statefulset-gfqnt
Jan  5 12:58:06.040: INFO: Scaling statefulset ss to 0
Jan  5 12:58:06.049: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 12:58:06.051: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:58:06.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-gfqnt" for this suite.
Jan  5 12:58:14.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:58:14.336: INFO: namespace: e2e-tests-statefulset-gfqnt, resource: bindings, ignored listing per whitelist
Jan  5 12:58:14.412: INFO: namespace e2e-tests-statefulset-gfqnt deletion completed in 8.215901081s

• [SLOW TEST:370.082 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
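
The long run of "Waiting 10s to retry failed RunHostCmd" entries above is the suite re-issuing the same kubectl exec against pod ss-0 every 10 seconds after the pod has already been deleted, which is why every attempt ends with "Error from server (NotFound)" and rc: 1. A minimal manual sketch of that probe and of the scale-down the AfterEach block then performs, reusing the StatefulSet name "ss" and the namespace from this run:

# The exact command the suite keeps retrying; once ss-0 is gone it fails with NotFound.
kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gfqnt ss-0 -- \
  /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'

# The teardown then scales the StatefulSet to zero, waits for status.replicas to reach 0,
# and deletes it, matching the "Scaling statefulset ss to 0" lines above.
kubectl -n e2e-tests-statefulset-gfqnt scale statefulset ss --replicas=0
kubectl -n e2e-tests-statefulset-gfqnt get statefulset ss -o jsonpath='{.status.replicas}'
kubectl -n e2e-tests-statefulset-gfqnt delete statefulset ss
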
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:58:14.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-096cf9b7-2fbb-11ea-910c-0242ac110004
STEP: Creating a pod to test consume configMaps
Jan  5 12:58:15.209: INFO: Waiting up to 5m0s for pod "pod-configmaps-096ebff3-2fbb-11ea-910c-0242ac110004" in namespace "e2e-tests-configmap-88fj9" to be "success or failure"
Jan  5 12:58:15.222: INFO: Pod "pod-configmaps-096ebff3-2fbb-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.934526ms
Jan  5 12:58:17.257: INFO: Pod "pod-configmaps-096ebff3-2fbb-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047786325s
Jan  5 12:58:19.273: INFO: Pod "pod-configmaps-096ebff3-2fbb-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06387687s
Jan  5 12:58:21.776: INFO: Pod "pod-configmaps-096ebff3-2fbb-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.566474158s
Jan  5 12:58:23.816: INFO: Pod "pod-configmaps-096ebff3-2fbb-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.606356333s
Jan  5 12:58:25.836: INFO: Pod "pod-configmaps-096ebff3-2fbb-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.626750134s
Jan  5 12:58:27.948: INFO: Pod "pod-configmaps-096ebff3-2fbb-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.738653395s
STEP: Saw pod success
Jan  5 12:58:27.948: INFO: Pod "pod-configmaps-096ebff3-2fbb-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 12:58:27.983: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-096ebff3-2fbb-11ea-910c-0242ac110004 container configmap-volume-test: 
STEP: delete the pod
Jan  5 12:58:28.144: INFO: Waiting for pod pod-configmaps-096ebff3-2fbb-11ea-910c-0242ac110004 to disappear
Jan  5 12:58:28.154: INFO: Pod pod-configmaps-096ebff3-2fbb-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:58:28.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-88fj9" for this suite.
Jan  5 12:58:36.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:58:36.273: INFO: namespace: e2e-tests-configmap-88fj9, resource: bindings, ignored listing per whitelist
Jan  5 12:58:36.446: INFO: namespace e2e-tests-configmap-88fj9 deletion completed in 8.282153901s

• [SLOW TEST:22.033 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
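
The ConfigMap test above mounts a ConfigMap as a volume with an explicit items mapping and a per-item file mode, runs a short-lived pod, and verifies the mapped file from its logs. A rough stand-alone reproduction, with illustrative object names rather than the generated ones from this run:

kubectl create configmap configmap-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # Print the mapped file's permissions and contents, then exit so the pod reaches Succeeded.
    command: ["/bin/sh", "-c", "ls -l /etc/configmap-volume/path/to/data-1 && cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-demo
      items:
      - key: data-1
        path: path/to/data-1
        mode: 0400
EOF

kubectl logs pod-configmap-demo   # expect -r-------- permissions and the value "value-1"
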
SSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:58:36.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  5 12:58:36.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:58:46.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dxvdz" for this suite.
Jan  5 12:59:32.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:59:33.134: INFO: namespace: e2e-tests-pods-dxvdz, resource: bindings, ignored listing per whitelist
Jan  5 12:59:33.206: INFO: namespace e2e-tests-pods-dxvdz deletion completed in 46.4106341s

• [SLOW TEST:56.760 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
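
The pods test above retrieves the container's logs over a websocket connection to the API server rather than through "kubectl logs". A loose manual approximation of the same request (the pod name below is illustrative; kubectl proxy is used here instead of a websocket client, but it hits the same "log" subresource the test exercises):

kubectl proxy --port=8001 &
curl "http://127.0.0.1:8001/api/v1/namespaces/e2e-tests-pods-dxvdz/pods/pod-logs-websocket/log?follow=false"
# kubectl logs reads the same subresource:
kubectl -n e2e-tests-pods-dxvdz logs pod-logs-websocket
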
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:59:33.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan  5 12:59:33.628: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-r4hdq,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4hdq/configmaps/e2e-watch-test-label-changed,UID:381b925d-2fbb-11ea-a994-fa163e34d433,ResourceVersion:17258240,Generation:0,CreationTimestamp:2020-01-05 12:59:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  5 12:59:33.628: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-r4hdq,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4hdq/configmaps/e2e-watch-test-label-changed,UID:381b925d-2fbb-11ea-a994-fa163e34d433,ResourceVersion:17258241,Generation:0,CreationTimestamp:2020-01-05 12:59:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  5 12:59:33.628: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-r4hdq,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4hdq/configmaps/e2e-watch-test-label-changed,UID:381b925d-2fbb-11ea-a994-fa163e34d433,ResourceVersion:17258242,Generation:0,CreationTimestamp:2020-01-05 12:59:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan  5 12:59:43.825: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-r4hdq,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4hdq/configmaps/e2e-watch-test-label-changed,UID:381b925d-2fbb-11ea-a994-fa163e34d433,ResourceVersion:17258256,Generation:0,CreationTimestamp:2020-01-05 12:59:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  5 12:59:43.826: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-r4hdq,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4hdq/configmaps/e2e-watch-test-label-changed,UID:381b925d-2fbb-11ea-a994-fa163e34d433,ResourceVersion:17258257,Generation:0,CreationTimestamp:2020-01-05 12:59:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan  5 12:59:43.826: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-r4hdq,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4hdq/configmaps/e2e-watch-test-label-changed,UID:381b925d-2fbb-11ea-a994-fa163e34d433,ResourceVersion:17258258,Generation:0,CreationTimestamp:2020-01-05 12:59:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 12:59:43.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-r4hdq" for this suite.
Jan  5 12:59:50.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 12:59:50.679: INFO: namespace: e2e-tests-watch-r4hdq, resource: bindings, ignored listing per whitelist
Jan  5 12:59:50.719: INFO: namespace e2e-tests-watch-r4hdq deletion completed in 6.882454372s

• [SLOW TEST:17.513 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
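
The watch test above registers a label-selector watch on ConfigMaps, flips the watched label off and back on, and expects DELETED and ADDED notifications at the corresponding moments. The same behaviour can be observed by hand with a newer kubectl that supports --output-watch-events (object names reuse the ones from this run):

# In one shell: watch only ConfigMaps carrying the label the test uses.
kubectl -n e2e-tests-watch-r4hdq get configmaps -l watch-this-configmap=label-changed-and-restored \
  --watch --output-watch-events

# In another shell: create, label, un-label, re-label, then delete.
kubectl -n e2e-tests-watch-r4hdq create configmap e2e-watch-test-label-changed
kubectl -n e2e-tests-watch-r4hdq label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored              # watch prints ADDED
kubectl -n e2e-tests-watch-r4hdq label configmap e2e-watch-test-label-changed watch-this-configmap=no-longer-watched --overwrite           # watch prints DELETED
kubectl -n e2e-tests-watch-r4hdq label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored --overwrite  # watch prints ADDED again
kubectl -n e2e-tests-watch-r4hdq delete configmap e2e-watch-test-label-changed                                                             # watch prints DELETED
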
SSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 12:59:50.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan  5 12:59:51.674: INFO: Waiting up to 5m0s for pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-44klz" in namespace "e2e-tests-svcaccounts-vh6b7" to be "success or failure"
Jan  5 12:59:51.688: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-44klz": Phase="Pending", Reason="", readiness=false. Elapsed: 13.90119ms
Jan  5 12:59:54.603: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-44klz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.92892184s
Jan  5 12:59:56.625: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-44klz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.951237026s
Jan  5 12:59:58.649: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-44klz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.974370014s
Jan  5 13:00:02.045: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-44klz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.370381641s
Jan  5 13:00:04.061: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-44klz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.386687315s
Jan  5 13:00:06.135: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-44klz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.460544483s
Jan  5 13:00:08.156: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-44klz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.481941381s
Jan  5 13:00:10.194: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-44klz": Phase="Pending", Reason="", readiness=false. Elapsed: 18.519712822s
Jan  5 13:00:12.215: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-44klz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.540434448s
STEP: Saw pod success
Jan  5 13:00:12.215: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-44klz" satisfied condition "success or failure"
Jan  5 13:00:12.224: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-44klz container token-test: 
STEP: delete the pod
Jan  5 13:00:12.470: INFO: Waiting for pod pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-44klz to disappear
Jan  5 13:00:12.498: INFO: Pod pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-44klz no longer exists
STEP: Creating a pod to test consume service account root CA
Jan  5 13:00:12.634: INFO: Waiting up to 5m0s for pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf" in namespace "e2e-tests-svcaccounts-vh6b7" to be "success or failure"
Jan  5 13:00:12.648: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.044019ms
Jan  5 13:00:14.799: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165115228s
Jan  5 13:00:16.830: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195294226s
Jan  5 13:00:19.161: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.526931113s
Jan  5 13:00:21.355: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.720657355s
Jan  5 13:00:24.126: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.491825736s
Jan  5 13:00:26.298: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.663755941s
Jan  5 13:00:28.310: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf": Phase="Pending", Reason="", readiness=false. Elapsed: 15.675539143s
Jan  5 13:00:30.329: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf": Phase="Pending", Reason="", readiness=false. Elapsed: 17.694968089s
Jan  5 13:00:32.352: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf": Phase="Pending", Reason="", readiness=false. Elapsed: 19.717596936s
Jan  5 13:00:34.370: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.73585766s
STEP: Saw pod success
Jan  5 13:00:34.370: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf" satisfied condition "success or failure"
Jan  5 13:00:34.374: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf container root-ca-test: 
STEP: delete the pod
Jan  5 13:00:34.580: INFO: Waiting for pod pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf to disappear
Jan  5 13:00:34.591: INFO: Pod pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-x58qf no longer exists
STEP: Creating a pod to test consume service account namespace
Jan  5 13:00:34.684: INFO: Waiting up to 5m0s for pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-gqpg9" in namespace "e2e-tests-svcaccounts-vh6b7" to be "success or failure"
Jan  5 13:00:34.696: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-gqpg9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.180262ms
Jan  5 13:00:36.894: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-gqpg9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209729866s
Jan  5 13:00:38.925: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-gqpg9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.24053507s
Jan  5 13:00:42.715: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-gqpg9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03047181s
Jan  5 13:00:44.736: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-gqpg9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.052252966s
Jan  5 13:00:47.079: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-gqpg9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.394472166s
Jan  5 13:00:49.094: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-gqpg9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.409625556s
Jan  5 13:00:51.109: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-gqpg9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.424416508s
Jan  5 13:00:53.125: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-gqpg9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.440619172s
Jan  5 13:00:55.138: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-gqpg9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.453544919s
STEP: Saw pod success
Jan  5 13:00:55.138: INFO: Pod "pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-gqpg9" satisfied condition "success or failure"
Jan  5 13:00:55.141: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-gqpg9 container namespace-test: 
STEP: delete the pod
Jan  5 13:00:55.686: INFO: Waiting for pod pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-gqpg9 to disappear
Jan  5 13:00:55.979: INFO: Pod pod-service-account-42eb3911-2fbb-11ea-910c-0242ac110004-gqpg9 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:00:55.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-vh6b7" for this suite.
Jan  5 13:01:04.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:01:04.463: INFO: namespace: e2e-tests-svcaccounts-vh6b7, resource: bindings, ignored listing per whitelist
Jan  5 13:01:04.621: INFO: namespace e2e-tests-svcaccounts-vh6b7 deletion completed in 8.587712808s

• [SLOW TEST:73.901 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
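
The three pods in the ServiceAccounts test read the token, the root CA bundle, and the namespace file that are automounted from the default service account. A compact version of the same check (pod name is illustrative), relying only on the standard automount path that also appears in the pod spec dumps elsewhere in this log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: svcaccount-mount-check
spec:
  restartPolicy: Never
  containers:
  - name: token-test
    image: busybox
    # List the automounted files and print the namespace; token and ca.crt live alongside it.
    command: ["/bin/sh", "-c", "ls /var/run/secrets/kubernetes.io/serviceaccount && cat /var/run/secrets/kubernetes.io/serviceaccount/namespace"]
EOF

kubectl logs svcaccount-mount-check   # expect ca.crt, namespace, token, then the namespace name
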
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 13:01:04.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  5 13:01:14.987: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-6e97f5a4-2fbb-11ea-910c-0242ac110004,GenerateName:,Namespace:e2e-tests-events-t7pbb,SelfLink:/api/v1/namespaces/e2e-tests-events-t7pbb/pods/send-events-6e97f5a4-2fbb-11ea-910c-0242ac110004,UID:6e99d55a-2fbb-11ea-a994-fa163e34d433,ResourceVersion:17258463,Generation:0,CreationTimestamp:2020-01-05 13:01:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 906393398,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-26d96 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-26d96,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-26d96 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e328a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e328c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:01:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:01:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:01:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:01:04 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-05 13:01:05 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-05 13:01:12 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://eabb182e6458f9a7e21126457297e70d56ec7ca0d95401bf4d71c32fb4abd889}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan  5 13:01:17.010: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  5 13:01:19.026: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:01:19.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-t7pbb" for this suite.
Jan  5 13:02:05.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:02:05.279: INFO: namespace: e2e-tests-events-t7pbb, resource: bindings, ignored listing per whitelist
Jan  5 13:02:05.416: INFO: namespace e2e-tests-events-t7pbb deletion completed in 46.364832071s

• [SLOW TEST:60.794 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
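
The Events test above only confirms that scheduling produces a scheduler event and that container start-up produces kubelet events for the same pod. An equivalent manual check against the pod from this run:

kubectl -n e2e-tests-events-t7pbb get events \
  --field-selector involvedObject.name=send-events-6e97f5a4-2fbb-11ea-910c-0242ac110004
# Expect a "Scheduled" event from default-scheduler plus Pulled/Created/Started events from the kubelet.
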
SS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 13:02:05.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  5 13:02:05.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:02:18.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-pv5db" for this suite.
Jan  5 13:03:14.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:03:14.494: INFO: namespace: e2e-tests-pods-pv5db, resource: bindings, ignored listing per whitelist
Jan  5 13:03:14.542: INFO: namespace e2e-tests-pods-pv5db deletion completed in 56.349688143s

• [SLOW TEST:69.126 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
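
The remote-command test drives the pod's "exec" subresource over a websocket, which is the same API surface kubectl exec upgrades to. A plain-kubectl stand-in (pod name is illustrative):

kubectl -n e2e-tests-pods-pv5db exec pod-exec-websocket -- /bin/sh -c 'echo remote exec reached the container'
# Behind the scenes this is an upgraded connection to roughly
#   /api/v1/namespaces/e2e-tests-pods-pv5db/pods/pod-exec-websocket/exec?command=/bin/sh&command=-c&command=...&stdout=true
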
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 13:03:14.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:03:28.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-nfl8g" for this suite.
Jan  5 13:03:34.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:03:34.659: INFO: namespace: e2e-tests-kubelet-test-nfl8g, resource: bindings, ignored listing per whitelist
Jan  5 13:03:34.794: INFO: namespace e2e-tests-kubelet-test-nfl8g deletion completed in 6.679716953s

• [SLOW TEST:20.251 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
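
The Kubelet test above schedules a container whose command always fails and asserts that its status carries a terminated state with a reason. A rough stand-alone equivalent (restartPolicy Never is used here so the container stays terminated instead of restarting; names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-pod
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]
EOF

# Once the container has exited, the terminated reason is reported in the pod status:
kubectl get pod bin-false-pod -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'   # prints Error
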
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 13:03:34.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  5 13:03:35.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-2x98v'
Jan  5 13:03:37.149: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  5 13:03:37.149: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan  5 13:03:37.254: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-pvb5p]
Jan  5 13:03:37.254: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-pvb5p" in namespace "e2e-tests-kubectl-2x98v" to be "running and ready"
Jan  5 13:03:37.312: INFO: Pod "e2e-test-nginx-rc-pvb5p": Phase="Pending", Reason="", readiness=false. Elapsed: 57.650556ms
Jan  5 13:03:39.523: INFO: Pod "e2e-test-nginx-rc-pvb5p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.269611106s
Jan  5 13:03:41.538: INFO: Pod "e2e-test-nginx-rc-pvb5p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.283873234s
Jan  5 13:03:43.557: INFO: Pod "e2e-test-nginx-rc-pvb5p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.302679749s
Jan  5 13:03:45.569: INFO: Pod "e2e-test-nginx-rc-pvb5p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.315308213s
Jan  5 13:03:47.582: INFO: Pod "e2e-test-nginx-rc-pvb5p": Phase="Running", Reason="", readiness=true. Elapsed: 10.32785335s
Jan  5 13:03:47.582: INFO: Pod "e2e-test-nginx-rc-pvb5p" satisfied condition "running and ready"
Jan  5 13:03:47.582: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-pvb5p]
Jan  5 13:03:47.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-2x98v'
Jan  5 13:03:47.854: INFO: stderr: ""
Jan  5 13:03:47.854: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan  5 13:03:47.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-2x98v'
Jan  5 13:03:48.151: INFO: stderr: ""
Jan  5 13:03:48.151: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:03:48.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2x98v" for this suite.
Jan  5 13:04:14.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:04:14.919: INFO: namespace: e2e-tests-kubectl-2x98v, resource: bindings, ignored listing per whitelist
Jan  5 13:04:15.004: INFO: namespace e2e-tests-kubectl-2x98v deletion completed in 26.796380479s

• [SLOW TEST:40.209 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
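For reference, the sequence this test drives can be replayed by hand with ordinary kubectl; a minimal sketch assuming a reachable cluster and a kubeconfig at /root/.kube/config (the namespace is the auto-generated one from this run and will differ on any other run):

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine \
    --generator=run/v1 --namespace=e2e-tests-kubectl-2x98v               # deprecated generator, as the warning above notes
kubectl get rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-2x98v             # the rc was created
kubectl get pods -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-2x98v    # the pod it controls
kubectl logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-2x98v            # empty here, since nginx had logged nothing yet
kubectl delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-2x98v          # cleanup, as in AfterEach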
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 13:04:15.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-7d6pw
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-7d6pw to expose endpoints map[]
Jan  5 13:04:15.373: INFO: Get endpoints failed (14.52219ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan  5 13:04:16.391: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7d6pw exposes endpoints map[] (1.032938851s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-7d6pw
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-7d6pw to expose endpoints map[pod1:[100]]
Jan  5 13:04:22.921: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (6.505689764s elapsed, will retry)
Jan  5 13:04:30.910: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (14.495193075s elapsed, will retry)
Jan  5 13:04:35.259: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7d6pw exposes endpoints map[pod1:[100]] (18.843919409s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-7d6pw
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-7d6pw to expose endpoints map[pod1:[100] pod2:[101]]
Jan  5 13:04:42.258: INFO: Unexpected endpoints: found map[e0bc0336-2fbb-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (6.973740471s elapsed, will retry)
Jan  5 13:04:46.509: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7d6pw exposes endpoints map[pod1:[100] pod2:[101]] (11.22429349s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-7d6pw
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-7d6pw to expose endpoints map[pod2:[101]]
Jan  5 13:04:47.567: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7d6pw exposes endpoints map[pod2:[101]] (1.045396572s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-7d6pw
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-7d6pw to expose endpoints map[]
Jan  5 13:04:49.318: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7d6pw exposes endpoints map[] (1.740300758s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:04:49.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-7d6pw" for this suite.
Jan  5 13:05:13.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:05:13.731: INFO: namespace: e2e-tests-services-7d6pw, resource: bindings, ignored listing per whitelist
Jan  5 13:05:14.253: INFO: namespace e2e-tests-services-7d6pw deletion completed in 24.61412103s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:59.248 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
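The endpoint churn logged above can be observed directly while the pods come and go; a minimal sketch using the service name and namespace from this run (100 and 101 are the container ports pod1 and pod2 expose):

kubectl get endpoints multi-endpoint-test -n e2e-tests-services-7d6pw            # subsets appear and disappear as pod1 and pod2 are created and deleted
kubectl get endpoints multi-endpoint-test -n e2e-tests-services-7d6pw -o yaml    # full addresses plus the named ports mapping to 100 and 101
kubectl get pods -n e2e-tests-services-7d6pw -o wide                             # the backing pods themselves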
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 13:05:14.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-03999760-2fbc-11ea-910c-0242ac110004
STEP: Creating a pod to test consume secrets
Jan  5 13:05:15.409: INFO: Waiting up to 5m0s for pod "pod-secrets-03d5f4f2-2fbc-11ea-910c-0242ac110004" in namespace "e2e-tests-secrets-hxgdw" to be "success or failure"
Jan  5 13:05:15.416: INFO: Pod "pod-secrets-03d5f4f2-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.746387ms
Jan  5 13:05:17.842: INFO: Pod "pod-secrets-03d5f4f2-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.433635912s
Jan  5 13:05:19.873: INFO: Pod "pod-secrets-03d5f4f2-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.464564364s
Jan  5 13:05:21.916: INFO: Pod "pod-secrets-03d5f4f2-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.507536351s
Jan  5 13:05:23.941: INFO: Pod "pod-secrets-03d5f4f2-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.532373517s
Jan  5 13:05:26.049: INFO: Pod "pod-secrets-03d5f4f2-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.640354238s
Jan  5 13:05:28.108: INFO: Pod "pod-secrets-03d5f4f2-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.698779347s
Jan  5 13:05:30.139: INFO: Pod "pod-secrets-03d5f4f2-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 14.730208811s
Jan  5 13:05:32.169: INFO: Pod "pod-secrets-03d5f4f2-2fbc-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.759788017s
STEP: Saw pod success
Jan  5 13:05:32.169: INFO: Pod "pod-secrets-03d5f4f2-2fbc-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 13:05:32.186: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-03d5f4f2-2fbc-11ea-910c-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Jan  5 13:05:32.381: INFO: Waiting for pod pod-secrets-03d5f4f2-2fbc-11ea-910c-0242ac110004 to disappear
Jan  5 13:05:32.406: INFO: Pod pod-secrets-03d5f4f2-2fbc-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:05:32.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hxgdw" for this suite.
Jan  5 13:05:40.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:05:40.669: INFO: namespace: e2e-tests-secrets-hxgdw, resource: bindings, ignored listing per whitelist
Jan  5 13:05:40.980: INFO: namespace e2e-tests-secrets-hxgdw deletion completed in 8.555849561s
STEP: Destroying namespace "e2e-tests-secret-namespace-bztbr" for this suite.
Jan  5 13:05:49.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:05:49.200: INFO: namespace: e2e-tests-secret-namespace-bztbr, resource: bindings, ignored listing per whitelist
Jan  5 13:05:49.468: INFO: namespace e2e-tests-secret-namespace-bztbr deletion completed in 8.487536508s

• [SLOW TEST:35.210 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
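The property being checked here, that a secret volume resolves only inside the pod's own namespace, can be reproduced with two same-named secrets; everything below (namespaces, secret name, the busybox image) is illustrative and not taken from the test:

kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl create secret generic shared-name --from-literal=data=from-a -n demo-a
kubectl create secret generic shared-name --from-literal=data=from-b -n demo-b
# The pod in demo-a sees only demo-a's copy, regardless of what demo-b holds.
kubectl apply -n demo-a -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mount-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/demo/data"]
    volumeMounts:
    - name: demo
      mountPath: /etc/demo
  volumes:
  - name: demo
    secret:
      secretName: shared-name
EOF
kubectl logs secret-mount-demo -n demo-a   # prints "from-a"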
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 13:05:49.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  5 13:05:50.042: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan  5 13:05:55.082: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  5 13:06:05.147: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan  5 13:06:07.170: INFO: Creating deployment "test-rollover-deployment"
Jan  5 13:06:07.197: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan  5 13:06:09.253: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan  5 13:06:09.272: INFO: Ensure that both replica sets have 1 created replica
Jan  5 13:06:09.281: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan  5 13:06:09.296: INFO: Updating deployment test-rollover-deployment
Jan  5 13:06:09.296: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan  5 13:06:11.890: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan  5 13:06:11.900: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan  5 13:06:11.909: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:06:11.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826370, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:06:13.970: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:06:13.971: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826370, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:06:17.153: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:06:17.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826370, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:06:18.203: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:06:18.203: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826370, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:06:19.969: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:06:19.970: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826370, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:06:21.941: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:06:21.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826380, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:06:23.979: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:06:23.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826380, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:06:26.052: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:06:26.052: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826380, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:06:27.937: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:06:27.937: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826380, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:06:29.962: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:06:29.963: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826380, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826367, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:06:32.380: INFO: 
Jan  5 13:06:32.380: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan  5 13:06:32.778: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-k4gkr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k4gkr/deployments/test-rollover-deployment,UID:22c2b128-2fbc-11ea-a994-fa163e34d433,ResourceVersion:17259085,Generation:2,CreationTimestamp:2020-01-05 13:06:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-05 13:06:07 +0000 UTC 2020-01-05 13:06:07 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-05 13:06:30 +0000 UTC 2020-01-05 13:06:07 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  5 13:06:32.815: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-k4gkr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k4gkr/replicasets/test-rollover-deployment-5b8479fdb6,UID:2407998e-2fbc-11ea-a994-fa163e34d433,ResourceVersion:17259076,Generation:2,CreationTimestamp:2020-01-05 13:06:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 22c2b128-2fbc-11ea-a994-fa163e34d433 0xc001f1e447 0xc001f1e448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  5 13:06:32.815: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan  5 13:06:32.816: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-k4gkr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k4gkr/replicasets/test-rollover-controller,UID:188481d3-2fbc-11ea-a994-fa163e34d433,ResourceVersion:17259084,Generation:2,CreationTimestamp:2020-01-05 13:05:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 22c2b128-2fbc-11ea-a994-fa163e34d433 0xc002229ff7 0xc002229ff8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  5 13:06:32.817: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-k4gkr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k4gkr/replicasets/test-rollover-deployment-58494b7559,UID:22cb083b-2fbc-11ea-a994-fa163e34d433,ResourceVersion:17259038,Generation:2,CreationTimestamp:2020-01-05 13:06:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 22c2b128-2fbc-11ea-a994-fa163e34d433 0xc001f1e0b7 0xc001f1e0b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  5 13:06:32.884: INFO: Pod "test-rollover-deployment-5b8479fdb6-8wfrv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-8wfrv,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-k4gkr,SelfLink:/api/v1/namespaces/e2e-tests-deployment-k4gkr/pods/test-rollover-deployment-5b8479fdb6-8wfrv,UID:2486ef78-2fbc-11ea-a994-fa163e34d433,ResourceVersion:17259061,Generation:0,CreationTimestamp:2020-01-05 13:06:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 2407998e-2fbc-11ea-a994-fa163e34d433 0xc001f1fbb7 0xc001f1fbb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vd8g8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vd8g8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-vd8g8 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f1fc80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f1fca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:06:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:06:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:06:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:06:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-05 13:06:10 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-05 13:06:19 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://28e16805d8457d997b8c6a139951a705ee9a370708515d27d9173f0f8524bed4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:06:32.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-k4gkr" for this suite.
Jan  5 13:06:45.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:06:45.304: INFO: namespace: e2e-tests-deployment-k4gkr, resource: bindings, ignored listing per whitelist
Jan  5 13:06:45.677: INFO: namespace e2e-tests-deployment-k4gkr deletion completed in 12.662317791s

• [SLOW TEST:56.210 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
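The rollover the test performs through the API corresponds to ordinary kubectl verbs; a rough, simplified sketch with illustrative names (only the two images, and the idea of waiting for the new ReplicaSet as the poll loop above does, are taken from this run):

kubectl create deployment rollover-demo --image=gcr.io/google_samples/gb-redisslave:nonexistent   # first image never becomes ready, mirroring the setup above
kubectl set image deployment/rollover-demo '*=gcr.io/kubernetes-e2e-test-images/redis:1.0'        # roll over to the new image
kubectl rollout status deployment/rollover-demo    # blocks until the new ReplicaSet is fully available
kubectl get rs -l app=rollover-demo                # old ReplicaSets end up with 0 replicas once the rollover completes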
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 13:06:45.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-39db081e-2fbc-11ea-910c-0242ac110004
STEP: Creating a pod to test consume configMaps
Jan  5 13:06:45.946: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-39dc88d5-2fbc-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-tzcf7" to be "success or failure"
Jan  5 13:06:45.985: INFO: Pod "pod-projected-configmaps-39dc88d5-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 38.527902ms
Jan  5 13:06:47.996: INFO: Pod "pod-projected-configmaps-39dc88d5-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049145173s
Jan  5 13:06:50.086: INFO: Pod "pod-projected-configmaps-39dc88d5-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138958786s
Jan  5 13:06:52.463: INFO: Pod "pod-projected-configmaps-39dc88d5-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.516392693s
Jan  5 13:06:54.588: INFO: Pod "pod-projected-configmaps-39dc88d5-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.641084561s
Jan  5 13:06:56.649: INFO: Pod "pod-projected-configmaps-39dc88d5-2fbc-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.702887329s
STEP: Saw pod success
Jan  5 13:06:56.650: INFO: Pod "pod-projected-configmaps-39dc88d5-2fbc-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 13:06:56.714: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-39dc88d5-2fbc-11ea-910c-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  5 13:06:57.085: INFO: Waiting for pod pod-projected-configmaps-39dc88d5-2fbc-11ea-910c-0242ac110004 to disappear
Jan  5 13:06:57.101: INFO: Pod pod-projected-configmaps-39dc88d5-2fbc-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:06:57.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tzcf7" for this suite.
Jan  5 13:07:05.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:07:05.387: INFO: namespace: e2e-tests-projected-tzcf7, resource: bindings, ignored listing per whitelist
Jan  5 13:07:05.592: INFO: namespace e2e-tests-projected-tzcf7 deletion completed in 8.387062607s

• [SLOW TEST:19.915 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
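A pod of the same shape as the one this test builds consumes a configMap through a projected volume with a key-to-path mapping and an explicit per-item mode; the names, key and value below are illustrative:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/projected/mapped-data"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: data-1
            path: mapped-data
            mode: 0400          # the per-item mode the test asserts on
EOF
kubectl logs projected-configmap-demo   # prints "value-1"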
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 13:07:05.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-45b9ea73-2fbc-11ea-910c-0242ac110004
STEP: Creating a pod to test consume configMaps
Jan  5 13:07:06.087: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-45beedf3-2fbc-11ea-910c-0242ac110004" in namespace "e2e-tests-projected-86fvh" to be "success or failure"
Jan  5 13:07:06.103: INFO: Pod "pod-projected-configmaps-45beedf3-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 15.999705ms
Jan  5 13:07:08.760: INFO: Pod "pod-projected-configmaps-45beedf3-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.672957886s
Jan  5 13:07:11.525: INFO: Pod "pod-projected-configmaps-45beedf3-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 5.437629225s
Jan  5 13:07:13.552: INFO: Pod "pod-projected-configmaps-45beedf3-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.465550704s
Jan  5 13:07:15.982: INFO: Pod "pod-projected-configmaps-45beedf3-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.895316459s
Jan  5 13:07:18.009: INFO: Pod "pod-projected-configmaps-45beedf3-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.921995806s
Jan  5 13:07:20.039: INFO: Pod "pod-projected-configmaps-45beedf3-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 13.951840955s
Jan  5 13:07:22.150: INFO: Pod "pod-projected-configmaps-45beedf3-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 16.062775247s
Jan  5 13:07:24.176: INFO: Pod "pod-projected-configmaps-45beedf3-2fbc-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.089406342s
STEP: Saw pod success
Jan  5 13:07:24.177: INFO: Pod "pod-projected-configmaps-45beedf3-2fbc-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 13:07:24.192: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-45beedf3-2fbc-11ea-910c-0242ac110004 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  5 13:07:26.914: INFO: Waiting for pod pod-projected-configmaps-45beedf3-2fbc-11ea-910c-0242ac110004 to disappear
Jan  5 13:07:27.281: INFO: Pod pod-projected-configmaps-45beedf3-2fbc-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:07:27.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-86fvh" for this suite.
Jan  5 13:07:33.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:07:33.502: INFO: namespace: e2e-tests-projected-86fvh, resource: bindings, ignored listing per whitelist
Jan  5 13:07:33.513: INFO: namespace e2e-tests-projected-86fvh deletion completed in 6.212395898s

• [SLOW TEST:27.921 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 13:07:33.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-56569fa3-2fbc-11ea-910c-0242ac110004
STEP: Creating a pod to test consume secrets
Jan  5 13:07:33.724: INFO: Waiting up to 5m0s for pod "pod-secrets-56575943-2fbc-11ea-910c-0242ac110004" in namespace "e2e-tests-secrets-qlf2p" to be "success or failure"
Jan  5 13:07:33.732: INFO: Pod "pod-secrets-56575943-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.812748ms
Jan  5 13:07:35.921: INFO: Pod "pod-secrets-56575943-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197293672s
Jan  5 13:07:37.968: INFO: Pod "pod-secrets-56575943-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243646788s
Jan  5 13:07:39.991: INFO: Pod "pod-secrets-56575943-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.266901457s
Jan  5 13:07:42.012: INFO: Pod "pod-secrets-56575943-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.287679093s
Jan  5 13:07:44.025: INFO: Pod "pod-secrets-56575943-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.30152603s
Jan  5 13:07:46.050: INFO: Pod "pod-secrets-56575943-2fbc-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.326035287s
STEP: Saw pod success
Jan  5 13:07:46.050: INFO: Pod "pod-secrets-56575943-2fbc-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 13:07:46.058: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-56575943-2fbc-11ea-910c-0242ac110004 container secret-volume-test: 
STEP: delete the pod
Jan  5 13:07:46.182: INFO: Waiting for pod pod-secrets-56575943-2fbc-11ea-910c-0242ac110004 to disappear
Jan  5 13:07:46.189: INFO: Pod pod-secrets-56575943-2fbc-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:07:46.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qlf2p" for this suite.
Jan  5 13:07:54.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:07:54.349: INFO: namespace: e2e-tests-secrets-qlf2p, resource: bindings, ignored listing per whitelist
Jan  5 13:07:54.365: INFO: namespace e2e-tests-secrets-qlf2p deletion completed in 8.16648951s

• [SLOW TEST:20.851 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
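The non-root / defaultMode / fsGroup combination exercised here looks roughly like this as a manifest; the UID/GID values, names and image are illustrative:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # non-root
    fsGroup: 2000          # group ownership applied to the secret volume
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "id && ls -ln /etc/secret && cat /etc/secret/data-1"]
    volumeMounts:
    - name: secret
      mountPath: /etc/secret
  volumes:
  - name: secret
    secret:
      secretName: demo-secret
      defaultMode: 0440    # group-readable so the fsGroup can read it
EOF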
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 13:07:54.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:08:54.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-pb6p4" for this suite.
Jan  5 13:09:20.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:09:20.777: INFO: namespace: e2e-tests-container-probe-pb6p4, resource: bindings, ignored listing per whitelist
Jan  5 13:09:20.859: INFO: namespace e2e-tests-container-probe-pb6p4 deletion completed in 26.175740955s

• [SLOW TEST:86.494 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
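A readiness probe that always fails keeps a pod Running but never Ready, and, unlike a failing liveness probe, never causes a restart, which is what this test spends the minute above confirming. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod readiness-never-demo -w   # READY stays 0/1 and RESTARTS stays 0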
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 13:09:20.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan  5 13:09:21.117: INFO: Waiting up to 5m0s for pod "client-containers-96596b9b-2fbc-11ea-910c-0242ac110004" in namespace "e2e-tests-containers-khj2d" to be "success or failure"
Jan  5 13:09:21.123: INFO: Pod "client-containers-96596b9b-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.512016ms
Jan  5 13:09:23.147: INFO: Pod "client-containers-96596b9b-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030222979s
Jan  5 13:09:25.188: INFO: Pod "client-containers-96596b9b-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071491067s
Jan  5 13:09:27.523: INFO: Pod "client-containers-96596b9b-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.405637314s
Jan  5 13:09:30.092: INFO: Pod "client-containers-96596b9b-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.975418985s
Jan  5 13:09:32.106: INFO: Pod "client-containers-96596b9b-2fbc-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.98878157s
STEP: Saw pod success
Jan  5 13:09:32.106: INFO: Pod "client-containers-96596b9b-2fbc-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 13:09:32.110: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-96596b9b-2fbc-11ea-910c-0242ac110004 container test-container: 
STEP: delete the pod
Jan  5 13:09:32.601: INFO: Waiting for pod client-containers-96596b9b-2fbc-11ea-910c-0242ac110004 to disappear
Jan  5 13:09:32.619: INFO: Pod client-containers-96596b9b-2fbc-11ea-910c-0242ac110004 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:09:32.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-khj2d" for this suite.
Jan  5 13:09:38.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:09:38.844: INFO: namespace: e2e-tests-containers-khj2d, resource: bindings, ignored listing per whitelist
Jan  5 13:09:39.034: INFO: namespace e2e-tests-containers-khj2d deletion completed in 6.402441077s

• [SLOW TEST:18.174 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
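Overriding an image's default command (its Docker ENTRYPOINT) is done with the container's command field; args would override the image's CMD instead. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-entrypoint-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    command: ["echo", "entrypoint overridden"]   # replaces the image ENTRYPOINT
EOF
kubectl logs override-entrypoint-demo   # prints "entrypoint overridden"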
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 13:09:39.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan  5 13:09:39.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan  5 13:09:39.422: INFO: stderr: ""
Jan  5 13:09:39.422: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan  5 13:09:39.433: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:09:39.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4vmks" for this suite.
Jan  5 13:09:45.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:09:45.530: INFO: namespace: e2e-tests-kubectl-4vmks, resource: bindings, ignored listing per whitelist
Jan  5 13:09:45.639: INFO: namespace e2e-tests-kubectl-4vmks deletion completed in 6.182283013s

S [SKIPPING] [6.604 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan  5 13:09:39.433: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
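The spec above is skipped rather than failed: the kubectl client reports v1.13.12, but the API server is v1.13.8, below the minimum the test requires. A rough sketch of that style of version gate, assuming the k8s.io/apimachinery/pkg/util/version package (the framework's own helper may differ):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/version"
)

func main() {
	// Values mirroring this run: server v1.13.8, required minimum 1.13.12.
	server := version.MustParseSemantic("1.13.8")
	required := version.MustParseSemantic("1.13.12")

	if server.LessThan(required) {
		// A conformance spec skips in this case instead of reporting a failure.
		fmt.Printf("skipping: not supported for server versions before %s\n", required)
		return
	}
	fmt.Println("server version is new enough; running the spec")
}
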
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 13:09:45.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan  5 13:09:45.870: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a51730b5-2fbc-11ea-910c-0242ac110004" in namespace "e2e-tests-downward-api-2fgqn" to be "success or failure"
Jan  5 13:09:45.947: INFO: Pod "downwardapi-volume-a51730b5-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 76.525245ms
Jan  5 13:09:47.997: INFO: Pod "downwardapi-volume-a51730b5-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126528111s
Jan  5 13:09:50.043: INFO: Pod "downwardapi-volume-a51730b5-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171952227s
Jan  5 13:09:53.608: INFO: Pod "downwardapi-volume-a51730b5-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.736990598s
Jan  5 13:09:55.630: INFO: Pod "downwardapi-volume-a51730b5-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.759490093s
Jan  5 13:09:57.653: INFO: Pod "downwardapi-volume-a51730b5-2fbc-11ea-910c-0242ac110004": Phase="Pending", Reason="", readiness=false. Elapsed: 11.782569328s
Jan  5 13:10:00.212: INFO: Pod "downwardapi-volume-a51730b5-2fbc-11ea-910c-0242ac110004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.341442528s
STEP: Saw pod success
Jan  5 13:10:00.212: INFO: Pod "downwardapi-volume-a51730b5-2fbc-11ea-910c-0242ac110004" satisfied condition "success or failure"
Jan  5 13:10:00.241: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a51730b5-2fbc-11ea-910c-0242ac110004 container client-container: 
STEP: delete the pod
Jan  5 13:10:00.814: INFO: Waiting for pod downwardapi-volume-a51730b5-2fbc-11ea-910c-0242ac110004 to disappear
Jan  5 13:10:00.882: INFO: Pod downwardapi-volume-a51730b5-2fbc-11ea-910c-0242ac110004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:10:00.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2fgqn" for this suite.
Jan  5 13:10:08.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:10:09.059: INFO: namespace: e2e-tests-downward-api-2fgqn, resource: bindings, ignored listing per whitelist
Jan  5 13:10:09.370: INFO: namespace e2e-tests-downward-api-2fgqn deletion completed in 8.470466909s

• [SLOW TEST:23.731 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
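The downward API volume test above projects the container's own CPU limit into a file inside the pod and then reads it back from the container's logs. A minimal sketch of the relevant volume source, assuming the k8s.io/api/core/v1 types; the volume, path, and container names here are hypothetical:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// A downward API volume exposing the container's CPU limit as a file.
	// ResourceFieldRef only resolves if the named container declares limits.cpu.
	vol := v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			DownwardAPI: &v1.DownwardAPIVolumeSource{
				Items: []v1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &v1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.cpu",
					},
				}},
			},
		},
	}
	fmt.Println(vol.VolumeSource.DownwardAPI.Items[0].Path)
}
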
SSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan  5 13:10:09.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan  5 13:10:25.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-rk6ht" for this suite.
Jan  5 13:10:51.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:10:51.458: INFO: namespace: e2e-tests-replication-controller-rk6ht, resource: bindings, ignored listing per whitelist
Jan  5 13:10:51.551: INFO: namespace e2e-tests-replication-controller-rk6ht deletion completed in 26.434884183s

• [SLOW TEST:42.180 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
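Adoption in the spec above works through label selection: the pre-existing pod carries a name label, the replication controller's selector matches it, and the controller then sets itself as the pod's controlling owner reference. A small sketch of inspecting that relationship, assuming core/v1 and apimachinery meta/v1 types (names are hypothetical):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	controller := true
	// A pod as it might look after adoption: the replication controller has
	// set itself as the controlling owner reference.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-adoption",
			Labels: map[string]string{"name": "pod-adoption"},
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion: "v1",
				Kind:       "ReplicationController",
				Name:       "pod-adoption",
				Controller: &controller,
			}},
		},
	}

	if ref := metav1.GetControllerOf(pod); ref != nil {
		fmt.Printf("adopted by %s %q\n", ref.Kind, ref.Name)
	} else {
		fmt.Println("pod is still an orphan")
	}
}
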
SSSS
Jan  5 13:10:51.551: INFO: Running AfterSuite actions on all nodes
Jan  5 13:10:51.552: INFO: Running AfterSuite actions on node 1
Jan  5 13:10:51.552: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-api-machinery] Namespaces [Serial] [It] should ensure that all pods are removed when a namespace is deleted [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:161

Ran 199 of 2164 Specs in 8624.083 seconds
FAIL! -- 198 Passed | 1 Failed | 0 Pending | 1965 Skipped --- FAIL: TestE2E (8624.97s)
FAIL
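
The lone failure is the Namespaces spec that waits for every pod in a deleted namespace to disappear. A rough sketch of that style of check, assuming a client-go release contemporary with this v1.13 cluster (typed calls without a context argument) and a hypothetical kubeconfig path and namespace name:

package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; error handling kept deliberately terse.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ns := "e2e-tests-namespace-example"
	// Poll until no pods remain in the (deleted) namespace, or time out.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods(ns).List(metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		return len(pods.Items) == 0, nil
	})
	if err != nil {
		fmt.Println("pods were not removed in time:", err)
		return
	}
	fmt.Println("all pods removed from namespace", ns)
}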