I0511 17:18:44.157948 6 e2e.go:224] Starting e2e run "772453d2-93ab-11ea-b832-0242ac110018" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589217523 - Will randomize all specs
Will run 201 of 2164 specs

May 11 17:18:44.344: INFO: >>> kubeConfig: /root/.kube/config
May 11 17:18:44.346: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 11 17:18:44.360: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 11 17:18:44.401: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 11 17:18:44.401: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 11 17:18:44.401: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 11 17:18:44.409: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 11 17:18:44.409: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 11 17:18:44.409: INFO: e2e test version: v1.13.12
May 11 17:18:44.410: INFO: kube-apiserver version: v1.13.12
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:18:44.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
May 11 17:18:44.518: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
May 11 17:18:44.524: INFO: Waiting up to 5m0s for pod "client-containers-77b40a78-93ab-11ea-b832-0242ac110018" in namespace "e2e-tests-containers-zqw4z" to be "success or failure"
May 11 17:18:44.535: INFO: Pod "client-containers-77b40a78-93ab-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 11.0672ms
May 11 17:18:48.504: INFO: Pod "client-containers-77b40a78-93ab-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.979849288s
May 11 17:18:51.244: INFO: Pod "client-containers-77b40a78-93ab-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.719547432s
May 11 17:18:53.248: INFO: Pod "client-containers-77b40a78-93ab-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.723946909s
STEP: Saw pod success
May 11 17:18:53.248: INFO: Pod "client-containers-77b40a78-93ab-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 17:18:53.251: INFO: Trying to get logs from node hunter-worker pod client-containers-77b40a78-93ab-11ea-b832-0242ac110018 container test-container:
STEP: delete the pod
May 11 17:18:53.522: INFO: Waiting for pod client-containers-77b40a78-93ab-11ea-b832-0242ac110018 to disappear
May 11 17:18:53.664: INFO: Pod client-containers-77b40a78-93ab-11ea-b832-0242ac110018 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:18:53.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-zqw4z" for this suite.
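The log never prints the manifest the framework builds for this test. A minimal YAML equivalent of a pod that overrides the image's default entrypoint via `command` would look roughly like the sketch below; only the container name (`test-container`) comes from the log, while the pod name, image, and command are illustrative assumptions:

```yaml
# Hedged sketch: the e2e framework constructs this pod in Go; this YAML is an
# illustrative equivalent, not the exact object the test submitted.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override          # hypothetical name (the real one is UUID-suffixed)
spec:
  restartPolicy: Never
  containers:
  - name: test-container                    # container name taken from the log
    image: busybox                          # assumption: any image with a default entrypoint
    command: ["/bin/sh", "-c", "echo entrypoint overridden"]  # replaces the image's ENTRYPOINT
```

In a pod spec, `command` overrides the image's Docker ENTRYPOINT and `args` overrides CMD; the test asserts the container runs the overridden command instead of the image default.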
May 11 17:19:00.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:19:01.099: INFO: namespace: e2e-tests-containers-zqw4z, resource: bindings, ignored listing per whitelist
May 11 17:19:01.108: INFO: namespace e2e-tests-containers-zqw4z deletion completed in 7.388730759s

• [SLOW TEST:16.699 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:19:01.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 11 17:19:02.046: INFO: Creating ReplicaSet my-hostname-basic-822668d6-93ab-11ea-b832-0242ac110018
May 11 17:19:02.264: INFO: Pod name my-hostname-basic-822668d6-93ab-11ea-b832-0242ac110018: Found 0 pods out of 1
May 11 17:19:07.273: INFO: Pod name my-hostname-basic-822668d6-93ab-11ea-b832-0242ac110018: Found 1 pods out of 1
May 11 17:19:07.273: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-822668d6-93ab-11ea-b832-0242ac110018" is running
May 11 17:19:09.278: INFO: Pod "my-hostname-basic-822668d6-93ab-11ea-b832-0242ac110018-fpgg7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 17:19:02 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 17:19:02 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-822668d6-93ab-11ea-b832-0242ac110018]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 17:19:02 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-822668d6-93ab-11ea-b832-0242ac110018]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 17:19:02 +0000 UTC Reason: Message:}])
May 11 17:19:09.278: INFO: Trying to dial the pod
May 11 17:19:14.357: INFO: Controller my-hostname-basic-822668d6-93ab-11ea-b832-0242ac110018: Got expected result from replica 1 [my-hostname-basic-822668d6-93ab-11ea-b832-0242ac110018-fpgg7]: "my-hostname-basic-822668d6-93ab-11ea-b832-0242ac110018-fpgg7", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:19:14.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-vtcfq" for this suite.
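The ReplicaSet exercised above is not printed as a manifest; a hedged YAML sketch of the shape it takes (name shortened, image and port are assumptions about the serve-hostname-style test image) is:

```yaml
# Hedged sketch of the ReplicaSet this test creates; the real object uses a
# UUID-suffixed name and the e2e framework's own test image.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic          # log name carries a UUID suffix
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # Assumption: an image that answers HTTP with the pod's hostname, which is
        # what the "Got expected result from replica 1" check above dials for.
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
```

The test then dials each replica and asserts the response equals the pod's own name, confirming every replica serves the public image.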
May 11 17:19:22.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:19:22.433: INFO: namespace: e2e-tests-replicaset-vtcfq, resource: bindings, ignored listing per whitelist
May 11 17:19:22.939: INFO: namespace e2e-tests-replicaset-vtcfq deletion completed in 8.579254562s

• [SLOW TEST:21.830 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:19:22.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-cwj5z
May 11 17:19:29.260: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-cwj5z
STEP: checking the pod's current state and verifying that restartCount is present
May 11 17:19:29.262: INFO: Initial restart count of pod liveness-http is 0
May 11 17:19:53.892: INFO: Restart count of pod e2e-tests-container-probe-cwj5z/liveness-http is now 1 (24.630180229s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:19:53.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-cwj5z" for this suite.
May 11 17:20:01.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:20:02.012: INFO: namespace: e2e-tests-container-probe-cwj5z, resource: bindings, ignored listing per whitelist
May 11 17:20:02.031: INFO: namespace e2e-tests-container-probe-cwj5z deletion completed in 8.08497241s

• [SLOW TEST:39.092 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:20:02.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-kvx8x
May 11 17:20:13.303: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-kvx8x
STEP: checking the pod's current state and verifying that restartCount is present
May 11 17:20:13.306: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:24:14.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-kvx8x" for this suite.
May 11 17:24:20.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:24:20.934: INFO: namespace: e2e-tests-container-probe-kvx8x, resource: bindings, ignored listing per whitelist
May 11 17:24:20.938: INFO: namespace e2e-tests-container-probe-kvx8x deletion completed in 6.088090722s

• [SLOW TEST:258.907 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:24:20.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
May 11 17:24:27.766: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-409a27c8-93ac-11ea-b832-0242ac110018", GenerateName:"", Namespace:"e2e-tests-pods-kcn59", SelfLink:"/api/v1/namespaces/e2e-tests-pods-kcn59/pods/pod-submit-remove-409a27c8-93ac-11ea-b832-0242ac110018", UID:"40a63e97-93ac-11ea-99e8-0242ac110002", ResourceVersion:"9985504", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724814661, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"572404773"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-n62g7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001c392c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil),
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n62g7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000f20f58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c2e1e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000f20fc0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000f21020)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000f21028), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000f2102c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724814661, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724814666, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724814666, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724814661, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.9", 
StartTime:(*v1.Time)(0xc001888b40), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001888ba0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://73a2edc27c58df71b33a4a1a3d8b372d6ba2af375bebabac1639907f97ef4062"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:24:41.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-kcn59" for this suite.
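The `running pod:` Go struct dump above is much easier to read condensed into its YAML equivalent. The sketch below keeps only the fields the dump shows as explicitly set; everything else (service account token volume, tolerations, QoS class, and so on) is API-server defaulting:

```yaml
# Condensed from the v1.Pod struct dump in the log; defaulted fields omitted.
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-409a27c8-93ac-11ea-b832-0242ac110018
  namespace: e2e-tests-pods-kcn59
  labels:
    name: foo
    time: "572404773"
spec:
  restartPolicy: Always
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
```

The test submits this pod through a watch, confirms the creation event, then deletes it gracefully and confirms the deletion event is observed.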
May 11 17:24:49.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:24:49.429: INFO: namespace: e2e-tests-pods-kcn59, resource: bindings, ignored listing per whitelist
May 11 17:24:49.461: INFO: namespace e2e-tests-pods-kcn59 deletion completed in 8.063270662s

• [SLOW TEST:28.523 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:24:49.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-5163cbbd-93ac-11ea-b832-0242ac110018
STEP: Creating a pod to test consume secrets
May 11 17:24:49.762: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-516676bb-93ac-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-f6ffb" to be "success or failure"
May 11 17:24:49.766: INFO: Pod "pod-projected-secrets-516676bb-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.949182ms
May 11 17:24:51.771: INFO: Pod "pod-projected-secrets-516676bb-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008562716s
May 11 17:24:53.807: INFO: Pod "pod-projected-secrets-516676bb-93ac-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044240255s
STEP: Saw pod success
May 11 17:24:53.807: INFO: Pod "pod-projected-secrets-516676bb-93ac-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 17:24:53.809: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-516676bb-93ac-11ea-b832-0242ac110018 container secret-volume-test:
STEP: delete the pod
May 11 17:24:53.876: INFO: Waiting for pod pod-projected-secrets-516676bb-93ac-11ea-b832-0242ac110018 to disappear
May 11 17:24:54.155: INFO: Pod pod-projected-secrets-516676bb-93ac-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:24:54.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f6ffb" for this suite.
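"Consumable in multiple volumes" means the same projected secret is mounted at two paths in one pod. A hedged sketch of that shape (only the container name and the secret-name pattern come from the log; mount paths, image, and command are illustrative assumptions):

```yaml
# Hedged sketch; the e2e framework builds this pod programmatically and the
# exact volume names and paths are not shown in the log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets            # real name is UUID-suffixed
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test             # container name from the log
    image: busybox                       # assumption
    command: ["cat", "/etc/projected-secret-volume-1/data-1"]  # hypothetical check
    volumeMounts:
    - name: projected-secret-volume-1
      mountPath: /etc/projected-secret-volume-1
      readOnly: true
    - name: projected-secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true
  volumes:
  - name: projected-secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test    # log name carries a UUID suffix
  - name: projected-secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
```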
May 11 17:25:00.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:25:00.379: INFO: namespace: e2e-tests-projected-f6ffb, resource: bindings, ignored listing per whitelist
May 11 17:25:00.388: INFO: namespace e2e-tests-projected-f6ffb deletion completed in 6.229823668s

• [SLOW TEST:10.927 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:25:00.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-580a847e-93ac-11ea-b832-0242ac110018
STEP: Creating a pod to test consume secrets
May 11 17:25:01.118: INFO: Waiting up to 5m0s for pod "pod-secrets-5812784e-93ac-11ea-b832-0242ac110018" in namespace "e2e-tests-secrets-hcrgr" to be "success or failure"
May 11 17:25:01.163: INFO: Pod "pod-secrets-5812784e-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 44.841704ms
May 11 17:25:03.166: INFO: Pod "pod-secrets-5812784e-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047598121s
May 11 17:25:05.170: INFO: Pod "pod-secrets-5812784e-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051098835s
May 11 17:25:07.174: INFO: Pod "pod-secrets-5812784e-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055123637s
May 11 17:25:09.177: INFO: Pod "pod-secrets-5812784e-93ac-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058650796s
STEP: Saw pod success
May 11 17:25:09.177: INFO: Pod "pod-secrets-5812784e-93ac-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 17:25:09.179: INFO: Trying to get logs from node hunter-worker pod pod-secrets-5812784e-93ac-11ea-b832-0242ac110018 container secret-volume-test:
STEP: delete the pod
May 11 17:25:09.211: INFO: Waiting for pod pod-secrets-5812784e-93ac-11ea-b832-0242ac110018 to disappear
May 11 17:25:09.254: INFO: Pod pod-secrets-5812784e-93ac-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:25:09.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hcrgr" for this suite.
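The `defaultMode` being tested here is the file-permission bits applied to every key projected from the secret volume. A hedged sketch of the relevant spec (secret name pattern and container name from the log; image, command, paths, and the specific mode are illustrative assumptions):

```yaml
# Hedged sketch; the exact manifest is built in Go by the e2e framework.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-defaultmode     # real name is UUID-suffixed
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test        # container name from the log
    image: busybox                  # assumption
    command: ["ls", "-l", "/etc/secret-volume"]   # hypothetical mode check
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test       # log name carries a UUID suffix
      defaultMode: 0400             # assumption: any non-default mode; applies to every key
```

Without `defaultMode`, secret files are created as 0644; the test verifies the override actually lands on the mounted files.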
May 11 17:25:15.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:25:15.306: INFO: namespace: e2e-tests-secrets-hcrgr, resource: bindings, ignored listing per whitelist
May 11 17:25:15.359: INFO: namespace e2e-tests-secrets-hcrgr deletion completed in 6.101786584s

• [SLOW TEST:14.970 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:25:15.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 11 17:25:15.650: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:25:15.652: INFO: Number of nodes with available pods: 0
May 11 17:25:15.652: INFO: Node hunter-worker is running more than one daemon pod
May 11 17:25:16.657: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:25:16.659: INFO: Number of nodes with available pods: 0
May 11 17:25:16.659: INFO: Node hunter-worker is running more than one daemon pod
May 11 17:25:17.657: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:25:17.661: INFO: Number of nodes with available pods: 0
May 11 17:25:17.661: INFO: Node hunter-worker is running more than one daemon pod
May 11 17:25:18.923: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:25:18.926: INFO: Number of nodes with available pods: 0
May 11 17:25:18.926: INFO: Node hunter-worker is running more than one daemon pod
May 11 17:25:19.699: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:25:19.702: INFO: Number of nodes with available pods: 0
May 11 17:25:19.702: INFO: Node hunter-worker is running more than one daemon pod
May 11 17:25:20.776: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:25:20.779: INFO: Number of nodes with available pods: 1
May 11 17:25:20.779: INFO: Node hunter-worker2 is running more than one daemon pod
May 11 17:25:21.657: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:25:21.661: INFO: Number of nodes with available pods: 2
May 11 17:25:21.661: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 11 17:25:21.697: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 17:25:21.714: INFO: Number of nodes with available pods: 2
May 11 17:25:21.714: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-jbcww, will wait for the garbage collector to delete the pods
May 11 17:25:23.392: INFO: Deleting DaemonSet.extensions daemon-set took: 118.557178ms
May 11 17:25:23.593: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.521226ms
May 11 17:25:31.320: INFO: Number of nodes with available pods: 0
May 11 17:25:31.320: INFO: Number of running nodes: 0, number of available pods: 0
May 11 17:25:31.325: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-jbcww/daemonsets","resourceVersion":"9985746"},"items":null}
May 11 17:25:31.358: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-jbcww/pods","resourceVersion":"9985747"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:25:31.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-jbcww" for this suite.
May 11 17:25:37.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:25:37.476: INFO: namespace: e2e-tests-daemonsets-jbcww, resource: bindings, ignored listing per whitelist
May 11 17:25:37.557: INFO: namespace e2e-tests-daemonsets-jbcww deletion completed in 6.186541889s

• [SLOW TEST:22.198 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:25:37.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-6df13aa5-93ac-11ea-b832-0242ac110018
STEP: Creating a pod to test consume configMaps
May 11 17:25:37.708: INFO: Waiting up to 5m0s for pod "pod-configmaps-6df36217-93ac-11ea-b832-0242ac110018" in namespace "e2e-tests-configmap-6q8bm" to be "success or failure"
May 11 17:25:37.714: INFO: Pod "pod-configmaps-6df36217-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.74639ms
May 11 17:25:39.792: INFO: Pod "pod-configmaps-6df36217-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08394094s
May 11 17:25:41.796: INFO: Pod "pod-configmaps-6df36217-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088020493s
May 11 17:25:43.800: INFO: Pod "pod-configmaps-6df36217-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091974631s
May 11 17:25:45.803: INFO: Pod "pod-configmaps-6df36217-93ac-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094931772s
STEP: Saw pod success
May 11 17:25:45.803: INFO: Pod "pod-configmaps-6df36217-93ac-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 17:25:45.805: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-6df36217-93ac-11ea-b832-0242ac110018 container configmap-volume-test:
STEP: delete the pod
May 11 17:25:45.872: INFO: Waiting for pod pod-configmaps-6df36217-93ac-11ea-b832-0242ac110018 to disappear
May 11 17:25:45.887: INFO: Pod pod-configmaps-6df36217-93ac-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:25:45.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6q8bm" for this suite.
May 11 17:25:51.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:25:51.917: INFO: namespace: e2e-tests-configmap-6q8bm, resource: bindings, ignored listing per whitelist
May 11 17:25:51.955: INFO: namespace e2e-tests-configmap-6q8bm deletion completed in 6.064414672s

• [SLOW TEST:14.398 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:25:51.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 11 17:25:52.281: INFO: Waiting up to 5m0s for pod "pod-7693f941-93ac-11ea-b832-0242ac110018" in namespace "e2e-tests-emptydir-gq4v6" to be "success or failure"
May 11 17:25:52.338: INFO: Pod "pod-7693f941-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 56.752159ms
May 11 17:25:54.728: INFO: Pod "pod-7693f941-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.447276243s
May 11 17:25:56.733: INFO: Pod "pod-7693f941-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.451659199s
May 11 17:25:58.909: INFO: Pod "pod-7693f941-93ac-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.627940664s
STEP: Saw pod success
May 11 17:25:58.909: INFO: Pod "pod-7693f941-93ac-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 17:25:58.912: INFO: Trying to get logs from node hunter-worker pod pod-7693f941-93ac-11ea-b832-0242ac110018 container test-container:
STEP: delete the pod
May 11 17:26:00.610: INFO: Waiting for pod pod-7693f941-93ac-11ea-b832-0242ac110018 to disappear
May 11 17:26:00.637: INFO: Pod pod-7693f941-93ac-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:26:00.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gq4v6" for this suite.
May 11 17:26:06.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:26:06.752: INFO: namespace: e2e-tests-emptydir-gq4v6, resource: bindings, ignored listing per whitelist
May 11 17:26:06.791: INFO: namespace e2e-tests-emptydir-gq4v6 deletion completed in 6.141890356s

• [SLOW TEST:14.836 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:26:06.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-7f69af15-93ac-11ea-b832-0242ac110018
STEP: Creating a pod to test consume configMaps
May 11 17:26:07.045: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7f72ad3e-93ac-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-bngx8" to be "success or failure"
May 11 17:26:07.062: INFO: Pod "pod-projected-configmaps-7f72ad3e-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.385905ms
May 11 17:26:09.357: INFO: Pod "pod-projected-configmaps-7f72ad3e-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311987422s
May 11 17:26:11.417: INFO: Pod "pod-projected-configmaps-7f72ad3e-93ac-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.37202661s
May 11 17:26:13.555: INFO: Pod "pod-projected-configmaps-7f72ad3e-93ac-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.510079685s
STEP: Saw pod success
May 11 17:26:13.555: INFO: Pod "pod-projected-configmaps-7f72ad3e-93ac-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 17:26:13.559: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-7f72ad3e-93ac-11ea-b832-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
May 11 17:26:14.143: INFO: Waiting for pod pod-projected-configmaps-7f72ad3e-93ac-11ea-b832-0242ac110018 to disappear
May 11 17:26:14.164: INFO: Pod pod-projected-configmaps-7f72ad3e-93ac-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:26:14.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bngx8" for this suite.
May 11 17:26:20.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:26:20.414: INFO: namespace: e2e-tests-projected-bngx8, resource: bindings, ignored listing per whitelist
May 11 17:26:20.444: INFO: namespace e2e-tests-projected-bngx8 deletion completed in 6.276513025s

• [SLOW TEST:13.653 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:26:20.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-8783af2e-93ac-11ea-b832-0242ac110018
STEP: Creating a pod to test consume configMaps
May 11 17:26:20.583: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-87892201-93ac-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-2mp96" to be "success or failure"
May 11 17:26:20.614: INFO: Pod "pod-projected-configmaps-87892201-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 30.607554ms
May 11 17:26:22.618: INFO: Pod "pod-projected-configmaps-87892201-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034252566s
May 11 17:26:24.645: INFO: Pod "pod-projected-configmaps-87892201-93ac-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061565823s
STEP: Saw pod success
May 11 17:26:24.645: INFO: Pod "pod-projected-configmaps-87892201-93ac-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 17:26:24.648: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-87892201-93ac-11ea-b832-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
May 11 17:26:24.687: INFO: Waiting for pod pod-projected-configmaps-87892201-93ac-11ea-b832-0242ac110018 to disappear
May 11 17:26:24.716: INFO: Pod pod-projected-configmaps-87892201-93ac-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:26:24.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2mp96" for this suite.
May 11 17:26:31.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:26:31.108: INFO: namespace: e2e-tests-projected-2mp96, resource: bindings, ignored listing per whitelist
May 11 17:26:31.131: INFO: namespace e2e-tests-projected-2mp96 deletion completed in 6.411018995s

• [SLOW TEST:10.686 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:26:31.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-8de196ce-93ac-11ea-b832-0242ac110018
STEP: Creating a pod to test consume configMaps
May 11 17:26:31.252: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8de449bd-93ac-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-xqllj" to be "success or failure"
May 11 17:26:31.271: INFO: Pod "pod-projected-configmaps-8de449bd-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.455142ms
May 11 17:26:33.412: INFO: Pod "pod-projected-configmaps-8de449bd-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160303373s
May 11 17:26:35.430: INFO: Pod "pod-projected-configmaps-8de449bd-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178005141s
May 11 17:26:37.432: INFO: Pod "pod-projected-configmaps-8de449bd-93ac-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.180630272s
STEP: Saw pod success
May 11 17:26:37.432: INFO: Pod "pod-projected-configmaps-8de449bd-93ac-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 17:26:37.434: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-8de449bd-93ac-11ea-b832-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
May 11 17:26:37.558: INFO: Waiting for pod pod-projected-configmaps-8de449bd-93ac-11ea-b832-0242ac110018 to disappear
May 11 17:26:37.603: INFO: Pod pod-projected-configmaps-8de449bd-93ac-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:26:37.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xqllj" for this suite.
May 11 17:26:43.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:26:43.672: INFO: namespace: e2e-tests-projected-xqllj, resource: bindings, ignored listing per whitelist
May 11 17:26:43.701: INFO: namespace e2e-tests-projected-xqllj deletion completed in 6.095890886s

• [SLOW TEST:12.571 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:26:43.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:26:50.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-8b89m" for this suite.
May 11 17:27:15.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:27:15.024: INFO: namespace: e2e-tests-replication-controller-8b89m, resource: bindings, ignored listing per whitelist
May 11 17:27:15.072: INFO: namespace e2e-tests-replication-controller-8b89m deletion completed in 24.104863099s

• [SLOW TEST:31.371 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:27:15.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-a8224ae2-93ac-11ea-b832-0242ac110018
STEP: Creating a pod to test consume secrets
May 11 17:27:15.354: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a825aba1-93ac-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-dgr49" to be "success or failure"
May 11 17:27:15.424: INFO: Pod "pod-projected-secrets-a825aba1-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 69.388726ms
May 11 17:27:17.427: INFO: Pod "pod-projected-secrets-a825aba1-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072529691s
May 11 17:27:19.430: INFO: Pod "pod-projected-secrets-a825aba1-93ac-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.075159923s
May 11 17:27:21.434: INFO: Pod "pod-projected-secrets-a825aba1-93ac-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079612624s
STEP: Saw pod success
May 11 17:27:21.434: INFO: Pod "pod-projected-secrets-a825aba1-93ac-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 17:27:21.437: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-a825aba1-93ac-11ea-b832-0242ac110018 container projected-secret-volume-test:
STEP: delete the pod
May 11 17:27:21.498: INFO: Waiting for pod pod-projected-secrets-a825aba1-93ac-11ea-b832-0242ac110018 to disappear
May 11 17:27:21.568: INFO: Pod pod-projected-secrets-a825aba1-93ac-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:27:21.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dgr49" for this suite.
May 11 17:27:27.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:27:27.709: INFO: namespace: e2e-tests-projected-dgr49, resource: bindings, ignored listing per whitelist
May 11 17:27:27.717: INFO: namespace e2e-tests-projected-dgr49 deletion completed in 6.143819262s

• [SLOW TEST:12.644 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch
  should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:27:27.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
May 11 17:27:27.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rgxj2'
May 11 17:27:30.971: INFO: stderr: ""
May 11 17:27:30.971: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
May 11 17:27:32.118: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:27:32.118: INFO: Found 0 / 1
May 11 17:27:32.976: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:27:32.976: INFO: Found 0 / 1
May 11 17:27:34.437: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:27:34.437: INFO: Found 0 / 1
May 11 17:27:35.132: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:27:35.132: INFO: Found 0 / 1
May 11 17:27:35.976: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:27:35.976: INFO: Found 0 / 1
May 11 17:27:37.132: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:27:37.132: INFO: Found 0 / 1
May 11 17:27:38.005: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:27:38.005: INFO: Found 1 / 1
May 11 17:27:38.005: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
May 11 17:27:38.008: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:27:38.008: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May 11 17:27:38.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-tnbln --namespace=e2e-tests-kubectl-rgxj2 -p {"metadata":{"annotations":{"x":"y"}}}'
May 11 17:27:38.107: INFO: stderr: ""
May 11 17:27:38.107: INFO: stdout: "pod/redis-master-tnbln patched\n"
STEP: checking annotations
May 11 17:27:38.215: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:27:38.215: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:27:38.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rgxj2" for this suite.
May 11 17:28:04.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:28:04.332: INFO: namespace: e2e-tests-kubectl-rgxj2, resource: bindings, ignored listing per whitelist
May 11 17:28:04.335: INFO: namespace e2e-tests-kubectl-rgxj2 deletion completed in 26.116039131s

• [SLOW TEST:36.618 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:28:04.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 11 17:28:04.439: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c56e66ba-93ac-11ea-b832-0242ac110018" in namespace "e2e-tests-downward-api-dbpg9" to be "success or failure"
May 11 17:28:04.473: INFO: Pod "downwardapi-volume-c56e66ba-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 33.30073ms
May 11 17:28:06.605: INFO: Pod "downwardapi-volume-c56e66ba-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165823419s
May 11 17:28:08.608: INFO: Pod "downwardapi-volume-c56e66ba-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168672313s
May 11 17:28:10.611: INFO: Pod "downwardapi-volume-c56e66ba-93ac-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.172166623s
STEP: Saw pod success
May 11 17:28:10.611: INFO: Pod "downwardapi-volume-c56e66ba-93ac-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 17:28:10.614: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c56e66ba-93ac-11ea-b832-0242ac110018 container client-container:
STEP: delete the pod
May 11 17:28:10.959: INFO: Waiting for pod downwardapi-volume-c56e66ba-93ac-11ea-b832-0242ac110018 to disappear
May 11 17:28:11.227: INFO: Pod downwardapi-volume-c56e66ba-93ac-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:28:11.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dbpg9" for this suite.
May 11 17:28:17.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:28:17.410: INFO: namespace: e2e-tests-downward-api-dbpg9, resource: bindings, ignored listing per whitelist
May 11 17:28:17.453: INFO: namespace e2e-tests-downward-api-dbpg9 deletion completed in 6.152163043s

• [SLOW TEST:13.117 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:28:17.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
May 11 17:28:17.744: INFO: Waiting up to 5m0s for pod "pod-cd5a8cb8-93ac-11ea-b832-0242ac110018" in namespace "e2e-tests-emptydir-j6l7x" to be "success or failure"
May 11 17:28:17.754: INFO: Pod "pod-cd5a8cb8-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.177064ms
May 11 17:28:19.757: INFO: Pod "pod-cd5a8cb8-93ac-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013919998s
May 11 17:28:21.761: INFO: Pod "pod-cd5a8cb8-93ac-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.017713389s
May 11 17:28:23.797: INFO: Pod "pod-cd5a8cb8-93ac-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053019933s
STEP: Saw pod success
May 11 17:28:23.797: INFO: Pod "pod-cd5a8cb8-93ac-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 17:28:23.801: INFO: Trying to get logs from node hunter-worker pod pod-cd5a8cb8-93ac-11ea-b832-0242ac110018 container test-container:
STEP: delete the pod
May 11 17:28:23.876: INFO: Waiting for pod pod-cd5a8cb8-93ac-11ea-b832-0242ac110018 to disappear
May 11 17:28:24.018: INFO: Pod pod-cd5a8cb8-93ac-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:28:24.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-j6l7x" for this suite.
May 11 17:28:32.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:28:32.276: INFO: namespace: e2e-tests-emptydir-j6l7x, resource: bindings, ignored listing per whitelist
May 11 17:28:32.289: INFO: namespace e2e-tests-emptydir-j6l7x deletion completed in 8.267016242s
• [SLOW TEST:14.836 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
volume on default medium should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:28:32.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 11 17:28:45.024: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 17:28:45.211: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 17:28:47.211: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 17:28:47.270: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 17:28:49.211: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 17:28:49.215: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 17:28:51.211: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 17:28:51.215: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 17:28:53.211: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 17:28:53.215: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 17:28:55.211: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 17:28:55.217: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 17:28:57.211: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 17:28:57.214: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 17:28:59.211: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 17:28:59.213: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 17:29:01.211: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 17:29:01.246: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 17:29:03.211: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 17:29:03.312: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 17:29:05.211: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 17:29:05.213: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 17:29:07.211: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 17:29:07.215: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 17:29:09.211: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 17:29:09.288: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 17:29:11.211: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 17:29:11.214: INFO: Pod pod-with-poststart-exec-hook still exists
May 11 17:29:13.211: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 11 17:29:13.214: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:29:13.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9v587" for this suite.
May 11 17:29:37.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:29:37.300: INFO: namespace: e2e-tests-container-lifecycle-hook-9v587, resource: bindings, ignored listing per whitelist
May 11 17:29:37.348: INFO: namespace e2e-tests-container-lifecycle-hook-9v587 deletion completed in 24.131139755s
• [SLOW TEST:65.059 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:29:37.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
May 11 17:29:43.510: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-fce7c0c2-93ac-11ea-b832-0242ac110018,GenerateName:,Namespace:e2e-tests-events-dv6pb,SelfLink:/api/v1/namespaces/e2e-tests-events-dv6pb/pods/send-events-fce7c0c2-93ac-11ea-b832-0242ac110018,UID:fce887d4-93ac-11ea-99e8-0242ac110002,ResourceVersion:9986561,Generation:0,CreationTimestamp:2020-05-11 17:29:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 492607267,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-g9qql {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g9qql,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-g9qql true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000f6a0a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000f6a0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:29:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:29:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:29:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:29:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.15,StartTime:2020-05-11 17:29:37 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-11 17:29:42 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://72df6230acb6cdf8c7279e16a8a5ad8d4ef47f65fc92b16c6bf49213c8b84e8f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
May 11 17:29:45.546: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
May 11 17:29:47.550: INFO: Saw kubelet event for our pod.
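The "checking for scheduler event" / "checking for kubelet event" steps select events whose source component matches. A hedged sketch of that filter, assuming events are plain dicts shaped loosely like v1.Event objects (a hypothetical stand-in for a real API list call):

```python
def events_from_component(events, component, pod_name):
    """Return events emitted by `component` about the pod `pod_name`.

    `events` is a list of dicts mimicking v1.Event: each has a
    "source" dict with a "component" key (e.g. "default-scheduler"
    or "kubelet") and an "involvedObject" dict naming the object.
    """
    return [
        e for e in events
        if e.get("source", {}).get("component") == component
        and e.get("involvedObject", {}).get("name") == pod_name
    ]
```

The real test does the equivalent with a field selector on the events API; this sketch only illustrates the matching criteria the log entries report.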
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:29:47.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-dv6pb" for this suite.
May 11 17:30:25.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:30:25.722: INFO: namespace: e2e-tests-events-dv6pb, resource: bindings, ignored listing per whitelist
May 11 17:30:25.794: INFO: namespace e2e-tests-events-dv6pb deletion completed in 38.198787533s
• [SLOW TEST:48.446 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:30:25.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
May 11 17:30:26.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-92qpq'
May 11 17:30:26.429: INFO: stderr: ""
May 11 17:30:26.429: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 11 17:30:26.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-92qpq'
May 11 17:30:26.540: INFO: stderr: ""
May 11 17:30:26.540: INFO: stdout: "update-demo-nautilus-454vh update-demo-nautilus-k28jd "
May 11 17:30:26.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-454vh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92qpq'
May 11 17:30:26.627: INFO: stderr: ""
May 11 17:30:26.627: INFO: stdout: ""
May 11 17:30:26.627: INFO: update-demo-nautilus-454vh is created but not running
May 11 17:30:31.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-92qpq'
May 11 17:30:32.045: INFO: stderr: ""
May 11 17:30:32.045: INFO: stdout: "update-demo-nautilus-454vh update-demo-nautilus-k28jd "
May 11 17:30:32.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-454vh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92qpq'
May 11 17:30:32.234: INFO: stderr: ""
May 11 17:30:32.234: INFO: stdout: "true"
May 11 17:30:32.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-454vh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92qpq'
May 11 17:30:32.324: INFO: stderr: ""
May 11 17:30:32.324: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 11 17:30:32.324: INFO: validating pod update-demo-nautilus-454vh
May 11 17:30:32.328: INFO: got data: { "image": "nautilus.jpg" }
May 11 17:30:32.328: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 11 17:30:32.328: INFO: update-demo-nautilus-454vh is verified up and running
May 11 17:30:32.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k28jd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92qpq'
May 11 17:30:32.435: INFO: stderr: ""
May 11 17:30:32.435: INFO: stdout: ""
May 11 17:30:32.435: INFO: update-demo-nautilus-k28jd is created but not running
May 11 17:30:37.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-92qpq'
May 11 17:30:38.085: INFO: stderr: ""
May 11 17:30:38.085: INFO: stdout: "update-demo-nautilus-454vh update-demo-nautilus-k28jd "
May 11 17:30:38.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-454vh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92qpq'
May 11 17:30:38.306: INFO: stderr: ""
May 11 17:30:38.306: INFO: stdout: "true"
May 11 17:30:38.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-454vh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92qpq'
May 11 17:30:38.491: INFO: stderr: ""
May 11 17:30:38.491: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 11 17:30:38.491: INFO: validating pod update-demo-nautilus-454vh
May 11 17:30:38.495: INFO: got data: { "image": "nautilus.jpg" }
May 11 17:30:38.495: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 11 17:30:38.495: INFO: update-demo-nautilus-454vh is verified up and running
May 11 17:30:38.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k28jd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92qpq'
May 11 17:30:38.580: INFO: stderr: ""
May 11 17:30:38.580: INFO: stdout: "true"
May 11 17:30:38.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k28jd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92qpq'
May 11 17:30:38.686: INFO: stderr: ""
May 11 17:30:38.686: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 11 17:30:38.687: INFO: validating pod update-demo-nautilus-k28jd
May 11 17:30:38.691: INFO: got data: { "image": "nautilus.jpg" }
May 11 17:30:38.691: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 11 17:30:38.691: INFO: update-demo-nautilus-k28jd is verified up and running
STEP: rolling-update to new replication controller
May 11 17:30:38.781: INFO: scanned /root for discovery docs:
May 11 17:30:38.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-92qpq'
May 11 17:31:11.042: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 11 17:31:11.042: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 11 17:31:11.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-92qpq'
May 11 17:31:11.204: INFO: stderr: ""
May 11 17:31:11.204: INFO: stdout: "update-demo-kitten-fv2vx update-demo-kitten-q4z8n "
May 11 17:31:11.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fv2vx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92qpq'
May 11 17:31:11.296: INFO: stderr: ""
May 11 17:31:11.296: INFO: stdout: "true"
May 11 17:31:11.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fv2vx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92qpq'
May 11 17:31:11.382: INFO: stderr: ""
May 11 17:31:11.382: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 11 17:31:11.382: INFO: validating pod update-demo-kitten-fv2vx
May 11 17:31:11.386: INFO: got data: { "image": "kitten.jpg" }
May 11 17:31:11.386: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 11 17:31:11.386: INFO: update-demo-kitten-fv2vx is verified up and running
May 11 17:31:11.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-q4z8n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92qpq'
May 11 17:31:11.485: INFO: stderr: ""
May 11 17:31:11.485: INFO: stdout: "true"
May 11 17:31:11.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-q4z8n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92qpq'
May 11 17:31:11.593: INFO: stderr: ""
May 11 17:31:11.593: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 11 17:31:11.593: INFO: validating pod update-demo-kitten-q4z8n
May 11 17:31:11.596: INFO: got data: { "image": "kitten.jpg" }
May 11 17:31:11.596: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 11 17:31:11.596: INFO: update-demo-kitten-q4z8n is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:31:11.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-92qpq" for this suite.
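Each "validating pod" line fetches the JSON the pod serves and compares the reported image name against the expected one (nautilus.jpg before the rolling update, kitten.jpg after). A minimal sketch of that check, with `fetch_data` as a hypothetical stand-in for the HTTP GET against the pod:

```python
import json


def validate_pod_image(fetch_data, expected_image):
    """Check the image name a pod reports against the expected one.

    `fetch_data` is a hypothetical callable returning the pod's JSON
    payload, e.g. '{ "image": "kitten.jpg" }' as seen in the log.
    Raises AssertionError on mismatch, mirroring a test failure.
    """
    data = json.loads(fetch_data())
    got = data.get("image")
    if got != expected_image:
        raise AssertionError(f"expected {expected_image}, got {got}")
    return got
```

The real Update Demo test performs this comparison in Go after proxying to the pod; the sketch only captures the unmarshal-and-compare step the log entries describe.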
May 11 17:31:39.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:31:39.829: INFO: namespace: e2e-tests-kubectl-92qpq, resource: bindings, ignored listing per whitelist
May 11 17:31:40.004: INFO: namespace e2e-tests-kubectl-92qpq deletion completed in 28.40579534s
• [SLOW TEST:74.210 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:31:40.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 11 17:31:41.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:31:47.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-f8cps" for this suite.
May 11 17:32:27.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:32:27.405: INFO: namespace: e2e-tests-pods-f8cps, resource: bindings, ignored listing per whitelist
May 11 17:32:27.444: INFO: namespace e2e-tests-pods-f8cps deletion completed in 40.137345694s
• [SLOW TEST:47.440 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:32:27.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
May 11 17:32:28.005: INFO: Pod name pod-release: Found 0 pods out of 1
May 11 17:32:33.213: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:32:34.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-l7jmg" for this suite.
May 11 17:32:45.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:32:45.071: INFO: namespace: e2e-tests-replication-controller-l7jmg, resource: bindings, ignored listing per whitelist
May 11 17:32:45.110: INFO: namespace e2e-tests-replication-controller-l7jmg deletion completed in 10.62049961s
• [SLOW TEST:17.666 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:32:45.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-58vsq A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-58vsq;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-58vsq A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-58vsq;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-58vsq.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-58vsq.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-58vsq.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-58vsq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-58vsq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-58vsq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-58vsq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-58vsq.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-58vsq.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 8.226.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.226.8_udp@PTR;check="$$(dig +tcp +noall +answer +search 8.226.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.226.8_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-58vsq A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-58vsq;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-58vsq A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-58vsq;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-58vsq.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-58vsq.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-58vsq.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-58vsq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-58vsq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-58vsq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-58vsq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-58vsq.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-58vsq.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 8.226.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.226.8_udp@PTR;check="$$(dig +tcp +noall +answer +search 8.226.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.226.8_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 11 17:32:57.399: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:32:57.412: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:32:57.429: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:32:57.432: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:32:57.434: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-58vsq from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:32:57.435: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-58vsq from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:32:57.438: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:32:57.440: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:32:57.442: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:32:57.444: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:32:57.460: INFO: Lookups using e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018 failed for: [wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-58vsq jessie_tcp@dns-test-service.e2e-tests-dns-58vsq jessie_udp@dns-test-service.e2e-tests-dns-58vsq.svc jessie_tcp@dns-test-service.e2e-tests-dns-58vsq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc]
May 11 17:33:02.468: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:33:02.481: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:33:02.499: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:33:02.501: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:33:02.503: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-58vsq from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:33:02.506: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-58vsq from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018)
May 11 17:33:02.508: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server
could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:02.511: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:02.514: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:02.516: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:02.532: INFO: Lookups using e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018 failed for: [wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-58vsq jessie_tcp@dns-test-service.e2e-tests-dns-58vsq jessie_udp@dns-test-service.e2e-tests-dns-58vsq.svc jessie_tcp@dns-test-service.e2e-tests-dns-58vsq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc] May 11 17:33:07.664: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:07.680: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find 
the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:08.090: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:08.092: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:08.094: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-58vsq from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:08.097: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-58vsq from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:08.099: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:08.101: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:08.103: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:08.105: INFO: 
Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:08.119: INFO: Lookups using e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018 failed for: [wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-58vsq jessie_tcp@dns-test-service.e2e-tests-dns-58vsq jessie_udp@dns-test-service.e2e-tests-dns-58vsq.svc jessie_tcp@dns-test-service.e2e-tests-dns-58vsq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc] May 11 17:33:12.470: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:12.488: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:12.626: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:12.628: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:12.630: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-58vsq from pod 
e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:12.632: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-58vsq from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:12.634: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:12.636: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:12.638: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:12.640: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:13.346: INFO: Lookups using e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018 failed for: [wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-58vsq jessie_tcp@dns-test-service.e2e-tests-dns-58vsq jessie_udp@dns-test-service.e2e-tests-dns-58vsq.svc 
jessie_tcp@dns-test-service.e2e-tests-dns-58vsq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc] May 11 17:33:17.467: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:17.476: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:17.493: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:17.495: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:17.497: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-58vsq from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:17.499: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-58vsq from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:17.502: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods 
dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:17.504: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:17.506: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:17.508: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:17.518: INFO: Lookups using e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018 failed for: [wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-58vsq jessie_tcp@dns-test-service.e2e-tests-dns-58vsq jessie_udp@dns-test-service.e2e-tests-dns-58vsq.svc jessie_tcp@dns-test-service.e2e-tests-dns-58vsq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc] May 11 17:33:22.466: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:22.477: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods 
dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:22.507: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:22.509: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc from pod e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018: the server could not find the requested resource (get pods dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018) May 11 17:33:22.526: INFO: Lookups using e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018 failed for: [wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-58vsq.svc] May 11 17:33:27.723: INFO: DNS probes using e2e-tests-dns-58vsq/dns-test-6cdab9b2-93ad-11ea-b832-0242ac110018 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:33:28.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-58vsq" for this suite. 
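The dig-based probe loop recorded earlier in this test builds two kinds of DNS names before querying them: a dashed pod A record from the pod IP, and a reverse `in-addr.arpa.` name from the service cluster IP (10.98.226.8 in this run). A minimal local sketch of just that name construction, using the same awk idiom as the probe — the pod IP here is a placeholder, since the real probe uses `hostname -i`:

```shell
# Sketch only: reproduces the name construction from the probe loop above.
# pod_ip is hypothetical; the real probe derives it from `hostname -i`.

# Pod A record: dashed pod IP + namespace + pod.cluster.local
pod_ip="10.244.1.7"
podARec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".e2e-tests-dns-58vsq.pod.cluster.local"}')
echo "$podARec"

# Reverse PTR name for the service cluster IP (octets reversed)
svc_ip="10.98.226.8"
ptr=$(echo "$svc_ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')
echo "$ptr"
```

The probe then runs `dig +notcp +noall +answer +search` (UDP) and the `+tcp` variant against each name, writing an OK marker under `/results/` only when the answer section is non-empty — which is why the failed lookups above show up as missing marker files rather than errors.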
May 11 17:33:35.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:33:35.492: INFO: namespace: e2e-tests-dns-58vsq, resource: bindings, ignored listing per whitelist May 11 17:33:35.557: INFO: namespace e2e-tests-dns-58vsq deletion completed in 6.782422681s • [SLOW TEST:50.447 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:33:35.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 11 17:33:35.721: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:35.723: INFO: Number of nodes with available pods: 0 May 11 17:33:35.723: INFO: Node hunter-worker is running more than one daemon pod May 11 17:33:36.727: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:36.730: INFO: Number of nodes with available pods: 0 May 11 17:33:36.730: INFO: Node hunter-worker is running more than one daemon pod May 11 17:33:37.753: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:37.755: INFO: Number of nodes with available pods: 0 May 11 17:33:37.755: INFO: Node hunter-worker is running more than one daemon pod May 11 17:33:38.777: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:38.780: INFO: Number of nodes with available pods: 0 May 11 17:33:38.780: INFO: Node hunter-worker is running more than one daemon pod May 11 17:33:39.783: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:39.785: INFO: Number of nodes with available pods: 0 May 11 17:33:39.785: INFO: Node hunter-worker is running more than one daemon pod May 11 17:33:40.727: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:40.731: INFO: Number of nodes with available pods: 0 May 11 17:33:40.731: 
INFO: Node hunter-worker is running more than one daemon pod May 11 17:33:41.862: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:41.867: INFO: Number of nodes with available pods: 1 May 11 17:33:41.867: INFO: Node hunter-worker2 is running more than one daemon pod May 11 17:33:44.425: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:44.683: INFO: Number of nodes with available pods: 2 May 11 17:33:44.683: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 11 17:33:44.780: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:45.053: INFO: Number of nodes with available pods: 1 May 11 17:33:45.053: INFO: Node hunter-worker2 is running more than one daemon pod May 11 17:33:46.058: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:46.061: INFO: Number of nodes with available pods: 1 May 11 17:33:46.062: INFO: Node hunter-worker2 is running more than one daemon pod May 11 17:33:48.286: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:48.747: INFO: Number of nodes with available pods: 1 May 11 17:33:48.747: INFO: Node hunter-worker2 is running more than one daemon pod May 11 17:33:49.316: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:49.319: INFO: Number of nodes with available pods: 1 May 11 17:33:49.319: INFO: Node hunter-worker2 is running more than one daemon pod May 11 17:33:50.057: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:50.061: INFO: Number of nodes with available pods: 1 May 11 17:33:50.061: INFO: Node hunter-worker2 is running more than one daemon pod May 11 17:33:51.088: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:51.090: INFO: Number of nodes with available pods: 1 May 11 17:33:51.090: INFO: Node hunter-worker2 is running more than one daemon pod May 11 17:33:52.430: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:53.060: INFO: Number of nodes with available pods: 1 May 11 17:33:53.060: INFO: Node hunter-worker2 is running more than one daemon pod May 11 17:33:54.057: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:54.060: INFO: Number of nodes with available pods: 1 May 11 17:33:54.060: INFO: Node hunter-worker2 is running more than one daemon pod May 11 17:33:55.057: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:55.060: INFO: Number of nodes with available pods: 1 May 11 17:33:55.060: INFO: Node hunter-worker2 is running more than one daemon pod May 11 17:33:56.058: INFO: DaemonSet pods can't tolerate node 
hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:56.062: INFO: Number of nodes with available pods: 1 May 11 17:33:56.062: INFO: Node hunter-worker2 is running more than one daemon pod May 11 17:33:57.057: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:57.060: INFO: Number of nodes with available pods: 1 May 11 17:33:57.060: INFO: Node hunter-worker2 is running more than one daemon pod May 11 17:33:58.057: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:58.060: INFO: Number of nodes with available pods: 1 May 11 17:33:58.061: INFO: Node hunter-worker2 is running more than one daemon pod May 11 17:33:59.058: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:33:59.061: INFO: Number of nodes with available pods: 1 May 11 17:33:59.061: INFO: Node hunter-worker2 is running more than one daemon pod May 11 17:34:00.375: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 17:34:00.580: INFO: Number of nodes with available pods: 2 May 11 17:34:00.580: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-cpgjw, will wait for the garbage collector to delete the pods May 11 17:34:00.642: 
INFO: Deleting DaemonSet.extensions daemon-set took: 6.332063ms May 11 17:34:00.942: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.232475ms May 11 17:34:11.926: INFO: Number of nodes with available pods: 0 May 11 17:34:11.926: INFO: Number of running nodes: 0, number of available pods: 0 May 11 17:34:11.928: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-cpgjw/daemonsets","resourceVersion":"9987377"},"items":null} May 11 17:34:11.930: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-cpgjw/pods","resourceVersion":"9987377"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:34:11.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-cpgjw" for this suite. 
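The repeated "DaemonSet pods can't tolerate node hunter-control-plane" lines explain why the DaemonSet lands on only the two worker nodes: the control-plane node carries a `node-role.kubernetes.io/master:NoSchedule` taint and the test's pod template has no matching toleration. As a hedged illustration (not part of this test), this is the pod-template fragment a DaemonSet would need to also schedule onto that tainted node:

```shell
# Config fragment (printed only, not applied): the toleration a DaemonSet
# pod template would need to land on the tainted control-plane node above.
toleration='
tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
'
echo "$toleration"
```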
May 11 17:34:18.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:34:18.289: INFO: namespace: e2e-tests-daemonsets-cpgjw, resource: bindings, ignored listing per whitelist May 11 17:34:18.294: INFO: namespace e2e-tests-daemonsets-cpgjw deletion completed in 6.353716638s • [SLOW TEST:42.737 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:34:18.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 11 17:34:18.408: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 5.588022ms) May 11 17:34:18.411: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.667859ms) May 11 17:34:18.414: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.921881ms) May 11 17:34:18.417: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.577902ms) May 11 17:34:18.419: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.706447ms) May 11 17:34:18.422: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.694927ms) May 11 17:34:18.448: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 25.377394ms) May 11 17:34:18.450: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.813429ms) May 11 17:34:18.454: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.250253ms) May 11 17:34:18.457: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.292201ms) May 11 17:34:18.460: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.978843ms) May 11 17:34:18.462: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.385893ms) May 11 17:34:18.465: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.088715ms) May 11 17:34:18.467: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.269229ms) May 11 17:34:18.469: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.291147ms) May 11 17:34:18.471: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.226005ms) May 11 17:34:18.474: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.765878ms) May 11 17:34:18.477: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.890208ms) May 11 17:34:18.480: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.652549ms) May 11 17:34:18.483: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.962272ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:34:18.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-592d7" for this suite. May 11 17:34:24.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:34:24.521: INFO: namespace: e2e-tests-proxy-592d7, resource: bindings, ignored listing per whitelist May 11 17:34:24.618: INFO: namespace e2e-tests-proxy-592d7 deletion completed in 6.131854975s • [SLOW TEST:6.323 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:34:24.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-a8223e56-93ad-11ea-b832-0242ac110018 STEP: Creating configMap with name 
cm-test-opt-upd-a8223ebb-93ad-11ea-b832-0242ac110018
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a8223e56-93ad-11ea-b832-0242ac110018
STEP: Updating configmap cm-test-opt-upd-a8223ebb-93ad-11ea-b832-0242ac110018
STEP: Creating configMap with name cm-test-opt-create-a8223ee1-93ad-11ea-b832-0242ac110018
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:36:02.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-zn97f" for this suite.
May 11 17:36:26.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:36:26.267: INFO: namespace: e2e-tests-configmap-zn97f, resource: bindings, ignored listing per whitelist
May 11 17:36:26.297: INFO: namespace e2e-tests-configmap-zn97f deletion completed in 24.232719628s
• [SLOW TEST:121.679 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:36:26.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in
namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:37:16.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-5jhbr" for this suite.
May 11 17:37:22.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:37:22.502: INFO: namespace: e2e-tests-container-runtime-5jhbr, resource: bindings, ignored listing per whitelist
May 11 17:37:22.512: INFO: namespace e2e-tests-container-runtime-5jhbr deletion completed in 6.075430796s
• [SLOW TEST:56.215 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:37:22.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
May 11 17:37:22.601: INFO: namespace e2e-tests-kubectl-mphw6
May 11 17:37:22.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f -
--namespace=e2e-tests-kubectl-mphw6'
May 11 17:37:22.871: INFO: stderr: ""
May 11 17:37:22.871: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
May 11 17:37:23.874: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:37:23.874: INFO: Found 0 / 1
May 11 17:37:25.241: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:37:25.241: INFO: Found 0 / 1
May 11 17:37:25.876: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:37:25.876: INFO: Found 0 / 1
May 11 17:37:26.875: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:37:26.876: INFO: Found 0 / 1
May 11 17:37:27.875: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:37:27.875: INFO: Found 0 / 1
May 11 17:37:29.014: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:37:29.014: INFO: Found 1 / 1
May 11 17:37:29.014: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 11 17:37:29.018: INFO: Selector matched 1 pods for map[app:redis]
May 11 17:37:29.018: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 11 17:37:29.018: INFO: wait on redis-master startup in e2e-tests-kubectl-mphw6
May 11 17:37:29.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qjlcz redis-master --namespace=e2e-tests-kubectl-mphw6'
May 11 17:37:34.599: INFO: stderr: ""
May 11 17:37:34.599: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 May 17:37:27.515 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 May 17:37:27.515 # Server started, Redis version 3.2.12\n1:M 11 May 17:37:27.515 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 May 17:37:27.515 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 11 17:37:34.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-mphw6' May 11 17:37:34.762: INFO: stderr: "" May 11 17:37:34.762: INFO: stdout: "service/rm2 exposed\n" May 11 17:37:34.775: INFO: Service rm2 in namespace e2e-tests-kubectl-mphw6 found. STEP: exposing service May 11 17:37:36.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-mphw6' May 11 17:37:36.945: INFO: stderr: "" May 11 17:37:36.945: INFO: stdout: "service/rm3 exposed\n" May 11 17:37:37.002: INFO: Service rm3 in namespace e2e-tests-kubectl-mphw6 found. 
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:37:39.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mphw6" for this suite.
May 11 17:38:03.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:38:03.307: INFO: namespace: e2e-tests-kubectl-mphw6, resource: bindings, ignored listing per whitelist
May 11 17:38:03.348: INFO: namespace e2e-tests-kubectl-mphw6 deletion completed in 24.274738788s
• [SLOW TEST:40.837 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:38:03.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly
deletes the pods
STEP: Gathering metrics
W0511 17:38:47.164991 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 17:38:47.165: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:38:47.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-kd86k" for this suite.
May 11 17:39:01.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:39:01.416: INFO: namespace: e2e-tests-gc-kd86k, resource: bindings, ignored listing per whitelist
May 11 17:39:01.440: INFO: namespace e2e-tests-gc-kd86k deletion completed in 14.120009788s
• [SLOW TEST:58.092 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:39:01.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 11 17:39:01.693: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d25161e-93ae-11ea-b832-0242ac110018" in namespace "e2e-tests-downward-api-7ncz9" to be "success or failure"
May 11 17:39:01.701: INFO: Pod "downwardapi-volume-4d25161e-93ae-11ea-b832-0242ac110018": Phase="Pending", Reason="",
readiness=false. Elapsed: 8.429039ms
May 11 17:39:03.707: INFO: Pod "downwardapi-volume-4d25161e-93ae-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01428577s
May 11 17:39:05.711: INFO: Pod "downwardapi-volume-4d25161e-93ae-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.018264081s
May 11 17:39:07.714: INFO: Pod "downwardapi-volume-4d25161e-93ae-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020995464s
STEP: Saw pod success
May 11 17:39:07.714: INFO: Pod "downwardapi-volume-4d25161e-93ae-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 17:39:07.715: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4d25161e-93ae-11ea-b832-0242ac110018 container client-container: 
STEP: delete the pod
May 11 17:39:07.811: INFO: Waiting for pod downwardapi-volume-4d25161e-93ae-11ea-b832-0242ac110018 to disappear
May 11 17:39:07.852: INFO: Pod downwardapi-volume-4d25161e-93ae-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:39:07.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7ncz9" for this suite.
May 11 17:39:14.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:39:14.217: INFO: namespace: e2e-tests-downward-api-7ncz9, resource: bindings, ignored listing per whitelist
May 11 17:39:14.238: INFO: namespace e2e-tests-downward-api-7ncz9 deletion completed in 6.383257254s
• [SLOW TEST:12.797 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:39:14.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 11 17:39:14.572: INFO: PodSpec: initContainers in spec.initContainers
May 11 17:40:09.140: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-54df1e4d-93ae-11ea-b832-0242ac110018",
GenerateName:"", Namespace:"e2e-tests-init-container-7mjr9", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-7mjr9/pods/pod-init-54df1e4d-93ae-11ea-b832-0242ac110018", UID:"54e29d46-93ae-11ea-99e8-0242ac110002", ResourceVersion:"9988471", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724815554, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"572248037"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-z4m95", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002121800), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z4m95", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z4m95", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-z4m95", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f7eeb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00170f980), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc001f7ef70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f7f000)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001f7f008), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001f7f00c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815554, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815554, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815554, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815554, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.34", StartTime:(*v1.Time)(0xc000ee6a20), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00162aa80)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00162aaf0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://ae538461535d16a3b20448be837363a52ae5e88135afa65513672a80c6ed226f"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000ee6a60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000ee6a40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:40:09.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-7mjr9" for this suite. 
May 11 17:40:33.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:40:33.398: INFO: namespace: e2e-tests-init-container-7mjr9, resource: bindings, ignored listing per whitelist
May 11 17:40:33.420: INFO: namespace e2e-tests-init-container-7mjr9 deletion completed in 24.122733684s
• [SLOW TEST:79.182 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:40:33.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
May 11 17:40:35.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-5fr4n run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin --
sh -c cat && echo 'stdin closed'' May 11 17:40:44.094: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0511 17:40:43.944039 579 log.go:172] (0xc0001389a0) (0xc000807400) Create stream\nI0511 17:40:43.944096 579 log.go:172] (0xc0001389a0) (0xc000807400) Stream added, broadcasting: 1\nI0511 17:40:43.946943 579 log.go:172] (0xc0001389a0) Reply frame received for 1\nI0511 17:40:43.946987 579 log.go:172] (0xc0001389a0) (0xc000807540) Create stream\nI0511 17:40:43.947006 579 log.go:172] (0xc0001389a0) (0xc000807540) Stream added, broadcasting: 3\nI0511 17:40:43.947745 579 log.go:172] (0xc0001389a0) Reply frame received for 3\nI0511 17:40:43.947771 579 log.go:172] (0xc0001389a0) (0xc000624280) Create stream\nI0511 17:40:43.947777 579 log.go:172] (0xc0001389a0) (0xc000624280) Stream added, broadcasting: 5\nI0511 17:40:43.948434 579 log.go:172] (0xc0001389a0) Reply frame received for 5\nI0511 17:40:43.948476 579 log.go:172] (0xc0001389a0) (0xc00092e000) Create stream\nI0511 17:40:43.948506 579 log.go:172] (0xc0001389a0) (0xc00092e000) Stream added, broadcasting: 7\nI0511 17:40:43.949069 579 log.go:172] (0xc0001389a0) Reply frame received for 7\nI0511 17:40:43.949326 579 log.go:172] (0xc000807540) (3) Writing data frame\nI0511 17:40:43.949426 579 log.go:172] (0xc000807540) (3) Writing data frame\nI0511 17:40:43.950077 579 log.go:172] (0xc0001389a0) Data frame received for 5\nI0511 17:40:43.950086 579 log.go:172] (0xc000624280) (5) Data frame handling\nI0511 17:40:43.950096 579 log.go:172] (0xc000624280) (5) Data frame sent\nI0511 17:40:43.950628 579 log.go:172] (0xc0001389a0) Data frame received for 5\nI0511 17:40:43.950642 579 log.go:172] (0xc000624280) (5) Data frame handling\nI0511 17:40:43.950652 579 log.go:172] (0xc000624280) (5) Data frame sent\nI0511 17:40:44.000925 579 log.go:172] 
(0xc0001389a0) Data frame received for 5\nI0511 17:40:44.000996 579 log.go:172] (0xc000624280) (5) Data frame handling\nI0511 17:40:44.001033 579 log.go:172] (0xc0001389a0) Data frame received for 7\nI0511 17:40:44.001046 579 log.go:172] (0xc00092e000) (7) Data frame handling\nI0511 17:40:44.001584 579 log.go:172] (0xc0001389a0) Data frame received for 1\nI0511 17:40:44.001637 579 log.go:172] (0xc000807400) (1) Data frame handling\nI0511 17:40:44.001692 579 log.go:172] (0xc000807400) (1) Data frame sent\nI0511 17:40:44.001738 579 log.go:172] (0xc0001389a0) (0xc000807400) Stream removed, broadcasting: 1\nI0511 17:40:44.001898 579 log.go:172] (0xc0001389a0) (0xc000807400) Stream removed, broadcasting: 1\nI0511 17:40:44.001928 579 log.go:172] (0xc0001389a0) (0xc000807540) Stream removed, broadcasting: 3\nI0511 17:40:44.001952 579 log.go:172] (0xc0001389a0) (0xc000624280) Stream removed, broadcasting: 5\nI0511 17:40:44.001998 579 log.go:172] (0xc0001389a0) Go away received\nI0511 17:40:44.002067 579 log.go:172] (0xc0001389a0) (0xc00092e000) Stream removed, broadcasting: 7\n" May 11 17:40:44.094: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:40:46.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5fr4n" for this suite. 
May 11 17:40:54.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:40:54.146: INFO: namespace: e2e-tests-kubectl-5fr4n, resource: bindings, ignored listing per whitelist May 11 17:40:54.197: INFO: namespace e2e-tests-kubectl-5fr4n deletion completed in 8.095728142s • [SLOW TEST:20.777 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:40:54.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server May 11 17:40:54.304: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:40:54.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-24p2t" for this suite. May 11 17:41:00.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:41:01.575: INFO: namespace: e2e-tests-kubectl-24p2t, resource: bindings, ignored listing per whitelist May 11 17:41:01.597: INFO: namespace e2e-tests-kubectl-24p2t deletion completed in 7.201247233s • [SLOW TEST:7.400 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:41:01.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-95761182-93ae-11ea-b832-0242ac110018 STEP: Creating a pod to test consume 
configMaps May 11 17:41:03.832: INFO: Waiting up to 5m0s for pod "pod-configmaps-959ec512-93ae-11ea-b832-0242ac110018" in namespace "e2e-tests-configmap-9wbtc" to be "success or failure" May 11 17:41:04.130: INFO: Pod "pod-configmaps-959ec512-93ae-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 297.898968ms May 11 17:41:06.133: INFO: Pod "pod-configmaps-959ec512-93ae-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300920144s May 11 17:41:08.147: INFO: Pod "pod-configmaps-959ec512-93ae-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315273039s May 11 17:41:10.382: INFO: Pod "pod-configmaps-959ec512-93ae-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.549807206s May 11 17:41:12.386: INFO: Pod "pod-configmaps-959ec512-93ae-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.553984471s STEP: Saw pod success May 11 17:41:12.386: INFO: Pod "pod-configmaps-959ec512-93ae-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 17:41:12.389: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-959ec512-93ae-11ea-b832-0242ac110018 container configmap-volume-test: STEP: delete the pod May 11 17:41:13.604: INFO: Waiting for pod pod-configmaps-959ec512-93ae-11ea-b832-0242ac110018 to disappear May 11 17:41:13.608: INFO: Pod pod-configmaps-959ec512-93ae-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:41:13.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-9wbtc" for this suite. 
May 11 17:41:24.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:41:24.433: INFO: namespace: e2e-tests-configmap-9wbtc, resource: bindings, ignored listing per whitelist May 11 17:41:24.525: INFO: namespace e2e-tests-configmap-9wbtc deletion completed in 10.597560689s • [SLOW TEST:22.928 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:41:24.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 11 17:41:24.928: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/ pods/ (200; 60.953406ms) May 11 17:41:24.930: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.686012ms) May 11 17:41:24.968: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 38.070773ms) May 11 17:41:24.971: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.367494ms) May 11 17:41:24.973: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 1.997668ms) May 11 17:41:24.974: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 1.676537ms) May 11 17:41:24.976: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 1.738691ms) May 11 17:41:24.978: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 1.782491ms) May 11 17:41:24.980: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 1.873716ms) May 11 17:41:24.982: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.247989ms) May 11 17:41:24.984: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 1.763738ms) May 11 17:41:24.986: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 1.837644ms) May 11 17:41:24.989: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.034122ms) May 11 17:41:24.991: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.291289ms) May 11 17:41:24.993: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.080839ms) May 11 17:41:24.995: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 1.75645ms) May 11 17:41:24.998: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.868197ms) May 11 17:41:25.010: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 12.055879ms) May 11 17:41:25.012: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.162575ms) May 11 17:41:25.014: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 1.803728ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:41:25.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-4dp9m" for this suite. May 11 17:41:31.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:41:31.266: INFO: namespace: e2e-tests-proxy-4dp9m, resource: bindings, ignored listing per whitelist May 11 17:41:31.309: INFO: namespace e2e-tests-proxy-4dp9m deletion completed in 6.292888855s • [SLOW TEST:6.784 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:41:31.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod 
pod-subpath-test-secret-dfdh STEP: Creating a pod to test atomic-volume-subpath May 11 17:41:31.976: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-dfdh" in namespace "e2e-tests-subpath-wwkz2" to be "success or failure" May 11 17:41:32.001: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Pending", Reason="", readiness=false. Elapsed: 25.214267ms May 11 17:41:34.005: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028911881s May 11 17:41:36.010: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033885983s May 11 17:41:38.430: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.454100278s May 11 17:41:41.244: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Pending", Reason="", readiness=false. Elapsed: 9.267894431s May 11 17:41:43.248: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Pending", Reason="", readiness=false. Elapsed: 11.271332926s May 11 17:41:46.164: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Pending", Reason="", readiness=false. Elapsed: 14.18787869s May 11 17:41:48.168: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Running", Reason="", readiness=false. Elapsed: 16.19169989s May 11 17:41:50.173: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Running", Reason="", readiness=false. Elapsed: 18.196345324s May 11 17:41:52.176: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Running", Reason="", readiness=false. Elapsed: 20.199836331s May 11 17:41:54.335: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Running", Reason="", readiness=false. Elapsed: 22.358669524s May 11 17:41:56.338: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Running", Reason="", readiness=false. Elapsed: 24.36218821s May 11 17:41:58.342: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.365719173s May 11 17:42:00.345: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Running", Reason="", readiness=false. Elapsed: 28.369266357s May 11 17:42:02.350: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Running", Reason="", readiness=false. Elapsed: 30.373655305s May 11 17:42:04.353: INFO: Pod "pod-subpath-test-secret-dfdh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.377112751s STEP: Saw pod success May 11 17:42:04.353: INFO: Pod "pod-subpath-test-secret-dfdh" satisfied condition "success or failure" May 11 17:42:04.356: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-dfdh container test-container-subpath-secret-dfdh: STEP: delete the pod May 11 17:42:05.048: INFO: Waiting for pod pod-subpath-test-secret-dfdh to disappear May 11 17:42:05.101: INFO: Pod pod-subpath-test-secret-dfdh no longer exists STEP: Deleting pod pod-subpath-test-secret-dfdh May 11 17:42:05.101: INFO: Deleting pod "pod-subpath-test-secret-dfdh" in namespace "e2e-tests-subpath-wwkz2" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:42:05.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-wwkz2" for this suite. 
May 11 17:42:18.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:42:18.197: INFO: namespace: e2e-tests-subpath-wwkz2, resource: bindings, ignored listing per whitelist May 11 17:42:18.254: INFO: namespace e2e-tests-subpath-wwkz2 deletion completed in 12.94894099s • [SLOW TEST:46.945 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:42:18.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 11 17:42:25.224: INFO: Successfully updated pod "annotationupdatec290dddd-93ae-11ea-b832-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:42:27.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nzjjp" for this suite. May 11 17:42:51.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:42:51.429: INFO: namespace: e2e-tests-projected-nzjjp, resource: bindings, ignored listing per whitelist May 11 17:42:51.522: INFO: namespace e2e-tests-projected-nzjjp deletion completed in 24.235647126s • [SLOW TEST:33.267 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:42:51.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image 
docker.io/library/nginx:1.14-alpine May 11 17:42:51.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-khf44' May 11 17:42:51.750: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 17:42:51.750: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 11 17:42:53.819: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-dfmcg] May 11 17:42:53.819: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-dfmcg" in namespace "e2e-tests-kubectl-khf44" to be "running and ready" May 11 17:42:53.821: INFO: Pod "e2e-test-nginx-rc-dfmcg": Phase="Pending", Reason="", readiness=false. Elapsed: 1.709752ms May 11 17:42:55.923: INFO: Pod "e2e-test-nginx-rc-dfmcg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103651216s May 11 17:42:58.155: INFO: Pod "e2e-test-nginx-rc-dfmcg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335542159s May 11 17:43:00.329: INFO: Pod "e2e-test-nginx-rc-dfmcg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.509673686s May 11 17:43:02.943: INFO: Pod "e2e-test-nginx-rc-dfmcg": Phase="Running", Reason="", readiness=true. Elapsed: 9.123983582s May 11 17:43:02.943: INFO: Pod "e2e-test-nginx-rc-dfmcg" satisfied condition "running and ready" May 11 17:43:02.943: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-dfmcg] May 11 17:43:02.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-khf44' May 11 17:43:03.450: INFO: stderr: "" May 11 17:43:03.450: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 May 11 17:43:03.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-khf44' May 11 17:43:04.225: INFO: stderr: "" May 11 17:43:04.225: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:43:04.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-khf44" for this suite. May 11 17:43:34.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:43:34.446: INFO: namespace: e2e-tests-kubectl-khf44, resource: bindings, ignored listing per whitelist May 11 17:43:34.450: INFO: namespace e2e-tests-kubectl-khf44 deletion completed in 29.910509229s • [SLOW TEST:42.928 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:43:34.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-k5hdk in namespace e2e-tests-proxy-fwsx8 I0511 17:43:36.182130 6 runners.go:184] Created replication controller with name: proxy-service-k5hdk, namespace: e2e-tests-proxy-fwsx8, replica count: 1 I0511 17:43:37.232476 6 runners.go:184] proxy-service-k5hdk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 17:43:38.232714 6 runners.go:184] proxy-service-k5hdk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 17:43:39.232941 6 runners.go:184] proxy-service-k5hdk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 17:43:40.233429 6 runners.go:184] proxy-service-k5hdk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 17:43:41.233621 6 runners.go:184] proxy-service-k5hdk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 17:43:42.233843 6 runners.go:184] proxy-service-k5hdk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 17:43:43.234022 6 runners.go:184] proxy-service-k5hdk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 1 runningButNotReady I0511 17:43:44.234192 6 runners.go:184] proxy-service-k5hdk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 17:43:45.234404 6 runners.go:184] proxy-service-k5hdk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 17:43:46.234583 6 runners.go:184] proxy-service-k5hdk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 17:43:47.234779 6 runners.go:184] proxy-service-k5hdk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 17:43:48.234967 6 runners.go:184] proxy-service-k5hdk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 17:43:49.235160 6 runners.go:184] proxy-service-k5hdk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 17:43:50.235384 6 runners.go:184] proxy-service-k5hdk Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 17:43:50.449: INFO: setup took 14.901570498s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 11 17:43:50.454: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-fwsx8/pods/proxy-service-k5hdk-vlqxv/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 11 17:44:06.081: INFO: Waiting up to 5m0s for pod "downwardapi-volume-029c0efc-93af-11ea-b832-0242ac110018" in namespace "e2e-tests-downward-api-4h4xx" to be "success or failure" May 11 17:44:06.611: INFO: Pod "downwardapi-volume-029c0efc-93af-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 529.386025ms May 11 17:44:08.761: INFO: Pod "downwardapi-volume-029c0efc-93af-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.679529467s May 11 17:44:11.050: INFO: Pod "downwardapi-volume-029c0efc-93af-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.968993079s May 11 17:44:13.054: INFO: Pod "downwardapi-volume-029c0efc-93af-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.972873777s May 11 17:44:15.057: INFO: Pod "downwardapi-volume-029c0efc-93af-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 8.975489555s May 11 17:44:17.420: INFO: Pod "downwardapi-volume-029c0efc-93af-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.338211718s STEP: Saw pod success May 11 17:44:17.420: INFO: Pod "downwardapi-volume-029c0efc-93af-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 17:44:17.707: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-029c0efc-93af-11ea-b832-0242ac110018 container client-container: STEP: delete the pod May 11 17:44:18.222: INFO: Waiting for pod downwardapi-volume-029c0efc-93af-11ea-b832-0242ac110018 to disappear May 11 17:44:18.845: INFO: Pod downwardapi-volume-029c0efc-93af-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:44:18.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4h4xx" for this suite. May 11 17:44:28.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:44:29.004: INFO: namespace: e2e-tests-downward-api-4h4xx, resource: bindings, ignored listing per whitelist May 11 17:44:29.059: INFO: namespace e2e-tests-downward-api-4h4xx deletion completed in 10.211160654s • [SLOW TEST:23.562 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 
17:44:29.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 11 17:44:29.588: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 11 17:44:29.655: INFO: Pod name sample-pod: Found 0 pods out of 1 May 11 17:44:34.658: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 17:44:34.658: INFO: Creating deployment "test-rolling-update-deployment" May 11 17:44:34.662: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 11 17:44:34.932: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 11 17:44:36.939: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 11 17:44:37.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815875, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815875, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815875, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815874, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:44:39.723: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815875, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815875, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815875, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815874, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:44:42.031: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815875, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815875, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724815875, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724815874, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:44:44.011: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 11 17:44:44.197: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-r2gp4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r2gp4/deployments/test-rolling-update-deployment,UID:13a89caf-93af-11ea-99e8-0242ac110002,ResourceVersion:9989271,Generation:1,CreationTimestamp:2020-05-11 17:44:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-11 17:44:35 +0000 UTC 2020-05-11 17:44:35 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-11 17:44:43 +0000 UTC 2020-05-11 17:44:34 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 11 17:44:44.200: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-r2gp4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r2gp4/replicasets/test-rolling-update-deployment-75db98fb4c,UID:13d409af-93af-11ea-99e8-0242ac110002,ResourceVersion:9989260,Generation:1,CreationTimestamp:2020-05-11 17:44:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 13a89caf-93af-11ea-99e8-0242ac110002 0xc00233d7c7 0xc00233d7c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 11 17:44:44.200: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 11 17:44:44.200: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-r2gp4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r2gp4/replicasets/test-rolling-update-controller,UID:10a303d7-93af-11ea-99e8-0242ac110002,ResourceVersion:9989270,Generation:2,CreationTimestamp:2020-05-11 17:44:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 13a89caf-93af-11ea-99e8-0242ac110002 0xc00233d707 0xc00233d708}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 17:44:44.203: INFO: Pod "test-rolling-update-deployment-75db98fb4c-svkgt" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-svkgt,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-r2gp4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r2gp4/pods/test-rolling-update-deployment-75db98fb4c-svkgt,UID:13d691c5-93af-11ea-99e8-0242ac110002,ResourceVersion:9989259,Generation:0,CreationTimestamp:2020-05-11 17:44:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 13d409af-93af-11ea-99e8-0242ac110002 0xc00221bb17 0xc00221bb18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bdb9d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bdb9d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-bdb9d true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00221bb90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00221bbb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:44:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:44:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:44:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:44:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.32,StartTime:2020-05-11 17:44:35 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-11 17:44:41 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://b9a0715b56b55540139380f1efb08a071016eac1a2a41ee8eea18d42fcc7f6a7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:44:44.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-r2gp4" 
for this suite. May 11 17:44:56.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:44:56.308: INFO: namespace: e2e-tests-deployment-r2gp4, resource: bindings, ignored listing per whitelist May 11 17:44:56.346: INFO: namespace e2e-tests-deployment-r2gp4 deletion completed in 12.140276825s • [SLOW TEST:27.287 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:44:56.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 11 17:44:57.010: INFO: Waiting up to 5m0s for pod "var-expansion-20e52e0b-93af-11ea-b832-0242ac110018" in namespace "e2e-tests-var-expansion-52jmc" to be "success or failure" May 11 17:44:57.076: INFO: Pod "var-expansion-20e52e0b-93af-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 66.016582ms May 11 17:44:59.080: INFO: Pod "var-expansion-20e52e0b-93af-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070341617s May 11 17:45:01.336: INFO: Pod "var-expansion-20e52e0b-93af-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.326268926s May 11 17:45:03.339: INFO: Pod "var-expansion-20e52e0b-93af-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.329663296s STEP: Saw pod success May 11 17:45:03.339: INFO: Pod "var-expansion-20e52e0b-93af-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 17:45:03.343: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-20e52e0b-93af-11ea-b832-0242ac110018 container dapi-container: STEP: delete the pod May 11 17:45:03.493: INFO: Waiting for pod var-expansion-20e52e0b-93af-11ea-b832-0242ac110018 to disappear May 11 17:45:03.518: INFO: Pod var-expansion-20e52e0b-93af-11ea-b832-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:45:03.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-52jmc" for this suite. 
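For context on the Variable Expansion test that just completed: the pod it creates generally follows the shape below, where a `$(VAR)` reference in the container's command/args is substituted by Kubernetes itself (not by a shell) from the container's declared environment. All names, the image, and the variable value here are illustrative, not taken from this log; only the container name `dapi-container` matches the log output.

```yaml
# Illustrative sketch only: a pod whose container args reference an env var
# via $(VAR). The kubelet expands the reference before starting the container,
# which is what the var-expansion test asserts by reading the container's logs.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container          # matches the container name in the log above
    image: busybox                # assumed; the log does not show the image
    command: ["/bin/sh", "-c", "echo $(MY_VAR)"]
    env:
    - name: MY_VAR                # hypothetical variable
      value: "hello from substitution"
```

The key distinction the test exercises is that `$(MY_VAR)` in `command`/`args` is resolved by the Kubernetes API machinery from the `env` list, so substitution works even for images with no shell.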
May 11 17:45:11.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:45:11.808: INFO: namespace: e2e-tests-var-expansion-52jmc, resource: bindings, ignored listing per whitelist May 11 17:45:11.815: INFO: namespace e2e-tests-var-expansion-52jmc deletion completed in 8.293363721s • [SLOW TEST:15.469 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:45:11.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 11 17:45:12.701: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2a5499ae-93af-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-mzgxn" to be "success or failure" May 11 17:45:12.882: INFO: Pod "downwardapi-volume-2a5499ae-93af-11ea-b832-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 180.443423ms May 11 17:45:14.886: INFO: Pod "downwardapi-volume-2a5499ae-93af-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18448929s May 11 17:45:16.889: INFO: Pod "downwardapi-volume-2a5499ae-93af-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188160316s May 11 17:45:18.893: INFO: Pod "downwardapi-volume-2a5499ae-93af-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.1922s STEP: Saw pod success May 11 17:45:18.894: INFO: Pod "downwardapi-volume-2a5499ae-93af-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 17:45:18.896: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-2a5499ae-93af-11ea-b832-0242ac110018 container client-container: STEP: delete the pod May 11 17:45:18.915: INFO: Waiting for pod downwardapi-volume-2a5499ae-93af-11ea-b832-0242ac110018 to disappear May 11 17:45:18.925: INFO: Pod downwardapi-volume-2a5499ae-93af-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:45:18.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mzgxn" for this suite. 
May 11 17:45:26.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:45:27.045: INFO: namespace: e2e-tests-projected-mzgxn, resource: bindings, ignored listing per whitelist May 11 17:45:27.048: INFO: namespace e2e-tests-projected-mzgxn deletion completed in 8.11989389s • [SLOW TEST:15.232 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:45:27.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions May 11 17:45:27.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 11 17:45:27.471: INFO: stderr: "" May 11 17:45:27.471: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:45:27.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nxgp4" for this suite. May 11 17:45:33.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:45:33.734: INFO: namespace: e2e-tests-kubectl-nxgp4, resource: bindings, ignored listing per whitelist May 11 17:45:33.768: INFO: namespace e2e-tests-kubectl-nxgp4 deletion completed in 6.294147451s • [SLOW TEST:6.720 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:45:33.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0511 17:45:35.720744 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 17:45:35.720: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For 
errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:45:35.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-8qm74" for this suite. May 11 17:45:43.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:45:43.768: INFO: namespace: e2e-tests-gc-8qm74, resource: bindings, ignored listing per whitelist May 11 17:45:43.805: INFO: namespace e2e-tests-gc-8qm74 deletion completed in 8.082172382s • [SLOW TEST:10.037 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:45:43.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 11 17:45:50.441: 
INFO: Successfully updated pod "labelsupdate3ce9d931-93af-11ea-b832-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:45:52.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-scwxc" for this suite. May 11 17:46:16.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:46:16.977: INFO: namespace: e2e-tests-downward-api-scwxc, resource: bindings, ignored listing per whitelist May 11 17:46:17.003: INFO: namespace e2e-tests-downward-api-scwxc deletion completed in 24.155962173s • [SLOW TEST:33.197 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:46:17.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-50bc72f7-93af-11ea-b832-0242ac110018 STEP: Creating secret with name 
s-test-opt-upd-50bc735a-93af-11ea-b832-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-50bc72f7-93af-11ea-b832-0242ac110018 STEP: Updating secret s-test-opt-upd-50bc735a-93af-11ea-b832-0242ac110018 STEP: Creating secret with name s-test-opt-create-50bc7394-93af-11ea-b832-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:46:29.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fg6rz" for this suite. May 11 17:46:53.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:46:53.544: INFO: namespace: e2e-tests-projected-fg6rz, resource: bindings, ignored listing per whitelist May 11 17:46:53.574: INFO: namespace e2e-tests-projected-fg6rz deletion completed in 24.222316045s • [SLOW TEST:36.571 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:46:53.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable 
via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-4hght/secret-test-669e52eb-93af-11ea-b832-0242ac110018 STEP: Creating a pod to test consume secrets May 11 17:46:53.916: INFO: Waiting up to 5m0s for pod "pod-configmaps-66a8dec7-93af-11ea-b832-0242ac110018" in namespace "e2e-tests-secrets-4hght" to be "success or failure" May 11 17:46:53.975: INFO: Pod "pod-configmaps-66a8dec7-93af-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 58.594151ms May 11 17:46:55.978: INFO: Pod "pod-configmaps-66a8dec7-93af-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062328921s May 11 17:46:58.338: INFO: Pod "pod-configmaps-66a8dec7-93af-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.422189785s May 11 17:47:00.341: INFO: Pod "pod-configmaps-66a8dec7-93af-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425218263s May 11 17:47:02.345: INFO: Pod "pod-configmaps-66a8dec7-93af-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.429401841s STEP: Saw pod success May 11 17:47:02.345: INFO: Pod "pod-configmaps-66a8dec7-93af-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 17:47:02.349: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-66a8dec7-93af-11ea-b832-0242ac110018 container env-test: STEP: delete the pod May 11 17:47:02.692: INFO: Waiting for pod pod-configmaps-66a8dec7-93af-11ea-b832-0242ac110018 to disappear May 11 17:47:02.724: INFO: Pod pod-configmaps-66a8dec7-93af-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:47:02.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-4hght" for this suite. May 11 17:47:10.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:47:10.835: INFO: namespace: e2e-tests-secrets-4hght, resource: bindings, ignored listing per whitelist May 11 17:47:10.844: INFO: namespace e2e-tests-secrets-4hght deletion completed in 8.116941664s • [SLOW TEST:17.270 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:47:10.844: INFO: >>> kubeConfig: /root/.kube/config 
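The Secrets test recorded above ("should be consumable via the environment") creates a secret and a pod whose container reads it through environment variables. A minimal manifest pair of that general shape might look like the following; this is an illustrative sketch only — the secret data, image, and the pod name `pod-secrets-env` are assumptions, not values copied from the e2e suite:

```yaml
# Sketch: a Secret consumed via container environment variables,
# in the spirit of the "consumable via the environment" test above.
# Field values are illustrative assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
data:
  data-1: dmFsdWUtMQ==   # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]   # print the environment so the test can read it from logs
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```

The test then waits for the pod to reach "Succeeded" and checks the container logs for the injected value, as the Elapsed/Phase entries above show.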
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
May 11 17:47:12.160: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

May 11 17:47:12.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ffrkv'
May 11 17:47:14.384: INFO: stderr: ""
May 11 17:47:14.384: INFO: stdout: "service/redis-slave created\n"
May 11 17:47:14.384: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

May 11 17:47:14.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ffrkv'
May 11 17:47:15.416: INFO: stderr: ""
May 11 17:47:15.416: INFO: stdout: "service/redis-master created\n"
May 11 17:47:15.417: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 11 17:47:15.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ffrkv' May 11 17:47:15.950: INFO: stderr: "" May 11 17:47:15.950: INFO: stdout: "service/frontend created\n" May 11 17:47:15.950: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 11 17:47:15.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ffrkv' May 11 17:47:16.240: INFO: stderr: "" May 11 17:47:16.240: INFO: stdout: "deployment.extensions/frontend created\n" May 11 17:47:16.240: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 11 17:47:16.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ffrkv' May 11 17:47:18.115: INFO: stderr: "" May 11 17:47:18.115: INFO: stdout: "deployment.extensions/redis-master created\n" May 11 17:47:18.115: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: 
gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 11 17:47:18.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ffrkv' May 11 17:47:18.609: INFO: stderr: "" May 11 17:47:18.609: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app May 11 17:47:18.609: INFO: Waiting for all frontend pods to be Running. May 11 17:47:33.659: INFO: Waiting for frontend to serve content. May 11 17:47:33.677: INFO: Trying to add a new entry to the guestbook. May 11 17:47:33.694: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 11 17:47:33.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ffrkv' May 11 17:47:42.335: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 17:47:42.335: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 11 17:47:42.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ffrkv' May 11 17:47:42.851: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 11 17:47:42.851: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 11 17:47:42.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ffrkv' May 11 17:47:43.193: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 17:47:43.193: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 11 17:47:43.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ffrkv' May 11 17:47:43.349: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 17:47:43.349: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources May 11 17:47:43.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ffrkv' May 11 17:47:44.027: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 17:47:44.027: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 11 17:47:44.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ffrkv' May 11 17:47:44.922: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 11 17:47:44.922: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:47:44.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ffrkv" for this suite. May 11 17:48:43.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:48:43.783: INFO: namespace: e2e-tests-kubectl-ffrkv, resource: bindings, ignored listing per whitelist May 11 17:48:43.792: INFO: namespace e2e-tests-kubectl-ffrkv deletion completed in 58.780005134s • [SLOW TEST:92.947 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:48:43.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex 
daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 11 17:48:44.449: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 11 17:48:44.495: INFO: Number of nodes with available pods: 0 May 11 17:48:44.495: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 11 17:48:44.769: INFO: Number of nodes with available pods: 0 May 11 17:48:44.769: INFO: Node hunter-worker is running more than one daemon pod May 11 17:48:45.860: INFO: Number of nodes with available pods: 0 May 11 17:48:45.860: INFO: Node hunter-worker is running more than one daemon pod May 11 17:48:46.772: INFO: Number of nodes with available pods: 0 May 11 17:48:46.772: INFO: Node hunter-worker is running more than one daemon pod May 11 17:48:47.772: INFO: Number of nodes with available pods: 0 May 11 17:48:47.772: INFO: Node hunter-worker is running more than one daemon pod May 11 17:48:48.772: INFO: Number of nodes with available pods: 0 May 11 17:48:48.772: INFO: Node hunter-worker is running more than one daemon pod May 11 17:48:49.794: INFO: Number of nodes with available pods: 1 May 11 17:48:49.794: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 11 17:48:49.852: INFO: Number of nodes with available pods: 1 May 11 17:48:49.852: INFO: Number of running nodes: 0, number of available pods: 1 May 11 17:48:50.878: INFO: Number of nodes with available pods: 0 May 11 17:48:50.878: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 11 17:48:50.893: INFO: Number of nodes with available pods: 0 May 11 17:48:50.893: INFO: Node hunter-worker is running more than one daemon pod May 11 
17:48:51.897: INFO: Number of nodes with available pods: 0 May 11 17:48:51.897: INFO: Node hunter-worker is running more than one daemon pod May 11 17:48:53.112: INFO: Number of nodes with available pods: 0 May 11 17:48:53.112: INFO: Node hunter-worker is running more than one daemon pod May 11 17:48:53.897: INFO: Number of nodes with available pods: 0 May 11 17:48:53.897: INFO: Node hunter-worker is running more than one daemon pod May 11 17:48:54.919: INFO: Number of nodes with available pods: 0 May 11 17:48:54.919: INFO: Node hunter-worker is running more than one daemon pod May 11 17:48:55.897: INFO: Number of nodes with available pods: 0 May 11 17:48:55.897: INFO: Node hunter-worker is running more than one daemon pod May 11 17:48:56.896: INFO: Number of nodes with available pods: 0 May 11 17:48:56.896: INFO: Node hunter-worker is running more than one daemon pod May 11 17:48:57.897: INFO: Number of nodes with available pods: 0 May 11 17:48:57.897: INFO: Node hunter-worker is running more than one daemon pod May 11 17:48:58.896: INFO: Number of nodes with available pods: 0 May 11 17:48:58.896: INFO: Node hunter-worker is running more than one daemon pod May 11 17:48:59.898: INFO: Number of nodes with available pods: 0 May 11 17:48:59.898: INFO: Node hunter-worker is running more than one daemon pod May 11 17:49:01.064: INFO: Number of nodes with available pods: 0 May 11 17:49:01.064: INFO: Node hunter-worker is running more than one daemon pod May 11 17:49:01.897: INFO: Number of nodes with available pods: 0 May 11 17:49:01.897: INFO: Node hunter-worker is running more than one daemon pod May 11 17:49:02.896: INFO: Number of nodes with available pods: 0 May 11 17:49:02.896: INFO: Node hunter-worker is running more than one daemon pod May 11 17:49:03.898: INFO: Number of nodes with available pods: 0 May 11 17:49:03.898: INFO: Node hunter-worker is running more than one daemon pod May 11 17:49:04.897: INFO: Number of nodes with available pods: 0 May 11 
17:49:04.897: INFO: Node hunter-worker is running more than one daemon pod May 11 17:49:06.077: INFO: Number of nodes with available pods: 0 May 11 17:49:06.077: INFO: Node hunter-worker is running more than one daemon pod May 11 17:49:06.897: INFO: Number of nodes with available pods: 1 May 11 17:49:06.897: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-9q44c, will wait for the garbage collector to delete the pods May 11 17:49:07.132: INFO: Deleting DaemonSet.extensions daemon-set took: 176.65761ms May 11 17:49:07.532: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.206895ms May 11 17:49:21.543: INFO: Number of nodes with available pods: 0 May 11 17:49:21.543: INFO: Number of running nodes: 0, number of available pods: 0 May 11 17:49:21.549: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9q44c/daemonsets","resourceVersion":"9990232"},"items":null} May 11 17:49:21.551: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9q44c/pods","resourceVersion":"9990232"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:49:22.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-9q44c" for this suite. 
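The "run and stop complex daemon" test above schedules daemon pods by relabeling nodes blue/green against a DaemonSet node selector, then switches the update strategy to RollingUpdate. A DaemonSet of that general shape could be sketched as follows; the label key `color`, the image, and the container name are illustrative assumptions, not values taken from the log:

```yaml
# Sketch: a DaemonSet restricted to nodes carrying a color label, with a
# RollingUpdate strategy, in the spirit of the test above. Names and label
# values are assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green       # relabel a node green (or remove the label) to schedule/unschedule pods
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
```

Relabeling a node so it matches (or stops matching) `nodeSelector` is what drives the "Number of running nodes" transitions recorded in the entries above.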
May 11 17:49:30.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:49:30.827: INFO: namespace: e2e-tests-daemonsets-9q44c, resource: bindings, ignored listing per whitelist May 11 17:49:30.859: INFO: namespace e2e-tests-daemonsets-9q44c deletion completed in 8.634107436s • [SLOW TEST:47.067 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:49:30.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 11 17:49:31.495: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 11 17:49:36.705: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 17:49:38.807: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 11 17:49:40.914: INFO: Creating deployment "test-rollover-deployment" May 11 17:49:41.490: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 11 17:49:43.675: 
INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 11 17:49:43.993: INFO: Ensure that both replica sets have 1 created replica May 11 17:49:44.029: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 11 17:49:44.037: INFO: Updating deployment test-rollover-deployment May 11 17:49:44.037: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 11 17:49:46.299: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 11 17:49:46.331: INFO: Make sure deployment "test-rollover-deployment" is complete May 11 17:49:46.334: INFO: all replica sets need to contain the pod-template-hash label May 11 17:49:46.334: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816185, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:49:48.350: INFO: all replica sets need to contain the pod-template-hash label May 11 17:49:48.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816185, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:49:50.485: INFO: all replica sets need to contain the pod-template-hash label May 11 17:49:50.485: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816185, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:49:52.340: INFO: all replica sets need to contain the pod-template-hash label May 11 17:49:52.340: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816190, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:49:54.340: INFO: all replica sets need to contain the pod-template-hash label May 11 17:49:54.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816190, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:49:56.372: INFO: all 
replica sets need to contain the pod-template-hash label May 11 17:49:56.373: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816190, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:49:58.343: INFO: all replica sets need to contain the pod-template-hash label May 11 17:49:58.343: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816190, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:50:00.409: INFO: all replica sets need to contain the pod-template-hash label May 11 17:50:00.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816190, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816181, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:50:02.342: INFO: May 11 17:50:02.342: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 11 17:50:02.348: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-xlrnz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xlrnz/deployments/test-rollover-deployment,UID:ca339a0f-93af-11ea-99e8-0242ac110002,ResourceVersion:9990397,Generation:2,CreationTimestamp:2020-05-11 17:49:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-11 17:49:41 +0000 UTC 2020-05-11 17:49:41 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-11 17:50:01 +0000 UTC 2020-05-11 17:49:41 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 11 17:50:02.350: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-xlrnz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xlrnz/replicasets/test-rollover-deployment-5b8479fdb6,UID:cc102519-93af-11ea-99e8-0242ac110002,ResourceVersion:9990388,Generation:2,CreationTimestamp:2020-05-11 17:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ca339a0f-93af-11ea-99e8-0242ac110002 0xc001753827 0xc001753828}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 11 17:50:02.350: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 11 17:50:02.350: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-xlrnz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xlrnz/replicasets/test-rollover-controller,UID:c47af519-93af-11ea-99e8-0242ac110002,ResourceVersion:9990396,Generation:2,CreationTimestamp:2020-05-11 17:49:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ca339a0f-93af-11ea-99e8-0242ac110002 0xc001753377 0xc001753378}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 17:50:02.351: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-xlrnz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xlrnz/replicasets/test-rollover-deployment-58494b7559,UID:ca964a6a-93af-11ea-99e8-0242ac110002,ResourceVersion:9990339,Generation:2,CreationTimestamp:2020-05-11 17:49:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ca339a0f-93af-11ea-99e8-0242ac110002 0xc001753437 0xc001753438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 17:50:02.353: INFO: Pod "test-rollover-deployment-5b8479fdb6-xz89d" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-xz89d,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-xlrnz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xlrnz/pods/test-rollover-deployment-5b8479fdb6-xz89d,UID:ccba2d47-93af-11ea-99e8-0242ac110002,ResourceVersion:9990366,Generation:0,CreationTimestamp:2020-05-11 17:49:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 cc102519-93af-11ea-99e8-0242ac110002 0xc001cd48b7 0xc001cd48b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6rl8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6rl8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-6rl8t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cd4930} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cd4950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:49:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:49:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:49:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:49:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.41,StartTime:2020-05-11 17:49:45 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-11 17:49:49 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://15cd87a696c1cda79e152ddbec272ff971cbd6dadc9cb0b3c41762e88a50ec74}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:50:02.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-xlrnz" for this suite. May 11 17:50:10.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:50:10.525: INFO: namespace: e2e-tests-deployment-xlrnz, resource: bindings, ignored listing per whitelist May 11 17:50:10.775: INFO: namespace e2e-tests-deployment-xlrnz deletion completed in 8.4183396s • [SLOW TEST:39.916 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:50:10.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 11 17:50:11.126: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 17:50:11.132: INFO: 
Waiting for terminating namespaces to be deleted... May 11 17:50:11.135: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 11 17:50:11.142: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 11 17:50:11.142: INFO: Container kube-proxy ready: true, restart count 0 May 11 17:50:11.142: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 11 17:50:11.142: INFO: Container kindnet-cni ready: true, restart count 0 May 11 17:50:11.142: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 11 17:50:11.142: INFO: Container coredns ready: true, restart count 0 May 11 17:50:11.142: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 11 17:50:11.148: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 11 17:50:11.148: INFO: Container kindnet-cni ready: true, restart count 0 May 11 17:50:11.148: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 11 17:50:11.148: INFO: Container coredns ready: true, restart count 0 May 11 17:50:11.148: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 11 17:50:11.148: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-deae4f3e-93af-11ea-b832-0242ac110018 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-deae4f3e-93af-11ea-b832-0242ac110018 off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-deae4f3e-93af-11ea-b832-0242ac110018 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:50:19.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-qvr2w" for this suite. May 11 17:50:35.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:50:35.479: INFO: namespace: e2e-tests-sched-pred-qvr2w, resource: bindings, ignored listing per whitelist May 11 17:50:35.485: INFO: namespace e2e-tests-sched-pred-qvr2w deletion completed in 16.090030074s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:24.710 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:50:35.485: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-eac82001-93af-11ea-b832-0242ac110018 May 11 17:50:35.635: INFO: Pod name my-hostname-basic-eac82001-93af-11ea-b832-0242ac110018: Found 0 pods out of 1 May 11 17:50:40.695: INFO: Pod name my-hostname-basic-eac82001-93af-11ea-b832-0242ac110018: Found 1 pods out of 1 May 11 17:50:40.695: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-eac82001-93af-11ea-b832-0242ac110018" are running May 11 17:50:40.698: INFO: Pod "my-hostname-basic-eac82001-93af-11ea-b832-0242ac110018-96kft" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 17:50:35 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 17:50:39 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 17:50:39 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 17:50:35 +0000 UTC Reason: Message:}]) May 11 17:50:40.698: INFO: Trying to dial the pod May 11 17:50:45.744: INFO: Controller my-hostname-basic-eac82001-93af-11ea-b832-0242ac110018: Got expected result from replica 1 [my-hostname-basic-eac82001-93af-11ea-b832-0242ac110018-96kft]: "my-hostname-basic-eac82001-93af-11ea-b832-0242ac110018-96kft", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:50:45.744: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-fqkhl" for this suite. May 11 17:50:51.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:50:51.810: INFO: namespace: e2e-tests-replication-controller-fqkhl, resource: bindings, ignored listing per whitelist May 11 17:50:52.252: INFO: namespace e2e-tests-replication-controller-fqkhl deletion completed in 6.504902395s • [SLOW TEST:16.767 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:50:52.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 11 17:50:52.869: INFO: Creating deployment "nginx-deployment" May 11 17:50:52.875: INFO: Waiting for observed generation 1 May 11 17:50:54.979: INFO: Waiting for all required pods to come up May 11 17:50:54.984: INFO: Pod name nginx: Found 10 pods out of 10 STEP: 
ensuring each pod is running May 11 17:51:11.155: INFO: Waiting for deployment "nginx-deployment" to complete May 11 17:51:11.160: INFO: Updating deployment "nginx-deployment" with a non-existent image May 11 17:51:11.165: INFO: Updating deployment nginx-deployment May 11 17:51:11.165: INFO: Waiting for observed generation 2 May 11 17:51:13.362: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 11 17:51:13.365: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 11 17:51:13.367: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 11 17:51:13.376: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 11 17:51:13.376: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 11 17:51:13.378: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 11 17:51:13.381: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 11 17:51:13.381: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 11 17:51:13.386: INFO: Updating deployment nginx-deployment May 11 17:51:13.386: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 11 17:51:13.835: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 11 17:51:14.060: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 11 17:51:14.410: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xx7vz/deployments/nginx-deployment,UID:f516f0e2-93af-11ea-99e8-0242ac110002,ResourceVersion:9990828,Generation:3,CreationTimestamp:2020-05-11 17:50:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-11 17:51:11 +0000 UTC 2020-05-11 17:50:52 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-05-11 17:51:14 +0000 UTC 2020-05-11 17:51:14 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 11 17:51:15.931: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xx7vz/replicasets/nginx-deployment-5c98f8fb5,UID:fffec94f-93af-11ea-99e8-0242ac110002,ResourceVersion:9990851,Generation:3,CreationTimestamp:2020-05-11 17:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f516f0e2-93af-11ea-99e8-0242ac110002 0xc0023677c7 0xc0023677c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 17:51:15.931: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 11 17:51:15.931: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xx7vz/replicasets/nginx-deployment-85ddf47c5d,UID:f51db432-93af-11ea-99e8-0242ac110002,ResourceVersion:9990850,Generation:3,CreationTimestamp:2020-05-11 17:50:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f516f0e2-93af-11ea-99e8-0242ac110002 0xc0023678f7 0xc0023678f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 11 17:51:15.979: INFO: Pod "nginx-deployment-5c98f8fb5-2gcwk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2gcwk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-5c98f8fb5-2gcwk,UID:01c83597-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990845,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fffec94f-93af-11ea-99e8-0242ac110002 0xc00256a6f7 0xc00256a6f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256a770} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00256a790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.979: INFO: Pod "nginx-deployment-5c98f8fb5-5ph2b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5ph2b,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-5c98f8fb5-5ph2b,UID:01e42cd3-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990852,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fffec94f-93af-11ea-99e8-0242ac110002 0xc00256a807 0xc00256a808}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256a880} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256a8a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.979: INFO: Pod "nginx-deployment-5c98f8fb5-86kqp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-86kqp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-5c98f8fb5-86kqp,UID:01b87195-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990813,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fffec94f-93af-11ea-99e8-0242ac110002 0xc00256a917 0xc00256a918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256a990} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256a9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.979: INFO: Pod "nginx-deployment-5c98f8fb5-dmlfs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dmlfs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-5c98f8fb5-dmlfs,UID:00073f76-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990774,Generation:0,CreationTimestamp:2020-05-11 17:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fffec94f-93af-11ea-99e8-0242ac110002 0xc00256aa27 0xc00256aa28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256aaa0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00256aac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-11 17:51:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.979: INFO: Pod "nginx-deployment-5c98f8fb5-gqbcs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gqbcs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-5c98f8fb5-gqbcs,UID:01b911b7-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990822,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fffec94f-93af-11ea-99e8-0242ac110002 0xc00256ab80 0xc00256ab81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256ac00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256ac20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.979: INFO: Pod "nginx-deployment-5c98f8fb5-hf45k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hf45k,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-5c98f8fb5-hf45k,UID:01c832a6-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990847,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fffec94f-93af-11ea-99e8-0242ac110002 0xc00256ac97 0xc00256ac98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256ad10} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256ad30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.980: INFO: Pod "nginx-deployment-5c98f8fb5-hsnlw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hsnlw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-5c98f8fb5-hsnlw,UID:000752ce-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990789,Generation:0,CreationTimestamp:2020-05-11 17:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fffec94f-93af-11ea-99e8-0242ac110002 0xc00256ada7 0xc00256ada8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256ae20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256ae40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-11 17:51:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.980: INFO: Pod "nginx-deployment-5c98f8fb5-mn99n" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mn99n,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-5c98f8fb5-mn99n,UID:0025238e-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990793,Generation:0,CreationTimestamp:2020-05-11 17:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fffec94f-93af-11ea-99e8-0242ac110002 0xc00256af00 0xc00256af01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256af80} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00256afa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-11 17:51:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.980: INFO: Pod "nginx-deployment-5c98f8fb5-nvqrk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nvqrk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-5c98f8fb5-nvqrk,UID:01b90d9b-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990818,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fffec94f-93af-11ea-99e8-0242ac110002 0xc00256b070 0xc00256b071}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256b0f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256b110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.980: INFO: Pod "nginx-deployment-5c98f8fb5-v6b5s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-v6b5s,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-5c98f8fb5-v6b5s,UID:00039197-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990770,Generation:0,CreationTimestamp:2020-05-11 17:51:11 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fffec94f-93af-11ea-99e8-0242ac110002 0xc00256b187 0xc00256b188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256b200} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256b220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-11 17:51:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.980: INFO: Pod "nginx-deployment-5c98f8fb5-xhpkl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xhpkl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-5c98f8fb5-xhpkl,UID:002c0a67-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990795,Generation:0,CreationTimestamp:2020-05-11 17:51:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fffec94f-93af-11ea-99e8-0242ac110002 0xc00256b2e0 0xc00256b2e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256b360} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256b380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-11 17:51:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.980: INFO: Pod "nginx-deployment-5c98f8fb5-xvp95" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xvp95,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-5c98f8fb5-xvp95,UID:01c8210d-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990834,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fffec94f-93af-11ea-99e8-0242ac110002 0xc00256b440 0xc00256b441}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256b4c0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00256b4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.980: INFO: Pod "nginx-deployment-5c98f8fb5-z5h76" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z5h76,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-5c98f8fb5-z5h76,UID:01c83afe-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990840,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 fffec94f-93af-11ea-99e8-0242ac110002 0xc00256b557 0xc00256b558}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256b5d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256b5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.980: INFO: Pod "nginx-deployment-85ddf47c5d-2gnns" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2gnns,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-2gnns,UID:01e45173-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990855,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc00256b667 0xc00256b668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256b6e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256b700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.981: INFO: Pod "nginx-deployment-85ddf47c5d-2z9dp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2z9dp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-2z9dp,UID:0195b669-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990861,Generation:0,CreationTimestamp:2020-05-11 17:51:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc00256b777 0xc00256b778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00256b7f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256b810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-11 17:51:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.981: INFO: Pod "nginx-deployment-85ddf47c5d-944g5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-944g5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-944g5,UID:f569e89f-93af-11ea-99e8-0242ac110002,ResourceVersion:9990720,Generation:0,CreationTimestamp:2020-05-11 17:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc00256b8c7 0xc00256b8c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256b940} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256b960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.52,StartTime:2020-05-11 17:50:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 17:51:07 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine 
docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e2f316e3ec2e83d30fe61fe57e97fb2e787f2e6c94c128101b5891f137dc16cb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.981: INFO: Pod "nginx-deployment-85ddf47c5d-9qdjj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9qdjj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-9qdjj,UID:01c85de9-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990842,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc00256ba27 0xc00256ba28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256baa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256bac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.981: INFO: Pod "nginx-deployment-85ddf47c5d-hclfl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hclfl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-hclfl,UID:01e43f3e-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990857,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc00256bb37 0xc00256bb38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256bbb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256bbd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.981: INFO: Pod "nginx-deployment-85ddf47c5d-klzr8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-klzr8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-klzr8,UID:01e453cd-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990856,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc00256bc47 0xc00256bc48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00256bcc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256bce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.982: INFO: Pod "nginx-deployment-85ddf47c5d-kwzcn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kwzcn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-kwzcn,UID:f5390175-93af-11ea-99e8-0242ac110002,ResourceVersion:9990712,Generation:0,CreationTimestamp:2020-05-11 17:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc00256bd57 0xc00256bd58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256bdd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256bdf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.50,StartTime:2020-05-11 17:50:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 17:51:05 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8cdfa0bf81dc6c49a8a86e894be629f4017ab0535ecbbdb4993506419e9de076}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.982: INFO: Pod "nginx-deployment-85ddf47c5d-lb7c9" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lb7c9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-lb7c9,UID:f569e73a-93af-11ea-99e8-0242ac110002,ResourceVersion:9990721,Generation:0,CreationTimestamp:2020-05-11 17:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc00256beb7 0xc00256beb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00256bf30} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256bf50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.46,StartTime:2020-05-11 17:50:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 17:51:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5494c2a966dfac0e01cfb78bd0b19598d3397caec99b9511908807a16b483eed}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.982: INFO: Pod "nginx-deployment-85ddf47c5d-lr5fw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lr5fw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-lr5fw,UID:01e4099b-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990853,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc0017060c7 0xc0017060c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001706140} {node.kubernetes.io/unreachable Exists NoExecute 0xc001706160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.982: INFO: Pod "nginx-deployment-85ddf47c5d-nzkqp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nzkqp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-nzkqp,UID:01e441f5-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990854,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc0017061d7 0xc0017061d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001706250} {node.kubernetes.io/unreachable Exists NoExecute 0xc001706270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.982: INFO: Pod "nginx-deployment-85ddf47c5d-pbjxz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pbjxz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-pbjxz,UID:f569e3dc-93af-11ea-99e8-0242ac110002,ResourceVersion:9990713,Generation:0,CreationTimestamp:2020-05-11 17:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc0017062e7 0xc0017062e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001706360} {node.kubernetes.io/unreachable Exists NoExecute 0xc001706380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.45,StartTime:2020-05-11 17:50:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 17:51:07 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://013d3ffe4cfbd91f35ac0b3ef3544554e83b3998de5783de342a705056489915}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.982: INFO: Pod "nginx-deployment-85ddf47c5d-pw7vk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pw7vk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-pw7vk,UID:01b8b8ae-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990821,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc001706457 0xc001706458}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0017064d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017064f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.983: INFO: Pod "nginx-deployment-85ddf47c5d-qsvd4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qsvd4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-qsvd4,UID:f53e49a1-93af-11ea-99e8-0242ac110002,ResourceVersion:9990710,Generation:0,CreationTimestamp:2020-05-11 17:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc001706567 0xc001706568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017065e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001706600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.51,StartTime:2020-05-11 17:50:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 17:51:07 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://283c5780aedb1ec1956188bc21bc171029c9c541536cf995d0423baaf1e7d0ee}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.983: INFO: Pod "nginx-deployment-85ddf47c5d-s9x5t" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s9x5t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-s9x5t,UID:f59e5f26-93af-11ea-99e8-0242ac110002,ResourceVersion:9990739,Generation:0,CreationTimestamp:2020-05-11 17:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc0017066c7 0xc0017066c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001706740} {node.kubernetes.io/unreachable Exists NoExecute 0xc001706760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.48,StartTime:2020-05-11 17:50:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 17:51:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8d4761caf7f0715db99f562dfee7cfd1bf214a36c5517e7b71701791b8aa14d9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.983: INFO: Pod "nginx-deployment-85ddf47c5d-txhzj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-txhzj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-txhzj,UID:01c80a5e-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990836,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc001706827 0xc001706828}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017068b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017068d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.983: INFO: Pod "nginx-deployment-85ddf47c5d-vgff6" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vgff6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-vgff6,UID:f53e4781-93af-11ea-99e8-0242ac110002,ResourceVersion:9990707,Generation:0,CreationTimestamp:2020-05-11 17:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc001706947 0xc001706948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0017069c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017069e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.44,StartTime:2020-05-11 17:50:53 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 17:51:07 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://718f8b2b05693f19cbc852e3f06249676983ad567ce0bae605bbb4085f30b0c1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.983: INFO: Pod "nginx-deployment-85ddf47c5d-wb28z" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wb28z,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-wb28z,UID:f569e9a5-93af-11ea-99e8-0242ac110002,ResourceVersion:9990736,Generation:0,CreationTimestamp:2020-05-11 17:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc001706aa7 0xc001706aa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001706b20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001706b40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:50:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.53,StartTime:2020-05-11 17:50:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 17:51:09 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://57e28e6009e69b1e623eca08c12c3597f0549aaedcede22574ebd0689beb6882}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.983: INFO: Pod "nginx-deployment-85ddf47c5d-x4l64" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x4l64,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-x4l64,UID:01b89295-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990817,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc001706c07 0xc001706c08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001706c80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001706ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.983: INFO: Pod "nginx-deployment-85ddf47c5d-xcn6q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xcn6q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-xcn6q,UID:01c85213-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990846,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc001706d17 0xc001706d18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001706d90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001706db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 17:51:15.984: INFO: Pod "nginx-deployment-85ddf47c5d-zd99c" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zd99c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xx7vz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xx7vz/pods/nginx-deployment-85ddf47c5d-zd99c,UID:01c85097-93b0-11ea-99e8-0242ac110002,ResourceVersion:9990843,Generation:0,CreationTimestamp:2020-05-11 17:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f51db432-93af-11ea-99e8-0242ac110002 0xc001706e27 0xc001706e28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ljhjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ljhjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ljhjl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001706ea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001706ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:51:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:51:15.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-xx7vz" for this suite.
May 11 17:52:01.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:52:01.744: INFO: namespace: e2e-tests-deployment-xx7vz, resource: bindings, ignored listing per whitelist
May 11 17:52:01.749: INFO: namespace e2e-tests-deployment-xx7vz deletion completed in 45.141268054s
• [SLOW TEST:69.496 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:52:01.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
May 11 17:52:01.953: INFO: Waiting up to 5m0s for pod "client-containers-1e41a82f-93b0-11ea-b832-0242ac110018" in namespace "e2e-tests-containers-l77kr" to be "success or failure"
May 11 17:52:02.012: INFO: Pod "client-containers-1e41a82f-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 58.40947ms
May 11 17:52:04.016: INFO: Pod "client-containers-1e41a82f-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062507502s
May 11 17:52:06.237: INFO: Pod "client-containers-1e41a82f-93b0-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.283050188s
STEP: Saw pod success
May 11 17:52:06.237: INFO: Pod "client-containers-1e41a82f-93b0-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 17:52:06.239: INFO: Trying to get logs from node hunter-worker2 pod client-containers-1e41a82f-93b0-11ea-b832-0242ac110018 container test-container: 
STEP: delete the pod
May 11 17:52:06.326: INFO: Waiting for pod client-containers-1e41a82f-93b0-11ea-b832-0242ac110018 to disappear
May 11 17:52:06.336: INFO: Pod client-containers-1e41a82f-93b0-11ea-b832-0242ac110018 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:52:06.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-l77kr" for this suite.
May 11 17:52:12.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:52:12.386: INFO: namespace: e2e-tests-containers-l77kr, resource: bindings, ignored listing per whitelist
May 11 17:52:12.412: INFO: namespace e2e-tests-containers-l77kr deletion completed in 6.0738234s
• [SLOW TEST:10.663 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 17:52:12.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
May 11 17:52:12.540: INFO: Waiting up to 5m0s for pod "pod-2492d9ea-93b0-11ea-b832-0242ac110018" in namespace "e2e-tests-emptydir-52p92" to be "success or failure"
May 11 17:52:12.558: INFO: Pod "pod-2492d9ea-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.941852ms
May 11 17:52:14.563: INFO: Pod "pod-2492d9ea-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022359148s
May 11 17:52:16.566: INFO: Pod "pod-2492d9ea-93b0-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026042754s
STEP: Saw pod success
May 11 17:52:16.566: INFO: Pod "pod-2492d9ea-93b0-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 17:52:16.569: INFO: Trying to get logs from node hunter-worker pod pod-2492d9ea-93b0-11ea-b832-0242ac110018 container test-container: 
STEP: delete the pod
May 11 17:52:16.684: INFO: Waiting for pod pod-2492d9ea-93b0-11ea-b832-0242ac110018 to disappear
May 11 17:52:16.688: INFO: Pod pod-2492d9ea-93b0-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 17:52:16.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-52p92" for this suite.
May 11 17:52:24.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 17:52:24.782: INFO: namespace: e2e-tests-emptydir-52p92, resource: bindings, ignored listing per whitelist
May 11 17:52:26.272: INFO: namespace e2e-tests-emptydir-52p92 deletion completed in 9.530171768s
• [SLOW TEST:13.860 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:52:26.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace May 11 17:52:39.140: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:53:12.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-7qmts" for this suite. May 11 17:53:18.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:53:18.855: INFO: namespace: e2e-tests-namespaces-7qmts, resource: bindings, ignored listing per whitelist May 11 17:53:18.890: INFO: namespace e2e-tests-namespaces-7qmts deletion completed in 6.458166325s STEP: Destroying namespace "e2e-tests-nsdeletetest-lf7m6" for this suite. May 11 17:53:18.891: INFO: Namespace e2e-tests-nsdeletetest-lf7m6 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-wl277" for this suite. 
May 11 17:53:25.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:53:25.071: INFO: namespace: e2e-tests-nsdeletetest-wl277, resource: bindings, ignored listing per whitelist May 11 17:53:25.107: INFO: namespace e2e-tests-nsdeletetest-wl277 deletion completed in 6.215744279s • [SLOW TEST:58.835 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:53:25.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 11 17:53:26.758: INFO: Waiting up to 5m0s for pod "downward-api-50c4c401-93b0-11ea-b832-0242ac110018" in namespace "e2e-tests-downward-api-ztc22" to be "success or failure" May 11 17:53:27.373: INFO: Pod "downward-api-50c4c401-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 615.1381ms May 11 17:53:29.606: INFO: Pod "downward-api-50c4c401-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.848260729s May 11 17:53:31.610: INFO: Pod "downward-api-50c4c401-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.852462295s May 11 17:53:33.614: INFO: Pod "downward-api-50c4c401-93b0-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.856150687s STEP: Saw pod success May 11 17:53:33.614: INFO: Pod "downward-api-50c4c401-93b0-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 17:53:33.616: INFO: Trying to get logs from node hunter-worker2 pod downward-api-50c4c401-93b0-11ea-b832-0242ac110018 container dapi-container: STEP: delete the pod May 11 17:53:33.793: INFO: Waiting for pod downward-api-50c4c401-93b0-11ea-b832-0242ac110018 to disappear May 11 17:53:33.839: INFO: Pod downward-api-50c4c401-93b0-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:53:33.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ztc22" for this suite. 
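The Downward API test above verifies that a pod can see its own UID through environment variables. A minimal sketch of the kind of pod spec such a test exercises (the pod name, container name, and image here are illustrative, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # kubelet resolves this to the pod's own UID
```

The test then reads the container logs and checks that the injected value matches the UID the API server assigned to the pod.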
May 11 17:53:39.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:53:39.896: INFO: namespace: e2e-tests-downward-api-ztc22, resource: bindings, ignored listing per whitelist May 11 17:53:39.958: INFO: namespace e2e-tests-downward-api-ztc22 deletion completed in 6.115287597s • [SLOW TEST:14.851 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:53:39.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments May 11 17:53:40.118: INFO: Waiting up to 5m0s for pod "client-containers-58bf2868-93b0-11ea-b832-0242ac110018" in namespace "e2e-tests-containers-7qb28" to be "success or failure" May 11 17:53:40.208: INFO: Pod "client-containers-58bf2868-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 89.497008ms May 11 17:53:42.211: INFO: Pod "client-containers-58bf2868-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09322912s May 11 17:53:44.216: INFO: Pod "client-containers-58bf2868-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097536903s May 11 17:53:46.272: INFO: Pod "client-containers-58bf2868-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154376987s May 11 17:53:48.276: INFO: Pod "client-containers-58bf2868-93b0-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.157912686s STEP: Saw pod success May 11 17:53:48.276: INFO: Pod "client-containers-58bf2868-93b0-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 17:53:48.278: INFO: Trying to get logs from node hunter-worker2 pod client-containers-58bf2868-93b0-11ea-b832-0242ac110018 container test-container: STEP: delete the pod May 11 17:53:48.823: INFO: Waiting for pod client-containers-58bf2868-93b0-11ea-b832-0242ac110018 to disappear May 11 17:53:48.898: INFO: Pod client-containers-58bf2868-93b0-11ea-b832-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:53:48.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-7qb28" for this suite. 
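The two Docker Containers tests in this run exercise how Kubernetes `command` and `args` map onto an image's ENTRYPOINT and CMD. A hedged sketch of the relevant part of such a pod spec (image and argument values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # command overrides the image's ENTRYPOINT; args overrides its CMD.
    # Supplying only args (as the "docker cmd" test does) keeps the image's
    # entrypoint and replaces the default arguments it would receive.
    args: ["override", "arguments"]
```

Setting `command` alone corresponds to the "docker entrypoint" variant of the test seen earlier in this run.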
May 11 17:53:59.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:53:59.178: INFO: namespace: e2e-tests-containers-7qb28, resource: bindings, ignored listing per whitelist May 11 17:53:59.208: INFO: namespace e2e-tests-containers-7qb28 deletion completed in 10.307218429s • [SLOW TEST:19.250 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:53:59.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 11 17:54:00.346: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64aa8e6a-93b0-11ea-b832-0242ac110018" in namespace "e2e-tests-downward-api-6hpxx" to be "success or failure" May 11 17:54:00.438: INFO: Pod "downwardapi-volume-64aa8e6a-93b0-11ea-b832-0242ac110018": Phase="Pending", 
Reason="", readiness=false. Elapsed: 92.62418ms May 11 17:54:02.840: INFO: Pod "downwardapi-volume-64aa8e6a-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.494622118s May 11 17:54:05.481: INFO: Pod "downwardapi-volume-64aa8e6a-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.135607858s May 11 17:54:07.484: INFO: Pod "downwardapi-volume-64aa8e6a-93b0-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.138542966s STEP: Saw pod success May 11 17:54:07.484: INFO: Pod "downwardapi-volume-64aa8e6a-93b0-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 17:54:07.486: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-64aa8e6a-93b0-11ea-b832-0242ac110018 container client-container: STEP: delete the pod May 11 17:54:08.228: INFO: Waiting for pod downwardapi-volume-64aa8e6a-93b0-11ea-b832-0242ac110018 to disappear May 11 17:54:08.373: INFO: Pod downwardapi-volume-64aa8e6a-93b0-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:54:08.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6hpxx" for this suite. 
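The DefaultMode test above checks the file permissions applied to projected downward API files. A minimal sketch of a pod spec using a `downwardAPI` volume with `defaultMode` (names, mount path, and mode are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400             # applied to every file without a per-item mode
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The test asserts that the files under the mount path carry the requested mode bits.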
May 11 17:54:16.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:54:16.622: INFO: namespace: e2e-tests-downward-api-6hpxx, resource: bindings, ignored listing per whitelist May 11 17:54:16.805: INFO: namespace e2e-tests-downward-api-6hpxx deletion completed in 8.42836229s • [SLOW TEST:17.596 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:54:16.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-7hgsc [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 11 17:54:18.115: 
INFO: Found 0 stateful pods, waiting for 3 May 11 17:54:28.674: INFO: Found 2 stateful pods, waiting for 3 May 11 17:54:38.119: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 17:54:38.119: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 17:54:38.119: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 11 17:54:48.119: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 17:54:48.119: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 17:54:48.119: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 11 17:54:48.146: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 11 17:54:58.355: INFO: Updating stateful set ss2 May 11 17:54:58.575: INFO: Waiting for Pod e2e-tests-statefulset-7hgsc/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 17:55:08.582: INFO: Waiting for Pod e2e-tests-statefulset-7hgsc/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 11 17:55:21.189: INFO: Found 2 stateful pods, waiting for 3 May 11 17:55:31.339: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 17:55:31.339: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 17:55:31.339: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 11 17:55:41.192: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently 
Running - Ready=true May 11 17:55:41.192: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 17:55:41.192: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 11 17:55:41.216: INFO: Updating stateful set ss2 May 11 17:55:41.411: INFO: Waiting for Pod e2e-tests-statefulset-7hgsc/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 17:55:52.138: INFO: Waiting for Pod e2e-tests-statefulset-7hgsc/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 17:56:02.672: INFO: Updating stateful set ss2 May 11 17:56:03.040: INFO: Waiting for StatefulSet e2e-tests-statefulset-7hgsc/ss2 to complete update May 11 17:56:03.040: INFO: Waiting for Pod e2e-tests-statefulset-7hgsc/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 17:56:13.045: INFO: Waiting for StatefulSet e2e-tests-statefulset-7hgsc/ss2 to complete update May 11 17:56:13.045: INFO: Waiting for Pod e2e-tests-statefulset-7hgsc/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 17:56:23.692: INFO: Waiting for StatefulSet e2e-tests-statefulset-7hgsc/ss2 to complete update May 11 17:56:37.477: INFO: Waiting for StatefulSet e2e-tests-statefulset-7hgsc/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 11 17:56:43.164: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7hgsc May 11 17:56:43.166: INFO: Scaling statefulset ss2 to 0 May 11 17:57:13.482: INFO: Waiting for statefulset status.replicas updated to 0 May 11 17:57:13.485: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:57:13.703: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "e2e-tests-statefulset-7hgsc" for this suite. May 11 17:57:27.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:57:27.800: INFO: namespace: e2e-tests-statefulset-7hgsc, resource: bindings, ignored listing per whitelist May 11 17:57:27.843: INFO: namespace e2e-tests-statefulset-7hgsc deletion completed in 14.136410527s • [SLOW TEST:191.038 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:57:27.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n 
"$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-jw5bx.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-jw5bx.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-jw5bx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-jw5bx.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-jw5bx.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-jw5bx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 17:57:43.962: INFO: DNS probes using e2e-tests-dns-jw5bx/dns-test-e13dc697-93b0-11ea-b832-0242ac110018 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:57:44.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-jw5bx" for this suite. May 11 17:57:52.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:57:52.431: INFO: namespace: e2e-tests-dns-jw5bx, resource: bindings, ignored listing per whitelist May 11 17:57:52.462: INFO: namespace e2e-tests-dns-jw5bx deletion completed in 8.295237961s • [SLOW TEST:24.618 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 
STEP: Creating a kubernetes client May 11 17:57:52.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f0722f37-93b0-11ea-b832-0242ac110018 STEP: Creating a pod to test consume secrets May 11 17:57:56.474: INFO: Waiting up to 5m0s for pod "pod-secrets-f16d4909-93b0-11ea-b832-0242ac110018" in namespace "e2e-tests-secrets-h22w5" to be "success or failure" May 11 17:57:56.539: INFO: Pod "pod-secrets-f16d4909-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 65.302816ms May 11 17:57:58.550: INFO: Pod "pod-secrets-f16d4909-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076420182s May 11 17:58:01.143: INFO: Pod "pod-secrets-f16d4909-93b0-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.669735524s May 11 17:58:03.293: INFO: Pod "pod-secrets-f16d4909-93b0-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.819511025s STEP: Saw pod success May 11 17:58:03.293: INFO: Pod "pod-secrets-f16d4909-93b0-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 17:58:03.297: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-f16d4909-93b0-11ea-b832-0242ac110018 container secret-volume-test: STEP: delete the pod May 11 17:58:04.086: INFO: Waiting for pod pod-secrets-f16d4909-93b0-11ea-b832-0242ac110018 to disappear May 11 17:58:04.382: INFO: Pod pod-secrets-f16d4909-93b0-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:58:04.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-h22w5" for this suite. May 11 17:58:14.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:58:14.530: INFO: namespace: e2e-tests-secrets-h22w5, resource: bindings, ignored listing per whitelist May 11 17:58:14.535: INFO: namespace e2e-tests-secrets-h22w5 deletion completed in 10.148516478s STEP: Destroying namespace "e2e-tests-secret-namespace-895xc" for this suite. 
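The Secrets test above confirms that secret volume resolution is namespace-scoped: the pod mounts the secret from its own namespace even when another namespace holds a secret of the same name. A hedged sketch of the pattern (names, namespaces, and data are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test               # the same name may exist in other namespaces
  namespace: e2e-secrets          # illustrative namespace
data:
  data-1: dmFsdWUtMQ==            # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example       # illustrative name
  namespace: e2e-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test     # resolved only within the pod's namespace
```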
May 11 17:58:20.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:58:20.713: INFO: namespace: e2e-tests-secret-namespace-895xc, resource: bindings, ignored listing per whitelist May 11 17:58:20.745: INFO: namespace e2e-tests-secret-namespace-895xc deletion completed in 6.209923738s • [SLOW TEST:28.283 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:58:20.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-00279c87-93b1-11ea-b832-0242ac110018 STEP: Creating a pod to test consume configMaps May 11 17:58:21.089: INFO: Waiting up to 5m0s for pod "pod-configmaps-0038887c-93b1-11ea-b832-0242ac110018" in namespace "e2e-tests-configmap-v8qxw" to be "success or failure" May 11 17:58:21.115: INFO: Pod "pod-configmaps-0038887c-93b1-11ea-b832-0242ac110018": Phase="Pending", 
Reason="", readiness=false. Elapsed: 25.500713ms May 11 17:58:23.282: INFO: Pod "pod-configmaps-0038887c-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192284993s May 11 17:58:25.286: INFO: Pod "pod-configmaps-0038887c-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196465248s May 11 17:58:27.310: INFO: Pod "pod-configmaps-0038887c-93b1-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.220939172s May 11 17:58:29.314: INFO: Pod "pod-configmaps-0038887c-93b1-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.225005242s STEP: Saw pod success May 11 17:58:29.314: INFO: Pod "pod-configmaps-0038887c-93b1-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 17:58:29.317: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-0038887c-93b1-11ea-b832-0242ac110018 container configmap-volume-test: STEP: delete the pod May 11 17:58:29.459: INFO: Waiting for pod pod-configmaps-0038887c-93b1-11ea-b832-0242ac110018 to disappear May 11 17:58:29.670: INFO: Pod pod-configmaps-0038887c-93b1-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:58:29.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-v8qxw" for this suite. 
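The "volume with mappings" wording in the ConfigMap test above refers to the `items` field, which projects a chosen key to a chosen file path inside the mount. A minimal sketch (names, key, and paths are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map   # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                        # the "mapping": key -> relative file path
      - key: data-1
        path: path/to/data-2
```

Without `items`, every key in the ConfigMap would be projected as a file named after the key itself.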
May 11 17:58:35.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:58:35.846: INFO: namespace: e2e-tests-configmap-v8qxw, resource: bindings, ignored listing per whitelist May 11 17:58:35.887: INFO: namespace e2e-tests-configmap-v8qxw deletion completed in 6.213153623s • [SLOW TEST:15.142 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:58:35.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 11 17:58:36.049: INFO: Creating deployment "test-recreate-deployment" May 11 17:58:36.054: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 11 17:58:36.089: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 11 17:58:38.270: INFO: Waiting deployment "test-recreate-deployment" to complete May 11 17:58:38.272: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816716, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816716, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:58:40.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816716, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816716, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:58:42.437: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816716, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816716, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724816716, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 17:58:44.275: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 11 17:58:44.279: INFO: Updating deployment test-recreate-deployment May 11 17:58:44.279: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 11 17:58:47.156: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-dzbm2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dzbm2/deployments/test-recreate-deployment,UID:092a96da-93b1-11ea-99e8-0242ac110002,ResourceVersion:9992551,Generation:2,CreationTimestamp:2020-05-11 17:58:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-11 17:58:46 +0000 UTC 2020-05-11 17:58:46 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-11 17:58:46 +0000 UTC 2020-05-11 17:58:36 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 11 17:58:47.159: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-dzbm2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dzbm2/replicasets/test-recreate-deployment-589c4bfd,UID:0e7af124-93b1-11ea-99e8-0242ac110002,ResourceVersion:9992546,Generation:1,CreationTimestamp:2020-05-11 17:58:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 092a96da-93b1-11ea-99e8-0242ac110002 0xc00147598f 0xc0014759a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 17:58:47.159: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 11 17:58:47.159: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-dzbm2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dzbm2/replicasets/test-recreate-deployment-5bf7f65dc,UID:09308124-93b1-11ea-99e8-0242ac110002,ResourceVersion:9992535,Generation:2,CreationTimestamp:2020-05-11 17:58:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 092a96da-93b1-11ea-99e8-0242ac110002 0xc001475ae0 0xc001475ae1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 17:58:47.473: INFO: Pod "test-recreate-deployment-589c4bfd-87xhc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-87xhc,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-dzbm2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dzbm2/pods/test-recreate-deployment-589c4bfd-87xhc,UID:0eb379ea-93b1-11ea-99e8-0242ac110002,ResourceVersion:9992549,Generation:0,CreationTimestamp:2020-05-11 17:58:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 0e7af124-93b1-11ea-99e8-0242ac110002 0xc001be088f 0xc001be0910}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wcptn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wcptn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wcptn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001be0a30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001be0a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:58:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:58:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:58:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 17:58:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-11 17:58:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:58:47.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-dzbm2" for this suite. 
May 11 17:58:54.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:58:54.080: INFO: namespace: e2e-tests-deployment-dzbm2, resource: bindings, ignored listing per whitelist May 11 17:58:54.086: INFO: namespace e2e-tests-deployment-dzbm2 deletion completed in 6.609598775s • [SLOW TEST:18.198 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:58:54.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-hzz5h STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-hzz5h to expose endpoints map[] May 11 17:58:54.400: INFO: Get endpoints failed (4.061659ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 11 17:58:55.403: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-hzz5h exposes endpoints map[] 
(1.007672392s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-hzz5h STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-hzz5h to expose endpoints map[pod1:[80]] May 11 17:58:59.834: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.424113309s elapsed, will retry) May 11 17:59:00.839: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-hzz5h exposes endpoints map[pod1:[80]] (5.429383852s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-hzz5h STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-hzz5h to expose endpoints map[pod1:[80] pod2:[80]] May 11 17:59:05.081: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-hzz5h exposes endpoints map[pod1:[80] pod2:[80]] (4.239172383s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-hzz5h STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-hzz5h to expose endpoints map[pod2:[80]] May 11 17:59:06.591: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-hzz5h exposes endpoints map[pod2:[80]] (1.504940958s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-hzz5h STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-hzz5h to expose endpoints map[] May 11 17:59:08.024: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-hzz5h exposes endpoints map[] (1.428533519s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:59:08.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-hzz5h" for this suite. 
May 11 17:59:31.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:59:31.223: INFO: namespace: e2e-tests-services-hzz5h, resource: bindings, ignored listing per whitelist May 11 17:59:31.431: INFO: namespace e2e-tests-services-hzz5h deletion completed in 22.633036052s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:37.345 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:59:31.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-2a5d232a-93b1-11ea-b832-0242ac110018 STEP: Creating secret with name secret-projected-all-test-volume-2a5d22de-93b1-11ea-b832-0242ac110018 STEP: Creating a pod to test Check all projections for projected volume plugin May 11 17:59:31.967: INFO: Waiting up to 5m0s for pod 
"projected-volume-2a5d22a2-93b1-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-grsqm" to be "success or failure" May 11 17:59:32.151: INFO: Pod "projected-volume-2a5d22a2-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 183.865837ms May 11 17:59:34.155: INFO: Pod "projected-volume-2a5d22a2-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188002124s May 11 17:59:36.545: INFO: Pod "projected-volume-2a5d22a2-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.57860364s May 11 17:59:38.549: INFO: Pod "projected-volume-2a5d22a2-93b1-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.582398283s STEP: Saw pod success May 11 17:59:38.549: INFO: Pod "projected-volume-2a5d22a2-93b1-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 17:59:38.552: INFO: Trying to get logs from node hunter-worker pod projected-volume-2a5d22a2-93b1-11ea-b832-0242ac110018 container projected-all-volume-test: STEP: delete the pod May 11 17:59:38.739: INFO: Waiting for pod projected-volume-2a5d22a2-93b1-11ea-b832-0242ac110018 to disappear May 11 17:59:39.198: INFO: Pod projected-volume-2a5d22a2-93b1-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 17:59:39.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-grsqm" for this suite. 
May 11 17:59:45.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 17:59:45.466: INFO: namespace: e2e-tests-projected-grsqm, resource: bindings, ignored listing per whitelist May 11 17:59:45.528: INFO: namespace e2e-tests-projected-grsqm deletion completed in 6.326638081s • [SLOW TEST:14.097 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 17:59:45.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-cl2t STEP: Creating a pod to test atomic-volume-subpath May 11 17:59:45.873: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cl2t" in namespace "e2e-tests-subpath-vkxpq" to be "success or failure" May 11 17:59:45.977: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Pending", Reason="", 
readiness=false. Elapsed: 103.566835ms May 11 17:59:47.981: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107419335s May 11 17:59:50.067: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193188116s May 11 17:59:52.210: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.336485278s May 11 17:59:54.213: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.339830624s May 11 17:59:56.511: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Pending", Reason="", readiness=false. Elapsed: 10.637108815s May 11 17:59:58.528: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Pending", Reason="", readiness=false. Elapsed: 12.654213151s May 11 18:00:00.636: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Running", Reason="", readiness=false. Elapsed: 14.762341144s May 11 18:00:02.639: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Running", Reason="", readiness=false. Elapsed: 16.765860916s May 11 18:00:04.642: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Running", Reason="", readiness=false. Elapsed: 18.768483747s May 11 18:00:06.646: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Running", Reason="", readiness=false. Elapsed: 20.772954798s May 11 18:00:08.651: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Running", Reason="", readiness=false. Elapsed: 22.777850258s May 11 18:00:10.684: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Running", Reason="", readiness=false. Elapsed: 24.81032689s May 11 18:00:12.689: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Running", Reason="", readiness=false. Elapsed: 26.815395203s May 11 18:00:14.692: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.818526617s May 11 18:00:16.972: INFO: Pod "pod-subpath-test-configmap-cl2t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.098434589s STEP: Saw pod success May 11 18:00:16.972: INFO: Pod "pod-subpath-test-configmap-cl2t" satisfied condition "success or failure" May 11 18:00:17.022: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-cl2t container test-container-subpath-configmap-cl2t: STEP: delete the pod May 11 18:00:17.451: INFO: Waiting for pod pod-subpath-test-configmap-cl2t to disappear May 11 18:00:17.868: INFO: Pod pod-subpath-test-configmap-cl2t no longer exists STEP: Deleting pod pod-subpath-test-configmap-cl2t May 11 18:00:17.868: INFO: Deleting pod "pod-subpath-test-configmap-cl2t" in namespace "e2e-tests-subpath-vkxpq" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:00:17.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-vkxpq" for this suite. 
May 11 18:00:24.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:00:24.576: INFO: namespace: e2e-tests-subpath-vkxpq, resource: bindings, ignored listing per whitelist May 11 18:00:24.580: INFO: namespace e2e-tests-subpath-vkxpq deletion completed in 6.705405172s • [SLOW TEST:39.052 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:00:24.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 11 18:00:24.990: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 11 18:00:30.109: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 18:00:32.114: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 11 18:00:32.455: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-5zqfj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5zqfj/deployments/test-cleanup-deployment,UID:4e59419c-93b1-11ea-99e8-0242ac110002,ResourceVersion:9992916,Generation:1,CreationTimestamp:2020-05-11 18:00:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 11 18:00:32.606: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
May 11 18:00:32.606: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 11 18:00:32.606: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-5zqfj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5zqfj/replicasets/test-cleanup-controller,UID:4a137292-93b1-11ea-99e8-0242ac110002,ResourceVersion:9992918,Generation:1,CreationTimestamp:2020-05-11 18:00:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 4e59419c-93b1-11ea-99e8-0242ac110002 0xc001f7fb4f 0xc001f7fb60}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 11 18:00:32.630: INFO: Pod "test-cleanup-controller-8qdtw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-8qdtw,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-5zqfj,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5zqfj/pods/test-cleanup-controller-8qdtw,UID:4a1a3a37-93b1-11ea-99e8-0242ac110002,ResourceVersion:9992913,Generation:0,CreationTimestamp:2020-05-11 18:00:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 4a137292-93b1-11ea-99e8-0242ac110002 0xc002096317 0xc002096318}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zft7b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zft7b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zft7b true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002096390} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020963b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:00:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:00:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:00:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:00:24 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.82,StartTime:2020-05-11 18:00:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 18:00:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e590f1be399e523cb56e5ce8bb065a1d75d655587636768d943f674c95ed51f8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:00:32.630: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-5zqfj" for this suite. May 11 18:00:40.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:00:40.989: INFO: namespace: e2e-tests-deployment-5zqfj, resource: bindings, ignored listing per whitelist May 11 18:00:41.024: INFO: namespace e2e-tests-deployment-5zqfj deletion completed in 8.366521763s • [SLOW TEST:16.444 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:00:41.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-53cc1d23-93b1-11ea-b832-0242ac110018 STEP: Creating a pod to test consume configMaps May 11 18:00:41.358: INFO: Waiting up to 5m0s for pod "pod-configmaps-53cf0b9e-93b1-11ea-b832-0242ac110018" in namespace "e2e-tests-configmap-qn8qj" to be "success or failure" May 11 18:00:41.396: INFO: Pod "pod-configmaps-53cf0b9e-93b1-11ea-b832-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 38.511626ms May 11 18:00:43.400: INFO: Pod "pod-configmaps-53cf0b9e-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042378604s May 11 18:00:45.404: INFO: Pod "pod-configmaps-53cf0b9e-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046627456s May 11 18:00:47.408: INFO: Pod "pod-configmaps-53cf0b9e-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050079726s May 11 18:00:49.648: INFO: Pod "pod-configmaps-53cf0b9e-93b1-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.28998576s STEP: Saw pod success May 11 18:00:49.648: INFO: Pod "pod-configmaps-53cf0b9e-93b1-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:00:49.650: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-53cf0b9e-93b1-11ea-b832-0242ac110018 container configmap-volume-test: STEP: delete the pod May 11 18:00:50.249: INFO: Waiting for pod pod-configmaps-53cf0b9e-93b1-11ea-b832-0242ac110018 to disappear May 11 18:00:50.439: INFO: Pod pod-configmaps-53cf0b9e-93b1-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:00:50.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qn8qj" for this suite. 
May 11 18:00:58.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:00:58.865: INFO: namespace: e2e-tests-configmap-qn8qj, resource: bindings, ignored listing per whitelist May 11 18:00:59.206: INFO: namespace e2e-tests-configmap-qn8qj deletion completed in 8.763715586s • [SLOW TEST:18.182 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:00:59.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 18:00:59.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-qnw5h' May 11 
18:01:06.941: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 18:01:06.941: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 11 18:01:07.021: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 11 18:01:07.070: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 11 18:01:07.185: INFO: scanned /root for discovery docs: May 11 18:01:07.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-qnw5h' May 11 18:01:26.849: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 11 18:01:26.849: INFO: stdout: "Created e2e-test-nginx-rc-abcb66ae603946385c3ce7bcf093fa31\nScaling up e2e-test-nginx-rc-abcb66ae603946385c3ce7bcf093fa31 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-abcb66ae603946385c3ce7bcf093fa31 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-abcb66ae603946385c3ce7bcf093fa31 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 11 18:01:26.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-qnw5h' May 11 18:01:27.117: INFO: stderr: "" May 11 18:01:27.117: INFO: stdout: "e2e-test-nginx-rc-abcb66ae603946385c3ce7bcf093fa31-g4r4c " May 11 18:01:27.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-abcb66ae603946385c3ce7bcf093fa31-g4r4c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qnw5h' May 11 18:01:27.319: INFO: stderr: "" May 11 18:01:27.319: INFO: stdout: "true" May 11 18:01:27.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-abcb66ae603946385c3ce7bcf093fa31-g4r4c -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qnw5h' May 11 18:01:27.412: INFO: stderr: "" May 11 18:01:27.412: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 11 18:01:27.412: INFO: e2e-test-nginx-rc-abcb66ae603946385c3ce7bcf093fa31-g4r4c is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 May 11 18:01:27.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-qnw5h' May 11 18:01:27.714: INFO: stderr: "" May 11 18:01:27.714: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:01:27.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qnw5h" for this suite. 
May 11 18:01:51.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:01:52.021: INFO: namespace: e2e-tests-kubectl-qnw5h, resource: bindings, ignored listing per whitelist May 11 18:01:52.023: INFO: namespace e2e-tests-kubectl-qnw5h deletion completed in 24.140486999s • [SLOW TEST:52.817 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:01:52.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 11 18:02:02.394: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:02:02.476: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:02:04.476: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:02:04.479: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:02:06.476: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:02:06.481: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:02:08.476: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:02:08.480: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:02:10.476: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:02:10.479: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:02:12.476: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:02:12.480: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:02:14.476: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:02:14.626: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:02:16.476: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:02:16.480: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:02:18.476: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:02:18.479: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:02:20.476: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:02:20.481: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:02:22.476: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:02:22.480: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:02:24.476: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear May 11 18:02:24.480: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:02:26.476: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:02:26.480: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:02:28.476: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:02:30.649: INFO: Pod pod-with-prestop-exec-hook still exists May 11 18:02:32.476: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 18:02:32.481: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:02:32.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-f87sd" for this suite. May 11 18:02:56.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:02:56.526: INFO: namespace: e2e-tests-container-lifecycle-hook-f87sd, resource: bindings, ignored listing per whitelist May 11 18:02:56.578: INFO: namespace e2e-tests-container-lifecycle-hook-f87sd deletion completed in 24.085472609s • [SLOW TEST:64.555 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:02:56.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 11 18:02:56.872: INFO: Waiting up to 5m0s for pod "downward-api-a4a015b3-93b1-11ea-b832-0242ac110018" in namespace "e2e-tests-downward-api-j9b8c" to be "success or failure" May 11 18:02:56.882: INFO: Pod "downward-api-a4a015b3-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.666862ms May 11 18:02:58.886: INFO: Pod "downward-api-a4a015b3-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013803442s May 11 18:03:00.893: INFO: Pod "downward-api-a4a015b3-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020458113s May 11 18:03:02.895: INFO: Pod "downward-api-a4a015b3-93b1-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.023191761s STEP: Saw pod success May 11 18:03:02.895: INFO: Pod "downward-api-a4a015b3-93b1-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:03:02.898: INFO: Trying to get logs from node hunter-worker2 pod downward-api-a4a015b3-93b1-11ea-b832-0242ac110018 container dapi-container: STEP: delete the pod May 11 18:03:02.981: INFO: Waiting for pod downward-api-a4a015b3-93b1-11ea-b832-0242ac110018 to disappear May 11 18:03:03.060: INFO: Pod downward-api-a4a015b3-93b1-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:03:03.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-j9b8c" for this suite. May 11 18:03:11.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:03:11.879: INFO: namespace: e2e-tests-downward-api-j9b8c, resource: bindings, ignored listing per whitelist May 11 18:03:11.915: INFO: namespace e2e-tests-downward-api-j9b8c deletion completed in 8.850553251s • [SLOW TEST:15.336 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:03:11.915: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults May 11 18:03:12.219: INFO: Waiting up to 5m0s for pod "client-containers-adb74b62-93b1-11ea-b832-0242ac110018" in namespace "e2e-tests-containers-hxqcs" to be "success or failure" May 11 18:03:12.222: INFO: Pod "client-containers-adb74b62-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.471537ms May 11 18:03:14.226: INFO: Pod "client-containers-adb74b62-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007258002s May 11 18:03:16.230: INFO: Pod "client-containers-adb74b62-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011035305s May 11 18:03:18.866: INFO: Pod "client-containers-adb74b62-93b1-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.64741542s STEP: Saw pod success May 11 18:03:18.866: INFO: Pod "client-containers-adb74b62-93b1-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:03:18.869: INFO: Trying to get logs from node hunter-worker pod client-containers-adb74b62-93b1-11ea-b832-0242ac110018 container test-container: STEP: delete the pod May 11 18:03:19.159: INFO: Waiting for pod client-containers-adb74b62-93b1-11ea-b832-0242ac110018 to disappear May 11 18:03:19.206: INFO: Pod client-containers-adb74b62-93b1-11ea-b832-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:03:19.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-hxqcs" for this suite. May 11 18:03:25.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:03:25.334: INFO: namespace: e2e-tests-containers-hxqcs, resource: bindings, ignored listing per whitelist May 11 18:03:25.340: INFO: namespace e2e-tests-containers-hxqcs deletion completed in 6.087697743s • [SLOW TEST:13.425 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client May 11 18:03:25.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc May 11 18:03:25.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-597wp' May 11 18:03:25.803: INFO: stderr: "" May 11 18:03:25.803: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. May 11 18:03:26.807: INFO: Selector matched 1 pods for map[app:redis] May 11 18:03:26.807: INFO: Found 0 / 1 May 11 18:03:27.807: INFO: Selector matched 1 pods for map[app:redis] May 11 18:03:27.807: INFO: Found 0 / 1 May 11 18:03:29.321: INFO: Selector matched 1 pods for map[app:redis] May 11 18:03:29.321: INFO: Found 0 / 1 May 11 18:03:29.807: INFO: Selector matched 1 pods for map[app:redis] May 11 18:03:29.807: INFO: Found 0 / 1 May 11 18:03:30.807: INFO: Selector matched 1 pods for map[app:redis] May 11 18:03:30.807: INFO: Found 0 / 1 May 11 18:03:31.808: INFO: Selector matched 1 pods for map[app:redis] May 11 18:03:31.808: INFO: Found 1 / 1 May 11 18:03:31.808: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 11 18:03:31.812: INFO: Selector matched 1 pods for map[app:redis] May 11 18:03:31.812: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
STEP: checking for a matching strings May 11 18:03:31.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d7vzd redis-master --namespace=e2e-tests-kubectl-597wp' May 11 18:03:32.616: INFO: stderr: "" May 11 18:03:32.616: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 May 18:03:30.090 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 May 18:03:30.090 # Server started, Redis version 3.2.12\n1:M 11 May 18:03:30.090 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 11 May 18:03:30.090 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 11 18:03:32.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-d7vzd redis-master --namespace=e2e-tests-kubectl-597wp --tail=1' May 11 18:03:32.843: INFO: stderr: "" May 11 18:03:32.843: INFO: stdout: "1:M 11 May 18:03:30.090 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 11 18:03:32.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-d7vzd redis-master --namespace=e2e-tests-kubectl-597wp --limit-bytes=1' May 11 18:03:32.943: INFO: stderr: "" May 11 18:03:32.943: INFO: stdout: " " STEP: exposing timestamps May 11 18:03:32.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-d7vzd redis-master --namespace=e2e-tests-kubectl-597wp --tail=1 --timestamps' May 11 18:03:33.042: INFO: stderr: "" May 11 18:03:33.042: INFO: stdout: "2020-05-11T18:03:30.091249925Z 1:M 11 May 18:03:30.090 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 11 18:03:35.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-d7vzd redis-master --namespace=e2e-tests-kubectl-597wp --since=1s' May 11 18:03:35.650: INFO: stderr: "" May 11 18:03:35.650: INFO: stdout: "" May 11 18:03:35.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-d7vzd redis-master --namespace=e2e-tests-kubectl-597wp --since=24h' May 11 18:03:35.737: INFO: stderr: "" May 11 18:03:35.737: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 May 18:03:30.090 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 May 18:03:30.090 # Server started, Redis version 3.2.12\n1:M 11 May 18:03:30.090 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 May 18:03:30.090 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources May 11 18:03:35.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-597wp' May 11 18:03:35.955: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 11 18:03:35.955: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 11 18:03:35.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-597wp' May 11 18:03:36.056: INFO: stderr: "No resources found.\n" May 11 18:03:36.056: INFO: stdout: "" May 11 18:03:36.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-597wp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 18:03:36.152: INFO: stderr: "" May 11 18:03:36.153: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:03:36.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-597wp" for this suite. 
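The kubectl-logs test above exercises the log-filtering flags `--tail`, `--limit-bytes`, `--timestamps`, and `--since`. Their line- and byte-limiting semantics can be mirrored locally with plain `tail` and `head`; this is an illustrative analogy only, not what kubectl runs internally:

```shell
# Local analogy for the filtering the test performs against the redis pod.
log=$(mktemp)
printf 'Server started\nready to accept connections\n' > "$log"

tail -n 1 "$log"   # mirrors: kubectl logs POD --tail=1  (last line only)
head -c 6 "$log"   # mirrors: kubectl logs POD --limit-bytes=6 (first 6 bytes)

rm -f "$log"
```

As seen in the log, `--limit-bytes=1` on the redis pod returned a single byte of the ASCII banner, and `--tail=1` returned only the final "ready to accept connections" line.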
May 11 18:03:58.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:03:58.226: INFO: namespace: e2e-tests-kubectl-597wp, resource: bindings, ignored listing per whitelist May 11 18:03:58.265: INFO: namespace e2e-tests-kubectl-597wp deletion completed in 22.10994249s • [SLOW TEST:32.925 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:03:58.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-t9w8 STEP: Creating a pod to test atomic-volume-subpath May 11 18:03:58.579: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-t9w8" in namespace "e2e-tests-subpath-dcwlg" to be "success 
or failure" May 11 18:03:58.621: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Pending", Reason="", readiness=false. Elapsed: 42.434822ms May 11 18:04:00.625: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045916504s May 11 18:04:02.681: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102298813s May 11 18:04:05.436: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.857435663s May 11 18:04:07.440: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.861119447s May 11 18:04:09.442: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.863823264s May 11 18:04:11.760: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Running", Reason="", readiness=true. Elapsed: 13.180947733s May 11 18:04:13.763: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Running", Reason="", readiness=false. Elapsed: 15.184146074s May 11 18:04:15.766: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Running", Reason="", readiness=false. Elapsed: 17.187409549s May 11 18:04:17.769: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Running", Reason="", readiness=false. Elapsed: 19.190480076s May 11 18:04:19.773: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Running", Reason="", readiness=false. Elapsed: 21.194317814s May 11 18:04:21.777: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Running", Reason="", readiness=false. Elapsed: 23.198389312s May 11 18:04:23.782: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Running", Reason="", readiness=false. Elapsed: 25.202854476s May 11 18:04:25.787: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Running", Reason="", readiness=false. Elapsed: 27.208082923s May 11 18:04:27.790: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Running", Reason="", readiness=false. 
Elapsed: 29.211438419s May 11 18:04:29.795: INFO: Pod "pod-subpath-test-configmap-t9w8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.215940707s STEP: Saw pod success May 11 18:04:29.795: INFO: Pod "pod-subpath-test-configmap-t9w8" satisfied condition "success or failure" May 11 18:04:29.798: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-t9w8 container test-container-subpath-configmap-t9w8: STEP: delete the pod May 11 18:04:30.173: INFO: Waiting for pod pod-subpath-test-configmap-t9w8 to disappear May 11 18:04:30.183: INFO: Pod pod-subpath-test-configmap-t9w8 no longer exists STEP: Deleting pod pod-subpath-test-configmap-t9w8 May 11 18:04:30.183: INFO: Deleting pod "pod-subpath-test-configmap-t9w8" in namespace "e2e-tests-subpath-dcwlg" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:04:30.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-dcwlg" for this suite. 
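The repeated `Phase="Pending"` / `Phase="Running"` / `Phase="Succeeded"` lines above come from the framework polling the pod's status until it meets the "success or failure" condition. A minimal sketch of that poll loop, with `get_phase` as a hypothetical stand-in for querying `.status.phase` via kubectl:

```shell
# Hypothetical poll loop; get_phase stands in for something like:
#   kubectl get pod "$pod" -o jsonpath='{.status.phase}'
get_phase() { echo "Succeeded"; }

phase=""
for attempt in 1 2 3 4 5; do
  phase=$(get_phase)
  # Stop as soon as the pod reaches a terminal phase.
  [ "$phase" = "Succeeded" ] || [ "$phase" = "Failed" ] && break
  sleep 2
done
echo "$phase"
```

The real framework polls on an interval with a 5m0s deadline, logging the elapsed time at each check, which is what produces the timestamped phase lines in the output above.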
May 11 18:04:38.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:04:38.427: INFO: namespace: e2e-tests-subpath-dcwlg, resource: bindings, ignored listing per whitelist May 11 18:04:38.455: INFO: namespace e2e-tests-subpath-dcwlg deletion completed in 8.264978777s • [SLOW TEST:40.190 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:04:38.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:04:44.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-wwllz" for this suite. May 11 18:04:52.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:04:53.505: INFO: namespace: e2e-tests-namespaces-wwllz, resource: bindings, ignored listing per whitelist May 11 18:04:53.526: INFO: namespace e2e-tests-namespaces-wwllz deletion completed in 8.748763315s STEP: Destroying namespace "e2e-tests-nsdeletetest-qlmzt" for this suite. May 11 18:04:53.528: INFO: Namespace e2e-tests-nsdeletetest-qlmzt was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-qfnns" for this suite. May 11 18:05:00.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:05:00.582: INFO: namespace: e2e-tests-nsdeletetest-qfnns, resource: bindings, ignored listing per whitelist May 11 18:05:00.619: INFO: namespace e2e-tests-nsdeletetest-qfnns deletion completed in 7.091880041s • [SLOW TEST:22.164 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API 
volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:05:00.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 11 18:05:01.015: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee94108e-93b1-11ea-b832-0242ac110018" in namespace "e2e-tests-downward-api-g9jx8" to be "success or failure" May 11 18:05:01.046: INFO: Pod "downwardapi-volume-ee94108e-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 30.937922ms May 11 18:05:03.154: INFO: Pod "downwardapi-volume-ee94108e-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139426568s May 11 18:05:05.361: INFO: Pod "downwardapi-volume-ee94108e-93b1-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34642271s May 11 18:05:07.365: INFO: Pod "downwardapi-volume-ee94108e-93b1-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.350052154s STEP: Saw pod success May 11 18:05:07.365: INFO: Pod "downwardapi-volume-ee94108e-93b1-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:05:07.367: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ee94108e-93b1-11ea-b832-0242ac110018 container client-container: STEP: delete the pod May 11 18:05:07.443: INFO: Waiting for pod downwardapi-volume-ee94108e-93b1-11ea-b832-0242ac110018 to disappear May 11 18:05:07.526: INFO: Pod downwardapi-volume-ee94108e-93b1-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:05:07.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-g9jx8" for this suite. May 11 18:05:13.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:05:14.338: INFO: namespace: e2e-tests-downward-api-g9jx8, resource: bindings, ignored listing per whitelist May 11 18:05:14.367: INFO: namespace e2e-tests-downward-api-g9jx8 deletion completed in 6.837971871s • [SLOW TEST:13.747 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:05:14.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 11 18:05:14.630: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jm8cp,SelfLink:/api/v1/namespaces/e2e-tests-watch-jm8cp/configmaps/e2e-watch-test-watch-closed,UID:f6b740b2-93b1-11ea-99e8-0242ac110002,ResourceVersion:9993814,Generation:0,CreationTimestamp:2020-05-11 18:05:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 18:05:14.631: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jm8cp,SelfLink:/api/v1/namespaces/e2e-tests-watch-jm8cp/configmaps/e2e-watch-test-watch-closed,UID:f6b740b2-93b1-11ea-99e8-0242ac110002,ResourceVersion:9993815,Generation:0,CreationTimestamp:2020-05-11 18:05:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 11 18:05:14.882: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jm8cp,SelfLink:/api/v1/namespaces/e2e-tests-watch-jm8cp/configmaps/e2e-watch-test-watch-closed,UID:f6b740b2-93b1-11ea-99e8-0242ac110002,ResourceVersion:9993816,Generation:0,CreationTimestamp:2020-05-11 18:05:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 18:05:14.882: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-jm8cp,SelfLink:/api/v1/namespaces/e2e-tests-watch-jm8cp/configmaps/e2e-watch-test-watch-closed,UID:f6b740b2-93b1-11ea-99e8-0242ac110002,ResourceVersion:9993817,Generation:0,CreationTimestamp:2020-05-11 18:05:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:05:14.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-jm8cp" for this suite. May 11 18:05:23.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:05:23.109: INFO: namespace: e2e-tests-watch-jm8cp, resource: bindings, ignored listing per whitelist May 11 18:05:23.157: INFO: namespace e2e-tests-watch-jm8cp deletion completed in 8.179992748s • [SLOW TEST:8.789 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:05:23.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-cnd7h STEP: waiting up to 3m0s for service multi-endpoint-test in namespace 
e2e-tests-services-cnd7h to expose endpoints map[] May 11 18:05:23.726: INFO: Get endpoints failed (102.411734ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 11 18:05:24.963: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cnd7h exposes endpoints map[] (1.339826155s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-cnd7h STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cnd7h to expose endpoints map[pod1:[100]] May 11 18:05:29.573: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.605904886s elapsed, will retry) May 11 18:05:30.631: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cnd7h exposes endpoints map[pod1:[100]] (5.663450018s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-cnd7h STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cnd7h to expose endpoints map[pod1:[100] pod2:[101]] May 11 18:05:35.442: INFO: Unexpected endpoints: found map[fce636fb-93b1-11ea-99e8-0242ac110002:[100]], expected map[pod1:[100] pod2:[101]] (4.808980808s elapsed, will retry) May 11 18:05:36.449: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cnd7h exposes endpoints map[pod2:[101] pod1:[100]] (5.816159978s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-cnd7h STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cnd7h to expose endpoints map[pod2:[101]] May 11 18:05:37.637: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cnd7h exposes endpoints map[pod2:[101]] (1.183485979s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-cnd7h STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-cnd7h to expose endpoints map[] May 11 18:05:38.723: INFO: 
successfully validated that service multi-endpoint-test in namespace e2e-tests-services-cnd7h exposes endpoints map[] (1.082362455s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:05:39.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-cnd7h" for this suite. May 11 18:06:05.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:06:05.752: INFO: namespace: e2e-tests-services-cnd7h, resource: bindings, ignored listing per whitelist May 11 18:06:05.773: INFO: namespace e2e-tests-services-cnd7h deletion completed in 26.355840118s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:42.616 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:06:05.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition May 11 18:06:06.406: INFO: Waiting up to 5m0s for pod "var-expansion-1565aaa2-93b2-11ea-b832-0242ac110018" in namespace "e2e-tests-var-expansion-vjlxc" to be "success or failure" May 11 18:06:06.712: INFO: Pod "var-expansion-1565aaa2-93b2-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 306.148201ms May 11 18:06:08.809: INFO: Pod "var-expansion-1565aaa2-93b2-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.402233643s May 11 18:06:10.812: INFO: Pod "var-expansion-1565aaa2-93b2-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.40607315s May 11 18:06:13.955: INFO: Pod "var-expansion-1565aaa2-93b2-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 7.549093256s May 11 18:06:15.959: INFO: Pod "var-expansion-1565aaa2-93b2-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 9.552353876s May 11 18:06:17.962: INFO: Pod "var-expansion-1565aaa2-93b2-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.555954442s STEP: Saw pod success May 11 18:06:17.962: INFO: Pod "var-expansion-1565aaa2-93b2-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:06:17.964: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-1565aaa2-93b2-11ea-b832-0242ac110018 container dapi-container: STEP: delete the pod May 11 18:06:18.016: INFO: Waiting for pod var-expansion-1565aaa2-93b2-11ea-b832-0242ac110018 to disappear May 11 18:06:18.180: INFO: Pod var-expansion-1565aaa2-93b2-11ea-b832-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:06:18.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-vjlxc" for this suite. May 11 18:06:24.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:06:24.242: INFO: namespace: e2e-tests-var-expansion-vjlxc, resource: bindings, ignored listing per whitelist May 11 18:06:24.299: INFO: namespace e2e-tests-var-expansion-vjlxc deletion completed in 6.115565777s • [SLOW TEST:18.526 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 
18:06:24.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 11 18:06:24.488: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:06:40.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-xkx48" for this suite. May 11 18:06:48.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:06:48.548: INFO: namespace: e2e-tests-init-container-xkx48, resource: bindings, ignored listing per whitelist May 11 18:06:48.582: INFO: namespace e2e-tests-init-container-xkx48 deletion completed in 8.312746895s • [SLOW TEST:24.282 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:06:48.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-f8jrm
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 11 18:06:49.553: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 11 18:07:22.729: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.79 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-f8jrm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 18:07:22.730: INFO: >>> kubeConfig: /root/.kube/config
I0511 18:07:22.768037 6 log.go:172] (0xc001e74420) (0xc0020d8460) Create stream
I0511 18:07:22.768064 6 log.go:172] (0xc001e74420) (0xc0020d8460) Stream added, broadcasting: 1
I0511 18:07:22.770447 6 log.go:172] (0xc001e74420) Reply frame received for 1
I0511 18:07:22.770474 6 log.go:172] (0xc001e74420) (0xc0015be6e0) Create stream
I0511 18:07:22.770481 6 log.go:172] (0xc001e74420) (0xc0015be6e0) Stream added, broadcasting: 3
I0511 18:07:22.771398 6 log.go:172] (0xc001e74420) Reply frame received for 3
I0511 18:07:22.771450 6 log.go:172] (0xc001e74420) (0xc0015be780) Create stream
I0511 18:07:22.771465 6 log.go:172] (0xc001e74420) (0xc0015be780) Stream added, broadcasting: 5
I0511 18:07:22.772316 6 log.go:172] (0xc001e74420) Reply frame received for 5
I0511 18:07:23.849507 6 log.go:172] (0xc001e74420) Data frame received for 3
I0511 18:07:23.849554 6 log.go:172] (0xc001e74420) Data frame received for 5
I0511 18:07:23.849579 6 log.go:172] (0xc0015be780) (5) Data frame handling
I0511 18:07:23.849622 6 log.go:172] (0xc0015be6e0) (3) Data frame handling
I0511 18:07:23.849677 6 log.go:172] (0xc0015be6e0) (3) Data frame sent
I0511 18:07:23.849698 6 log.go:172] (0xc001e74420) Data frame received for 3
I0511 18:07:23.849716 6 log.go:172] (0xc0015be6e0) (3) Data frame handling
I0511 18:07:23.855055 6 log.go:172] (0xc001e74420) Data frame received for 1
I0511 18:07:23.855101 6 log.go:172] (0xc0020d8460) (1) Data frame handling
I0511 18:07:23.855118 6 log.go:172] (0xc0020d8460) (1) Data frame sent
I0511 18:07:23.855133 6 log.go:172] (0xc001e74420) (0xc0020d8460) Stream removed, broadcasting: 1
I0511 18:07:23.855154 6 log.go:172] (0xc001e74420) Go away received
I0511 18:07:23.855310 6 log.go:172] (0xc001e74420) (0xc0020d8460) Stream removed, broadcasting: 1
I0511 18:07:23.855337 6 log.go:172] (0xc001e74420) (0xc0015be6e0) Stream removed, broadcasting: 3
I0511 18:07:23.855352 6 log.go:172] (0xc001e74420) (0xc0015be780) Stream removed, broadcasting: 5
May 11 18:07:23.855: INFO: Found all expected endpoints: [netserver-0]
May 11 18:07:23.858: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.90 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-f8jrm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 18:07:23.858: INFO: >>> kubeConfig: /root/.kube/config
I0511 18:07:23.884838 6 log.go:172] (0xc000db1080) (0xc001fd81e0) Create stream
I0511 18:07:23.884863 6 log.go:172] (0xc000db1080) (0xc001fd81e0) Stream added, broadcasting: 1
I0511 18:07:23.887770 6 log.go:172] (0xc000db1080) Reply frame received for 1
I0511 18:07:23.887809 6 log.go:172] (0xc000db1080) (0xc0021a2000) Create stream
I0511 18:07:23.887822 6 log.go:172] (0xc000db1080) (0xc0021a2000) Stream added, broadcasting: 3
I0511 18:07:23.888656 6 log.go:172] (0xc000db1080) Reply frame received for 3
I0511 18:07:23.888684 6 log.go:172] (0xc000db1080) (0xc00167c000) Create stream
I0511 18:07:23.888694 6 log.go:172] (0xc000db1080) (0xc00167c000) Stream added, broadcasting: 5
I0511 18:07:23.889759 6 log.go:172] (0xc000db1080) Reply frame received for 5
I0511 18:07:24.953486 6 log.go:172] (0xc000db1080) Data frame received for 5
I0511 18:07:24.953528 6 log.go:172] (0xc00167c000) (5) Data frame handling
I0511 18:07:24.953591 6 log.go:172] (0xc000db1080) Data frame received for 3
I0511 18:07:24.953615 6 log.go:172] (0xc0021a2000) (3) Data frame handling
I0511 18:07:24.953639 6 log.go:172] (0xc0021a2000) (3) Data frame sent
I0511 18:07:24.953663 6 log.go:172] (0xc000db1080) Data frame received for 3
I0511 18:07:24.953673 6 log.go:172] (0xc0021a2000) (3) Data frame handling
I0511 18:07:24.955216 6 log.go:172] (0xc000db1080) Data frame received for 1
I0511 18:07:24.955245 6 log.go:172] (0xc001fd81e0) (1) Data frame handling
I0511 18:07:24.955320 6 log.go:172] (0xc001fd81e0) (1) Data frame sent
I0511 18:07:24.955351 6 log.go:172] (0xc000db1080) (0xc001fd81e0) Stream removed, broadcasting: 1
I0511 18:07:24.955379 6 log.go:172] (0xc000db1080) Go away received
I0511 18:07:24.955504 6 log.go:172] (0xc000db1080) (0xc001fd81e0) Stream removed, broadcasting: 1
I0511 18:07:24.955528 6 log.go:172] (0xc000db1080) (0xc0021a2000) Stream removed, broadcasting: 3
I0511 18:07:24.955555 6 log.go:172] (0xc000db1080) (0xc00167c000) Stream removed, broadcasting: 5
May 11 18:07:24.955: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:07:24.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-f8jrm" for this suite.
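For readers following the ExecWithOptions entries above: the test execs a shell inside host-test-container-pod that sends the literal string hostName over UDP to each netserver pod (10.244.2.79 and 10.244.1.90, port 8081) and expects the pod's hostname back. A minimal sketch of that probe; only the blank-line filter is runnable without a live netserver pod:

```shell
# The probe as run inside the hostexec container (reference only -- it
# needs a live netserver pod to answer on UDP 8081):
#   echo 'hostName' | nc -w 1 -u 10.244.2.79 8081 | grep -v '^\s*$'
#
# The trailing grep drops empty/whitespace-only lines so a blank UDP
# reply is not mistaken for a found endpoint. Demonstrated locally with
# a canned reply:
printf 'netserver-0\n\n\n' | grep -v '^\s*$'
# prints: netserver-0
```

Note that `\s` in the grep pattern is a GNU grep extension; the hostexec image used by these tests runs GNU grep.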
May 11 18:07:51.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:07:51.856: INFO: namespace: e2e-tests-pod-network-test-f8jrm, resource: bindings, ignored listing per whitelist
May 11 18:07:52.815: INFO: namespace e2e-tests-pod-network-test-f8jrm deletion completed in 27.807574155s
• [SLOW TEST:64.233 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:07:52.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 11 18:07:55.312: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
May 11 18:07:55.600: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-jfkfd/daemonsets","resourceVersion":"9994306"},"items":null}
May 11 18:07:55.603: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-jfkfd/pods","resourceVersion":"9994306"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:07:55.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-jfkfd" for this suite.
May 11 18:08:02.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:08:02.141: INFO: namespace: e2e-tests-daemonsets-jfkfd, resource: bindings, ignored listing per whitelist
May 11 18:08:02.321: INFO: namespace e2e-tests-daemonsets-jfkfd deletion completed in 6.70371952s
S [SKIPPING] [9.505 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should rollback without unnecessary restarts [Conformance] [It]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 11 18:07:55.312: Requires at least 2 nodes (not -1)
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:08:02.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-5af270b4-93b2-11ea-b832-0242ac110018
STEP: Creating a pod to test consume secrets
May 11 18:08:03.292: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5afe24f9-93b2-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-h9rbd" to be "success or failure"
May 11 18:08:03.620: INFO: Pod "pod-projected-secrets-5afe24f9-93b2-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 327.235318ms
May 11 18:08:06.160: INFO: Pod "pod-projected-secrets-5afe24f9-93b2-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.868163774s
May 11 18:08:08.164: INFO: Pod "pod-projected-secrets-5afe24f9-93b2-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.871826377s
May 11 18:08:10.168: INFO: Pod "pod-projected-secrets-5afe24f9-93b2-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.876008236s
May 11 18:08:12.172: INFO: Pod "pod-projected-secrets-5afe24f9-93b2-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.879887656s
May 11 18:08:14.263: INFO: Pod "pod-projected-secrets-5afe24f9-93b2-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.970263721s
STEP: Saw pod success
May 11 18:08:14.263: INFO: Pod "pod-projected-secrets-5afe24f9-93b2-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:08:14.266: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-5afe24f9-93b2-11ea-b832-0242ac110018 container projected-secret-volume-test:
STEP: delete the pod
May 11 18:08:14.461: INFO: Waiting for pod pod-projected-secrets-5afe24f9-93b2-11ea-b832-0242ac110018 to disappear
May 11 18:08:14.477: INFO: Pod pod-projected-secrets-5afe24f9-93b2-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:08:14.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h9rbd" for this suite.
May 11 18:08:20.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:08:20.658: INFO: namespace: e2e-tests-projected-h9rbd, resource: bindings, ignored listing per whitelist
May 11 18:08:20.703: INFO: namespace e2e-tests-projected-h9rbd deletion completed in 6.222831434s
• [SLOW TEST:18.382 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:08:20.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 11 18:08:38.199: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 18:08:38.206: INFO: Pod pod-with-prestop-http-hook still exists
May 11 18:08:40.206: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 18:08:40.309: INFO: Pod pod-with-prestop-http-hook still exists
May 11 18:08:42.206: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 11 18:08:42.209: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:08:42.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-l5wt2" for this suite.
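For context on the pod-with-prestop-http-hook pod this test creates and tears down: a preStop hook of type httpGet is fired by the kubelet before the container is stopped, which is what the "check prestop hook" step verifies against the handler container. A hypothetical minimal manifest for such a pod; the image, handler path, and port below are illustrative assumptions, not values taken from this log:

```shell
# Hypothetical pod spec with a preStop httpGet lifecycle hook; could be
# applied to a real cluster with `kubectl apply -f -`. All field values
# other than the pod name are illustrative assumptions.
MANIFEST=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop-hook-fired
          port: 8080
EOF
)
echo "$MANIFEST"
```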
May 11 18:09:08.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:09:08.866: INFO: namespace: e2e-tests-container-lifecycle-hook-l5wt2, resource: bindings, ignored listing per whitelist
May 11 18:09:08.873: INFO: namespace e2e-tests-container-lifecycle-hook-l5wt2 deletion completed in 26.657226975s
• [SLOW TEST:48.170 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:09:08.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
May 11 18:09:09.914: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 11 18:09:09.921: INFO: Waiting for terminating namespaces to be deleted...
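The SchedulerPredicates test starting here labels both worker nodes, sums the CPU requests of the pods already running on each node, fills the remaining allocatable CPU with pause "filler" pods, and then submits one more pod whose request cannot fit on any node, expecting the FailedScheduling "Insufficient cpu" event recorded further on. A hypothetical shape of that overflow pod; the 1-CPU request is an illustrative value, not one taken from the log:

```shell
# Hypothetical "additional-pod" that cannot be scheduled once the filler
# pods have claimed each node's remaining allocatable CPU. The request
# and limit values are illustrative assumptions.
OVERFLOW_POD=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: additional-pod
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1"
      limits:
        cpu: "1"
EOF
)
echo "$OVERFLOW_POD"
```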
May 11 18:09:09.923: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 11 18:09:09.927: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 11 18:09:09.927: INFO: Container coredns ready: true, restart count 0 May 11 18:09:09.927: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 11 18:09:09.927: INFO: Container kube-proxy ready: true, restart count 0 May 11 18:09:09.927: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 11 18:09:09.927: INFO: Container kindnet-cni ready: true, restart count 0 May 11 18:09:09.927: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 11 18:09:09.932: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 11 18:09:09.932: INFO: Container kube-proxy ready: true, restart count 0 May 11 18:09:09.933: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 11 18:09:09.933: INFO: Container kindnet-cni ready: true, restart count 0 May 11 18:09:09.933: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 11 18:09:09.933: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 May 11 18:09:10.274: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker May 11 18:09:10.274: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 May 11 18:09:10.274: INFO: 
Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker May 11 18:09:10.274: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 May 11 18:09:10.274: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 May 11 18:09:10.274: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-83319713-93b2-11ea-b832-0242ac110018.160e0baba255865a], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-nrtvj/filler-pod-83319713-93b2-11ea-b832-0242ac110018 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-83319713-93b2-11ea-b832-0242ac110018.160e0babea4290c5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-83319713-93b2-11ea-b832-0242ac110018.160e0bac3e68d0c0], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-83319713-93b2-11ea-b832-0242ac110018.160e0bac5f555251], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-8332448d-93b2-11ea-b832-0242ac110018.160e0baba4b951fa], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-nrtvj/filler-pod-8332448d-93b2-11ea-b832-0242ac110018 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-8332448d-93b2-11ea-b832-0242ac110018.160e0bac4165ad44], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-8332448d-93b2-11ea-b832-0242ac110018.160e0bacae264552], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = 
[filler-pod-8332448d-93b2-11ea-b832-0242ac110018.160e0bacc4c55cdd], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.160e0bad11d2174e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:09:18.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-nrtvj" for this suite. May 11 18:09:31.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:09:31.177: INFO: namespace: e2e-tests-sched-pred-nrtvj, resource: bindings, ignored listing per whitelist May 11 18:09:31.236: INFO: namespace e2e-tests-sched-pred-nrtvj deletion completed in 12.982568956s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:22.363 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] 
Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:09:31.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 11 18:09:31.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-84l6l' May 11 18:09:32.975: INFO: stderr: "" May 11 18:09:32.975: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 18:09:32.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-84l6l' May 11 18:09:33.211: INFO: stderr: "" May 11 18:09:33.211: INFO: stdout: "update-demo-nautilus-dsmq8 update-demo-nautilus-pw4bh " May 11 18:09:33.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dsmq8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-84l6l' May 11 18:09:33.448: INFO: stderr: "" May 11 18:09:33.448: INFO: stdout: "" May 11 18:09:33.448: INFO: update-demo-nautilus-dsmq8 is created but not running May 11 18:09:38.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-84l6l' May 11 18:09:38.640: INFO: stderr: "" May 11 18:09:38.640: INFO: stdout: "update-demo-nautilus-dsmq8 update-demo-nautilus-pw4bh " May 11 18:09:38.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dsmq8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-84l6l' May 11 18:09:38.812: INFO: stderr: "" May 11 18:09:38.812: INFO: stdout: "" May 11 18:09:38.812: INFO: update-demo-nautilus-dsmq8 is created but not running May 11 18:09:43.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-84l6l' May 11 18:09:43.907: INFO: stderr: "" May 11 18:09:43.907: INFO: stdout: "update-demo-nautilus-dsmq8 update-demo-nautilus-pw4bh " May 11 18:09:43.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dsmq8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-84l6l' May 11 18:09:44.015: INFO: stderr: "" May 11 18:09:44.015: INFO: stdout: "true" May 11 18:09:44.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dsmq8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-84l6l' May 11 18:09:44.105: INFO: stderr: "" May 11 18:09:44.105: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:09:44.105: INFO: validating pod update-demo-nautilus-dsmq8 May 11 18:09:44.109: INFO: got data: { "image": "nautilus.jpg" } May 11 18:09:44.109: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:09:44.109: INFO: update-demo-nautilus-dsmq8 is verified up and running May 11 18:09:44.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pw4bh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-84l6l' May 11 18:09:44.213: INFO: stderr: "" May 11 18:09:44.213: INFO: stdout: "true" May 11 18:09:44.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pw4bh -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-84l6l' May 11 18:09:44.313: INFO: stderr: "" May 11 18:09:44.313: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:09:44.313: INFO: validating pod update-demo-nautilus-pw4bh May 11 18:09:44.320: INFO: got data: { "image": "nautilus.jpg" } May 11 18:09:44.320: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:09:44.320: INFO: update-demo-nautilus-pw4bh is verified up and running STEP: using delete to clean up resources May 11 18:09:44.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-84l6l' May 11 18:09:44.409: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 18:09:44.409: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 11 18:09:44.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-84l6l' May 11 18:09:44.524: INFO: stderr: "No resources found.\n" May 11 18:09:44.524: INFO: stdout: "" May 11 18:09:44.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-84l6l -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 18:09:44.621: INFO: stderr: "" May 11 18:09:44.621: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:09:44.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-84l6l" for this 
suite. May 11 18:09:54.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:09:54.868: INFO: namespace: e2e-tests-kubectl-84l6l, resource: bindings, ignored listing per whitelist May 11 18:09:54.902: INFO: namespace e2e-tests-kubectl-84l6l deletion completed in 10.278805114s • [SLOW TEST:23.666 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:09:54.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 11 18:09:55.426: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pphgs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pphgs/configmaps/e2e-watch-test-configmap-a,UID:9e16783f-93b2-11ea-99e8-0242ac110002,ResourceVersion:9994699,Generation:0,CreationTimestamp:2020-05-11 18:09:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 18:09:55.427: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pphgs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pphgs/configmaps/e2e-watch-test-configmap-a,UID:9e16783f-93b2-11ea-99e8-0242ac110002,ResourceVersion:9994699,Generation:0,CreationTimestamp:2020-05-11 18:09:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 11 18:10:05.823: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pphgs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pphgs/configmaps/e2e-watch-test-configmap-a,UID:9e16783f-93b2-11ea-99e8-0242ac110002,ResourceVersion:9994719,Generation:0,CreationTimestamp:2020-05-11 18:09:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 11 18:10:05.824: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pphgs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pphgs/configmaps/e2e-watch-test-configmap-a,UID:9e16783f-93b2-11ea-99e8-0242ac110002,ResourceVersion:9994719,Generation:0,CreationTimestamp:2020-05-11 18:09:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 11 18:10:16.140: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pphgs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pphgs/configmaps/e2e-watch-test-configmap-a,UID:9e16783f-93b2-11ea-99e8-0242ac110002,ResourceVersion:9994738,Generation:0,CreationTimestamp:2020-05-11 18:09:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 18:10:16.140: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pphgs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pphgs/configmaps/e2e-watch-test-configmap-a,UID:9e16783f-93b2-11ea-99e8-0242ac110002,ResourceVersion:9994738,Generation:0,CreationTimestamp:2020-05-11 18:09:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 11 18:10:26.148: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pphgs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pphgs/configmaps/e2e-watch-test-configmap-a,UID:9e16783f-93b2-11ea-99e8-0242ac110002,ResourceVersion:9994758,Generation:0,CreationTimestamp:2020-05-11 18:09:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 18:10:26.148: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-pphgs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pphgs/configmaps/e2e-watch-test-configmap-a,UID:9e16783f-93b2-11ea-99e8-0242ac110002,ResourceVersion:9994758,Generation:0,CreationTimestamp:2020-05-11 18:09:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 11 18:10:36.153: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-pphgs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pphgs/configmaps/e2e-watch-test-configmap-b,UID:b6611dec-93b2-11ea-99e8-0242ac110002,ResourceVersion:9994778,Generation:0,CreationTimestamp:2020-05-11 18:10:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 18:10:36.153: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-pphgs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pphgs/configmaps/e2e-watch-test-configmap-b,UID:b6611dec-93b2-11ea-99e8-0242ac110002,ResourceVersion:9994778,Generation:0,CreationTimestamp:2020-05-11 18:10:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 11 18:10:46.160: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-pphgs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pphgs/configmaps/e2e-watch-test-configmap-b,UID:b6611dec-93b2-11ea-99e8-0242ac110002,ResourceVersion:9994798,Generation:0,CreationTimestamp:2020-05-11 18:10:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 18:10:46.160: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-pphgs,SelfLink:/api/v1/namespaces/e2e-tests-watch-pphgs/configmaps/e2e-watch-test-configmap-b,UID:b6611dec-93b2-11ea-99e8-0242ac110002,ResourceVersion:9994798,Generation:0,CreationTimestamp:2020-05-11 18:10:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:10:56.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-pphgs" for this suite. 
May 11 18:11:02.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:11:02.366: INFO: namespace: e2e-tests-watch-pphgs, resource: bindings, ignored listing per whitelist May 11 18:11:02.396: INFO: namespace e2e-tests-watch-pphgs deletion completed in 6.15721234s • [SLOW TEST:67.493 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:11:02.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 18:11:02.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine 
--namespace=e2e-tests-kubectl-566sh' May 11 18:11:12.923: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 18:11:12.923: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 11 18:11:12.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-566sh' May 11 18:11:13.872: INFO: stderr: "" May 11 18:11:13.872: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:11:13.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-566sh" for this suite. 
May 11 18:11:22.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:11:22.070: INFO: namespace: e2e-tests-kubectl-566sh, resource: bindings, ignored listing per whitelist May 11 18:11:22.083: INFO: namespace e2e-tests-kubectl-566sh deletion completed in 8.186318498s • [SLOW TEST:19.687 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:11:22.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-d1fadc60-93b2-11ea-b832-0242ac110018 STEP: Creating a pod to test consume secrets May 11 18:11:22.503: INFO: Waiting up to 5m0s for pod "pod-secrets-d1fd2b80-93b2-11ea-b832-0242ac110018" in namespace "e2e-tests-secrets-dmqd6" to be "success or failure" May 11 18:11:22.633: INFO: Pod "pod-secrets-d1fd2b80-93b2-11ea-b832-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 130.311134ms May 11 18:11:24.639: INFO: Pod "pod-secrets-d1fd2b80-93b2-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13654561s May 11 18:11:26.730: INFO: Pod "pod-secrets-d1fd2b80-93b2-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227022698s May 11 18:11:28.891: INFO: Pod "pod-secrets-d1fd2b80-93b2-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.38831202s May 11 18:11:30.895: INFO: Pod "pod-secrets-d1fd2b80-93b2-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 8.392051469s May 11 18:11:32.899: INFO: Pod "pod-secrets-d1fd2b80-93b2-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.396201357s STEP: Saw pod success May 11 18:11:32.899: INFO: Pod "pod-secrets-d1fd2b80-93b2-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:11:32.902: INFO: Trying to get logs from node hunter-worker pod pod-secrets-d1fd2b80-93b2-11ea-b832-0242ac110018 container secret-volume-test: STEP: delete the pod May 11 18:11:33.270: INFO: Waiting for pod pod-secrets-d1fd2b80-93b2-11ea-b832-0242ac110018 to disappear May 11 18:11:33.364: INFO: Pod pod-secrets-d1fd2b80-93b2-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:11:33.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-dmqd6" for this suite. 
May 11 18:11:39.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:11:39.493: INFO: namespace: e2e-tests-secrets-dmqd6, resource: bindings, ignored listing per whitelist May 11 18:11:39.541: INFO: namespace e2e-tests-secrets-dmqd6 deletion completed in 6.174816557s • [SLOW TEST:17.458 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:11:39.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 11 18:11:40.325: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:11:44.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-8x7v4" for this suite. 
May 11 18:12:28.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:12:28.789: INFO: namespace: e2e-tests-pods-8x7v4, resource: bindings, ignored listing per whitelist May 11 18:12:28.821: INFO: namespace e2e-tests-pods-8x7v4 deletion completed in 44.126404938s • [SLOW TEST:49.280 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:12:28.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command May 11 18:12:29.433: INFO: Waiting up to 5m0s for pod "var-expansion-f9d5c298-93b2-11ea-b832-0242ac110018" in namespace "e2e-tests-var-expansion-cwpsh" to be "success or failure" May 11 18:12:29.481: INFO: Pod "var-expansion-f9d5c298-93b2-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 48.687602ms May 11 18:12:31.485: INFO: Pod "var-expansion-f9d5c298-93b2-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05251466s May 11 18:12:33.636: INFO: Pod "var-expansion-f9d5c298-93b2-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.20342357s May 11 18:12:35.639: INFO: Pod "var-expansion-f9d5c298-93b2-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.206864443s May 11 18:12:37.803: INFO: Pod "var-expansion-f9d5c298-93b2-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.370584705s STEP: Saw pod success May 11 18:12:37.803: INFO: Pod "var-expansion-f9d5c298-93b2-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:12:37.806: INFO: Trying to get logs from node hunter-worker pod var-expansion-f9d5c298-93b2-11ea-b832-0242ac110018 container dapi-container: STEP: delete the pod May 11 18:12:38.073: INFO: Waiting for pod var-expansion-f9d5c298-93b2-11ea-b832-0242ac110018 to disappear May 11 18:12:38.075: INFO: Pod var-expansion-f9d5c298-93b2-11ea-b832-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:12:38.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-cwpsh" for this suite. 
May 11 18:12:45.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:12:45.141: INFO: namespace: e2e-tests-var-expansion-cwpsh, resource: bindings, ignored listing per whitelist May 11 18:12:45.148: INFO: namespace e2e-tests-var-expansion-cwpsh deletion completed in 6.618735386s • [SLOW TEST:16.326 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:12:45.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-qr9zj STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-qr9zj STEP: Deleting pre-stop pod May 11 18:12:58.486: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:12:58.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-qr9zj" for this suite. May 11 18:13:34.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:13:34.603: INFO: namespace: e2e-tests-prestop-qr9zj, resource: bindings, ignored listing per whitelist May 11 18:13:34.622: INFO: namespace e2e-tests-prestop-qr9zj deletion completed in 36.12124565s • [SLOW TEST:49.474 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:13:34.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-20f0f04a-93b3-11ea-b832-0242ac110018 STEP: Creating a pod to test consume secrets May 11 18:13:34.961: INFO: Waiting up to 5m0s for pod "pod-secrets-20f16520-93b3-11ea-b832-0242ac110018" in namespace "e2e-tests-secrets-k2wsj" to be "success or failure" May 11 18:13:34.972: INFO: Pod "pod-secrets-20f16520-93b3-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.389572ms May 11 18:13:36.975: INFO: Pod "pod-secrets-20f16520-93b3-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013251477s May 11 18:13:38.979: INFO: Pod "pod-secrets-20f16520-93b3-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017875828s May 11 18:13:40.982: INFO: Pod "pod-secrets-20f16520-93b3-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020725834s STEP: Saw pod success May 11 18:13:40.982: INFO: Pod "pod-secrets-20f16520-93b3-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:13:40.984: INFO: Trying to get logs from node hunter-worker pod pod-secrets-20f16520-93b3-11ea-b832-0242ac110018 container secret-volume-test: STEP: delete the pod May 11 18:13:41.130: INFO: Waiting for pod pod-secrets-20f16520-93b3-11ea-b832-0242ac110018 to disappear May 11 18:13:41.141: INFO: Pod pod-secrets-20f16520-93b3-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:13:41.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-k2wsj" for this suite. 
May 11 18:13:47.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:13:47.250: INFO: namespace: e2e-tests-secrets-k2wsj, resource: bindings, ignored listing per whitelist May 11 18:13:47.307: INFO: namespace e2e-tests-secrets-k2wsj deletion completed in 6.088903163s • [SLOW TEST:12.685 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:13:47.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-2hp8t STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 18:13:47.388: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 11 18:14:20.059: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.86:8080/dial?request=hostName&protocol=udp&host=10.244.2.85&port=8081&tries=1'] 
Namespace:e2e-tests-pod-network-test-2hp8t PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 18:14:20.059: INFO: >>> kubeConfig: /root/.kube/config I0511 18:14:20.086237 6 log.go:172] (0xc000db1290) (0xc00182adc0) Create stream I0511 18:14:20.086255 6 log.go:172] (0xc000db1290) (0xc00182adc0) Stream added, broadcasting: 1 I0511 18:14:20.087592 6 log.go:172] (0xc000db1290) Reply frame received for 1 I0511 18:14:20.087619 6 log.go:172] (0xc000db1290) (0xc0021a2820) Create stream I0511 18:14:20.087628 6 log.go:172] (0xc000db1290) (0xc0021a2820) Stream added, broadcasting: 3 I0511 18:14:20.088533 6 log.go:172] (0xc000db1290) Reply frame received for 3 I0511 18:14:20.088549 6 log.go:172] (0xc000db1290) (0xc00182ae60) Create stream I0511 18:14:20.088556 6 log.go:172] (0xc000db1290) (0xc00182ae60) Stream added, broadcasting: 5 I0511 18:14:20.089735 6 log.go:172] (0xc000db1290) Reply frame received for 5 I0511 18:14:20.140928 6 log.go:172] (0xc000db1290) Data frame received for 3 I0511 18:14:20.140969 6 log.go:172] (0xc0021a2820) (3) Data frame handling I0511 18:14:20.141036 6 log.go:172] (0xc0021a2820) (3) Data frame sent I0511 18:14:20.141568 6 log.go:172] (0xc000db1290) Data frame received for 3 I0511 18:14:20.141640 6 log.go:172] (0xc0021a2820) (3) Data frame handling I0511 18:14:20.141792 6 log.go:172] (0xc000db1290) Data frame received for 5 I0511 18:14:20.141826 6 log.go:172] (0xc00182ae60) (5) Data frame handling I0511 18:14:20.143249 6 log.go:172] (0xc000db1290) Data frame received for 1 I0511 18:14:20.143286 6 log.go:172] (0xc00182adc0) (1) Data frame handling I0511 18:14:20.143322 6 log.go:172] (0xc00182adc0) (1) Data frame sent I0511 18:14:20.143345 6 log.go:172] (0xc000db1290) (0xc00182adc0) Stream removed, broadcasting: 1 I0511 18:14:20.143373 6 log.go:172] (0xc000db1290) Go away received I0511 18:14:20.143477 6 log.go:172] (0xc000db1290) (0xc00182adc0) Stream removed, 
broadcasting: 1 I0511 18:14:20.143500 6 log.go:172] (0xc000db1290) (0xc0021a2820) Stream removed, broadcasting: 3 I0511 18:14:20.143517 6 log.go:172] (0xc000db1290) (0xc00182ae60) Stream removed, broadcasting: 5 May 11 18:14:20.143: INFO: Waiting for endpoints: map[] May 11 18:14:20.146: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.86:8080/dial?request=hostName&protocol=udp&host=10.244.1.101&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-2hp8t PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 18:14:20.146: INFO: >>> kubeConfig: /root/.kube/config I0511 18:14:20.171520 6 log.go:172] (0xc001320580) (0xc0021a2b40) Create stream I0511 18:14:20.171562 6 log.go:172] (0xc001320580) (0xc0021a2b40) Stream added, broadcasting: 1 I0511 18:14:20.175704 6 log.go:172] (0xc001320580) Reply frame received for 1 I0511 18:14:20.175786 6 log.go:172] (0xc001320580) (0xc0021a2be0) Create stream I0511 18:14:20.175813 6 log.go:172] (0xc001320580) (0xc0021a2be0) Stream added, broadcasting: 3 I0511 18:14:20.176867 6 log.go:172] (0xc001320580) Reply frame received for 3 I0511 18:14:20.176895 6 log.go:172] (0xc001320580) (0xc001b6ac80) Create stream I0511 18:14:20.176902 6 log.go:172] (0xc001320580) (0xc001b6ac80) Stream added, broadcasting: 5 I0511 18:14:20.178085 6 log.go:172] (0xc001320580) Reply frame received for 5 I0511 18:14:20.231011 6 log.go:172] (0xc001320580) Data frame received for 3 I0511 18:14:20.231053 6 log.go:172] (0xc0021a2be0) (3) Data frame handling I0511 18:14:20.231087 6 log.go:172] (0xc0021a2be0) (3) Data frame sent I0511 18:14:20.231504 6 log.go:172] (0xc001320580) Data frame received for 3 I0511 18:14:20.231526 6 log.go:172] (0xc0021a2be0) (3) Data frame handling I0511 18:14:20.231938 6 log.go:172] (0xc001320580) Data frame received for 5 I0511 18:14:20.231964 6 log.go:172] (0xc001b6ac80) (5) Data frame handling I0511 18:14:20.233063 6 log.go:172] 
(0xc001320580) Data frame received for 1 I0511 18:14:20.233077 6 log.go:172] (0xc0021a2b40) (1) Data frame handling I0511 18:14:20.233086 6 log.go:172] (0xc0021a2b40) (1) Data frame sent I0511 18:14:20.233101 6 log.go:172] (0xc001320580) (0xc0021a2b40) Stream removed, broadcasting: 1 I0511 18:14:20.233360 6 log.go:172] (0xc001320580) Go away received I0511 18:14:20.233420 6 log.go:172] (0xc001320580) (0xc0021a2b40) Stream removed, broadcasting: 1 I0511 18:14:20.233443 6 log.go:172] (0xc001320580) (0xc0021a2be0) Stream removed, broadcasting: 3 I0511 18:14:20.233493 6 log.go:172] (0xc001320580) (0xc001b6ac80) Stream removed, broadcasting: 5 May 11 18:14:20.233: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:14:20.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-2hp8t" for this suite. May 11 18:14:50.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:14:50.391: INFO: namespace: e2e-tests-pod-network-test-2hp8t, resource: bindings, ignored listing per whitelist May 11 18:14:50.444: INFO: namespace e2e-tests-pod-network-test-2hp8t deletion completed in 30.207963395s • [SLOW TEST:63.137 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:14:50.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:14:54.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-j5gph" for this suite. 
May 11 18:15:46.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:15:46.845: INFO: namespace: e2e-tests-kubelet-test-j5gph, resource: bindings, ignored listing per whitelist May 11 18:15:46.889: INFO: namespace e2e-tests-kubelet-test-j5gph deletion completed in 52.178183568s • [SLOW TEST:56.443 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:15:46.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 11 18:15:51.974: INFO: Successfully updated pod "annotationupdate6fb29031-93b3-11ea-b832-0242ac110018" [AfterEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:15:54.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7l4wc" for this suite. May 11 18:16:20.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:16:21.121: INFO: namespace: e2e-tests-downward-api-7l4wc, resource: bindings, ignored listing per whitelist May 11 18:16:21.125: INFO: namespace e2e-tests-downward-api-7l4wc deletion completed in 26.625763231s • [SLOW TEST:34.236 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:16:21.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: 
running the image docker.io/library/nginx:1.14-alpine
May 11 18:16:21.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jvgcw'
May 11 18:16:21.645: INFO: stderr: ""
May 11 18:16:21.645: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
May 11 18:16:31.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jvgcw -o json'
May 11 18:16:31.889: INFO: stderr: ""
May 11 18:16:31.889: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-11T18:16:21Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-jvgcw\",\n \"resourceVersion\": \"9995734\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-jvgcw/pods/e2e-test-nginx-pod\",\n \"uid\": \"844dadf2-93b3-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-2fftc\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n
\"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-2fftc\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-2fftc\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T18:16:22Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T18:16:30Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T18:16:30Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T18:16:21Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://e8805946c8caef1b944204f603bf446e06c5c3f7ea44ea9457294566fd9c5e08\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-11T18:16:29Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.88\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-11T18:16:22Z\"\n }\n}\n" STEP: replace the image in the pod May 11 18:16:31.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-jvgcw' May 11 18:16:33.632: INFO: stderr: "" May 11 18:16:33.632: INFO: stdout: "pod/e2e-test-nginx-pod 
replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
May 11 18:16:33.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jvgcw'
May 11 18:16:41.968: INFO: stderr: ""
May 11 18:16:41.968: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:16:41.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jvgcw" for this suite.
May 11 18:16:52.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:16:53.188: INFO: namespace: e2e-tests-kubectl-jvgcw, resource: bindings, ignored listing per whitelist
May 11 18:16:53.217: INFO: namespace e2e-tests-kubectl-jvgcw deletion completed in 11.179033645s
• [SLOW TEST:32.092 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May
11 18:16:53.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 11 18:16:54.289: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97ac2c88-93b3-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-vvtpj" to be "success or failure"
May 11 18:16:54.514: INFO: Pod "downwardapi-volume-97ac2c88-93b3-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 225.084283ms
May 11 18:16:56.518: INFO: Pod "downwardapi-volume-97ac2c88-93b3-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229245409s
May 11 18:16:58.522: INFO: Pod "downwardapi-volume-97ac2c88-93b3-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233686742s
May 11 18:17:00.569: INFO: Pod "downwardapi-volume-97ac2c88-93b3-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.280224399s
May 11 18:17:02.573: INFO: Pod "downwardapi-volume-97ac2c88-93b3-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.284155187s
May 11 18:17:04.576: INFO: Pod "downwardapi-volume-97ac2c88-93b3-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 10.287216055s
STEP: Saw pod success
May 11 18:17:04.576: INFO: Pod "downwardapi-volume-97ac2c88-93b3-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:17:04.579: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-97ac2c88-93b3-11ea-b832-0242ac110018 container client-container: 
STEP: delete the pod
May 11 18:17:04.764: INFO: Waiting for pod downwardapi-volume-97ac2c88-93b3-11ea-b832-0242ac110018 to disappear
May 11 18:17:04.799: INFO: Pod downwardapi-volume-97ac2c88-93b3-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:17:04.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vvtpj" for this suite.
May 11 18:17:10.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:17:11.064: INFO: namespace: e2e-tests-projected-vvtpj, resource: bindings, ignored listing per whitelist
May 11 18:17:11.077: INFO: namespace e2e-tests-projected-vvtpj deletion completed in 6.27489242s
• [SLOW TEST:17.860 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:17:11.078: INFO: >>> kubeConfig:
/root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 11 18:17:19.956: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a1d95ea4-93b3-11ea-b832-0242ac110018"
May 11 18:17:19.956: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a1d95ea4-93b3-11ea-b832-0242ac110018" in namespace "e2e-tests-pods-8wn2b" to be "terminated due to deadline exceeded"
May 11 18:17:20.052: INFO: Pod "pod-update-activedeadlineseconds-a1d95ea4-93b3-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 95.929684ms
May 11 18:17:22.148: INFO: Pod "pod-update-activedeadlineseconds-a1d95ea4-93b3-11ea-b832-0242ac110018": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.192658059s
May 11 18:17:22.148: INFO: Pod "pod-update-activedeadlineseconds-a1d95ea4-93b3-11ea-b832-0242ac110018" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:17:22.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8wn2b" for this suite.
May 11 18:17:35.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:17:35.155: INFO: namespace: e2e-tests-pods-8wn2b, resource: bindings, ignored listing per whitelist
May 11 18:17:35.197: INFO: namespace e2e-tests-pods-8wn2b deletion completed in 12.534502665s
• [SLOW TEST:24.120 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:17:35.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b06154b8-93b3-11ea-b832-0242ac110018
STEP: Creating a pod to test consume secrets
May 11 18:17:35.658: INFO: Waiting up to 5m0s for pod "pod-secrets-b061a6f7-93b3-11ea-b832-0242ac110018" in namespace "e2e-tests-secrets-4gtgg" to be "success or failure"
May 11 18:17:35.850: INFO: Pod "pod-secrets-b061a6f7-93b3-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false.
Elapsed: 192.574295ms
May 11 18:17:37.906: INFO: Pod "pod-secrets-b061a6f7-93b3-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248701699s
May 11 18:17:39.928: INFO: Pod "pod-secrets-b061a6f7-93b3-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270225706s
May 11 18:17:42.096: INFO: Pod "pod-secrets-b061a6f7-93b3-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43789996s
May 11 18:17:44.228: INFO: Pod "pod-secrets-b061a6f7-93b3-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 8.569965933s
May 11 18:17:46.231: INFO: Pod "pod-secrets-b061a6f7-93b3-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.572834367s
STEP: Saw pod success
May 11 18:17:46.231: INFO: Pod "pod-secrets-b061a6f7-93b3-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:17:46.232: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-b061a6f7-93b3-11ea-b832-0242ac110018 container secret-env-test: 
STEP: delete the pod
May 11 18:17:46.472: INFO: Waiting for pod pod-secrets-b061a6f7-93b3-11ea-b832-0242ac110018 to disappear
May 11 18:17:46.898: INFO: Pod pod-secrets-b061a6f7-93b3-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:17:46.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4gtgg" for this suite.
May 11 18:17:55.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:17:55.138: INFO: namespace: e2e-tests-secrets-4gtgg, resource: bindings, ignored listing per whitelist
May 11 18:17:55.174: INFO: namespace e2e-tests-secrets-4gtgg deletion completed in 8.273322806s
• [SLOW TEST:19.977 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:17:55.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 11 18:18:00.232: INFO: Waiting up to 5m0s for pod "client-envvars-bf0f5cdb-93b3-11ea-b832-0242ac110018" in namespace "e2e-tests-pods-tvbx4" to be "success or failure"
May 11 18:18:00.256: INFO: Pod "client-envvars-bf0f5cdb-93b3-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false.
Elapsed: 23.556143ms
May 11 18:18:02.618: INFO: Pod "client-envvars-bf0f5cdb-93b3-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.385800761s
May 11 18:18:04.713: INFO: Pod "client-envvars-bf0f5cdb-93b3-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481011308s
May 11 18:18:06.717: INFO: Pod "client-envvars-bf0f5cdb-93b3-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.485282616s
STEP: Saw pod success
May 11 18:18:06.717: INFO: Pod "client-envvars-bf0f5cdb-93b3-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:18:06.720: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-bf0f5cdb-93b3-11ea-b832-0242ac110018 container env3cont: 
STEP: delete the pod
May 11 18:18:06.895: INFO: Waiting for pod client-envvars-bf0f5cdb-93b3-11ea-b832-0242ac110018 to disappear
May 11 18:18:06.945: INFO: Pod client-envvars-bf0f5cdb-93b3-11ea-b832-0242ac110018 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:18:06.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-tvbx4" for this suite.
May 11 18:19:05.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:19:05.067: INFO: namespace: e2e-tests-pods-tvbx4, resource: bindings, ignored listing per whitelist
May 11 18:19:05.130: INFO: namespace e2e-tests-pods-tvbx4 deletion completed in 58.142823121s
• [SLOW TEST:69.956 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:19:05.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:19:09.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-vntgl" for this suite.
May 11 18:19:53.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:19:53.488: INFO: namespace: e2e-tests-kubelet-test-vntgl, resource: bindings, ignored listing per whitelist
May 11 18:19:53.521: INFO: namespace e2e-tests-kubelet-test-vntgl deletion completed in 44.229134425s
• [SLOW TEST:48.390 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:19:53.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-02eed925-93b4-11ea-b832-0242ac110018
STEP: Creating a pod to test consume secrets
May 11 18:19:54.171: INFO: Waiting up to 5m0s for pod "pod-secrets-02f5ba53-93b4-11ea-b832-0242ac110018" in namespace "e2e-tests-secrets-sb9hs" to be "success or failure"
May 11 18:19:54.284: INFO: Pod "pod-secrets-02f5ba53-93b4-11ea-b832-0242ac110018": Phase="Pending",
Reason="", readiness=false. Elapsed: 112.801433ms
May 11 18:19:56.440: INFO: Pod "pod-secrets-02f5ba53-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268458201s
May 11 18:19:58.442: INFO: Pod "pod-secrets-02f5ba53-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271337936s
May 11 18:20:00.445: INFO: Pod "pod-secrets-02f5ba53-93b4-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.273929442s
May 11 18:20:02.449: INFO: Pod "pod-secrets-02f5ba53-93b4-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.277467812s
STEP: Saw pod success
May 11 18:20:02.449: INFO: Pod "pod-secrets-02f5ba53-93b4-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:20:02.451: INFO: Trying to get logs from node hunter-worker pod pod-secrets-02f5ba53-93b4-11ea-b832-0242ac110018 container secret-volume-test: 
STEP: delete the pod
May 11 18:20:02.531: INFO: Waiting for pod pod-secrets-02f5ba53-93b4-11ea-b832-0242ac110018 to disappear
May 11 18:20:02.679: INFO: Pod pod-secrets-02f5ba53-93b4-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:20:02.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-sb9hs" for this suite.
May 11 18:20:08.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:20:08.723: INFO: namespace: e2e-tests-secrets-sb9hs, resource: bindings, ignored listing per whitelist
May 11 18:20:08.766: INFO: namespace e2e-tests-secrets-sb9hs deletion completed in 6.083903014s
• [SLOW TEST:15.245 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:20:08.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-0bc1fac8-93b4-11ea-b832-0242ac110018
STEP: Creating a pod to test consume secrets
May 11 18:20:08.911: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0bc2bd05-93b4-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-r7hzr" to be "success or failure"
May 11 18:20:08.923: INFO: Pod "pod-projected-secrets-0bc2bd05-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false.
Elapsed: 11.647949ms
May 11 18:20:11.230: INFO: Pod "pod-projected-secrets-0bc2bd05-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318770161s
May 11 18:20:13.571: INFO: Pod "pod-projected-secrets-0bc2bd05-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.65970611s
May 11 18:20:15.661: INFO: Pod "pod-projected-secrets-0bc2bd05-93b4-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.74924886s
May 11 18:20:17.781: INFO: Pod "pod-projected-secrets-0bc2bd05-93b4-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.86995543s
STEP: Saw pod success
May 11 18:20:17.781: INFO: Pod "pod-projected-secrets-0bc2bd05-93b4-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:20:17.784: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-0bc2bd05-93b4-11ea-b832-0242ac110018 container projected-secret-volume-test: 
STEP: delete the pod
May 11 18:20:17.871: INFO: Waiting for pod pod-projected-secrets-0bc2bd05-93b4-11ea-b832-0242ac110018 to disappear
May 11 18:20:18.044: INFO: Pod pod-projected-secrets-0bc2bd05-93b4-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:20:18.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r7hzr" for this suite.
May 11 18:20:26.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:20:26.115: INFO: namespace: e2e-tests-projected-r7hzr, resource: bindings, ignored listing per whitelist
May 11 18:20:26.153: INFO: namespace e2e-tests-projected-r7hzr deletion completed in 8.106187209s
• [SLOW TEST:17.387 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:20:26.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 11 18:20:26.428: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1635b3e3-93b4-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-tsjjx" to be "success or failure"
May 11 18:20:26.459: INFO: Pod "downwardapi-volume-1635b3e3-93b4-11ea-b832-0242ac110018": Phase="Pending",
Reason="", readiness=false. Elapsed: 30.811539ms
May 11 18:20:28.463: INFO: Pod "downwardapi-volume-1635b3e3-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034926073s
May 11 18:20:30.467: INFO: Pod "downwardapi-volume-1635b3e3-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038828824s
May 11 18:20:32.616: INFO: Pod "downwardapi-volume-1635b3e3-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.187338449s
May 11 18:20:34.902: INFO: Pod "downwardapi-volume-1635b3e3-93b4-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.473391227s
STEP: Saw pod success
May 11 18:20:34.902: INFO: Pod "downwardapi-volume-1635b3e3-93b4-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:20:35.075: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1635b3e3-93b4-11ea-b832-0242ac110018 container client-container: 
STEP: delete the pod
May 11 18:20:35.263: INFO: Waiting for pod downwardapi-volume-1635b3e3-93b4-11ea-b832-0242ac110018 to disappear
May 11 18:20:35.296: INFO: Pod downwardapi-volume-1635b3e3-93b4-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:20:35.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tsjjx" for this suite.
May 11 18:20:47.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:20:47.445: INFO: namespace: e2e-tests-projected-tsjjx, resource: bindings, ignored listing per whitelist
May 11 18:20:47.466: INFO: namespace e2e-tests-projected-tsjjx deletion completed in 12.166600165s
• [SLOW TEST:21.313 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:20:47.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 11 18:20:48.578: INFO: Waiting up to 5m0s for pod "downwardapi-volume-233bf9a5-93b4-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-4fd4k" to be "success or failure"
May 11 18:20:48.775: INFO: Pod "downwardapi-volume-233bf9a5-93b4-11ea-b832-0242ac110018":
Phase="Pending", Reason="", readiness=false. Elapsed: 196.429661ms May 11 18:20:50.778: INFO: Pod "downwardapi-volume-233bf9a5-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200174195s May 11 18:20:52.781: INFO: Pod "downwardapi-volume-233bf9a5-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203077155s May 11 18:20:55.069: INFO: Pod "downwardapi-volume-233bf9a5-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.490754586s May 11 18:20:57.073: INFO: Pod "downwardapi-volume-233bf9a5-93b4-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.494492594s STEP: Saw pod success May 11 18:20:57.073: INFO: Pod "downwardapi-volume-233bf9a5-93b4-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:20:57.076: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-233bf9a5-93b4-11ea-b832-0242ac110018 container client-container: STEP: delete the pod May 11 18:20:57.133: INFO: Waiting for pod downwardapi-volume-233bf9a5-93b4-11ea-b832-0242ac110018 to disappear May 11 18:20:57.242: INFO: Pod downwardapi-volume-233bf9a5-93b4-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:20:57.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4fd4k" for this suite. 
May 11 18:21:03.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:21:03.430: INFO: namespace: e2e-tests-projected-4fd4k, resource: bindings, ignored listing per whitelist May 11 18:21:03.467: INFO: namespace e2e-tests-projected-4fd4k deletion completed in 6.194448473s • [SLOW TEST:16.000 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:21:03.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 11 18:21:03.617: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wp878,SelfLink:/api/v1/namespaces/e2e-tests-watch-wp878/configmaps/e2e-watch-test-label-changed,UID:2c5c0d5a-93b4-11ea-99e8-0242ac110002,ResourceVersion:9996503,Generation:0,CreationTimestamp:2020-05-11 18:21:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 18:21:03.617: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wp878,SelfLink:/api/v1/namespaces/e2e-tests-watch-wp878/configmaps/e2e-watch-test-label-changed,UID:2c5c0d5a-93b4-11ea-99e8-0242ac110002,ResourceVersion:9996504,Generation:0,CreationTimestamp:2020-05-11 18:21:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 11 18:21:03.617: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wp878,SelfLink:/api/v1/namespaces/e2e-tests-watch-wp878/configmaps/e2e-watch-test-label-changed,UID:2c5c0d5a-93b4-11ea-99e8-0242ac110002,ResourceVersion:9996505,Generation:0,CreationTimestamp:2020-05-11 18:21:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 11 18:21:13.777: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wp878,SelfLink:/api/v1/namespaces/e2e-tests-watch-wp878/configmaps/e2e-watch-test-label-changed,UID:2c5c0d5a-93b4-11ea-99e8-0242ac110002,ResourceVersion:9996526,Generation:0,CreationTimestamp:2020-05-11 18:21:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 18:21:13.777: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wp878,SelfLink:/api/v1/namespaces/e2e-tests-watch-wp878/configmaps/e2e-watch-test-label-changed,UID:2c5c0d5a-93b4-11ea-99e8-0242ac110002,ResourceVersion:9996527,Generation:0,CreationTimestamp:2020-05-11 18:21:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 11 18:21:13.778: 
INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-wp878,SelfLink:/api/v1/namespaces/e2e-tests-watch-wp878/configmaps/e2e-watch-test-label-changed,UID:2c5c0d5a-93b4-11ea-99e8-0242ac110002,ResourceVersion:9996528,Generation:0,CreationTimestamp:2020-05-11 18:21:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:21:13.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-wp878" for this suite. May 11 18:21:20.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:21:20.028: INFO: namespace: e2e-tests-watch-wp878, resource: bindings, ignored listing per whitelist May 11 18:21:20.070: INFO: namespace e2e-tests-watch-wp878 deletion completed in 6.209708775s • [SLOW TEST:16.603 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:21:20.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:22:20.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-wt7b4" for this suite. May 11 18:22:43.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:22:43.819: INFO: namespace: e2e-tests-container-probe-wt7b4, resource: bindings, ignored listing per whitelist May 11 18:22:44.574: INFO: namespace e2e-tests-container-probe-wt7b4 deletion completed in 24.254301135s • [SLOW TEST:84.504 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage 
collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:22:44.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0511 18:22:59.168791 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 18:22:59.168: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:22:59.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-55q2n" for this suite. May 11 18:23:07.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:23:07.324: INFO: namespace: e2e-tests-gc-55q2n, resource: bindings, ignored listing per whitelist May 11 18:23:07.349: INFO: namespace e2e-tests-gc-55q2n deletion completed in 8.17518348s • [SLOW TEST:22.774 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:23:07.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy May 11 18:23:07.918: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy 
--unix-socket=/tmp/kubectl-proxy-unix353514484/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:23:07.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ptb8k" for this suite. May 11 18:23:14.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:23:14.412: INFO: namespace: e2e-tests-kubectl-ptb8k, resource: bindings, ignored listing per whitelist May 11 18:23:14.450: INFO: namespace e2e-tests-kubectl-ptb8k deletion completed in 6.174288109s • [SLOW TEST:7.101 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:23:14.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 11 
18:23:17.408: INFO: Waiting up to 5m0s for pod "pod-7bf1668e-93b4-11ea-b832-0242ac110018" in namespace "e2e-tests-emptydir-9v6lp" to be "success or failure" May 11 18:23:17.450: INFO: Pod "pod-7bf1668e-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 41.948259ms May 11 18:23:19.577: INFO: Pod "pod-7bf1668e-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168802471s May 11 18:23:21.585: INFO: Pod "pod-7bf1668e-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176410376s May 11 18:23:23.664: INFO: Pod "pod-7bf1668e-93b4-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.255824769s STEP: Saw pod success May 11 18:23:23.664: INFO: Pod "pod-7bf1668e-93b4-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:23:23.667: INFO: Trying to get logs from node hunter-worker2 pod pod-7bf1668e-93b4-11ea-b832-0242ac110018 container test-container: STEP: delete the pod May 11 18:23:23.692: INFO: Waiting for pod pod-7bf1668e-93b4-11ea-b832-0242ac110018 to disappear May 11 18:23:23.892: INFO: Pod pod-7bf1668e-93b4-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:23:23.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9v6lp" for this suite. 
May 11 18:23:30.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:23:30.233: INFO: namespace: e2e-tests-emptydir-9v6lp, resource: bindings, ignored listing per whitelist May 11 18:23:30.288: INFO: namespace e2e-tests-emptydir-9v6lp deletion completed in 6.392411284s • [SLOW TEST:15.838 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:23:30.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-83d77b88-93b4-11ea-b832-0242ac110018 STEP: Creating a pod to test consume configMaps May 11 18:23:30.391: INFO: Waiting up to 5m0s for pod "pod-configmaps-83dbbe3a-93b4-11ea-b832-0242ac110018" in namespace "e2e-tests-configmap-7lrp6" to be "success or failure" May 11 18:23:30.394: INFO: Pod "pod-configmaps-83dbbe3a-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.46493ms May 11 18:23:32.521: INFO: Pod "pod-configmaps-83dbbe3a-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129505182s May 11 18:23:34.524: INFO: Pod "pod-configmaps-83dbbe3a-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133188939s May 11 18:23:36.757: INFO: Pod "pod-configmaps-83dbbe3a-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.366412021s May 11 18:23:39.688: INFO: Pod "pod-configmaps-83dbbe3a-93b4-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.297425646s STEP: Saw pod success May 11 18:23:39.688: INFO: Pod "pod-configmaps-83dbbe3a-93b4-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:23:39.691: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-83dbbe3a-93b4-11ea-b832-0242ac110018 container configmap-volume-test: STEP: delete the pod May 11 18:23:40.826: INFO: Waiting for pod pod-configmaps-83dbbe3a-93b4-11ea-b832-0242ac110018 to disappear May 11 18:23:40.892: INFO: Pod pod-configmaps-83dbbe3a-93b4-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:23:40.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7lrp6" for this suite. 
May 11 18:23:47.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:23:47.492: INFO: namespace: e2e-tests-configmap-7lrp6, resource: bindings, ignored listing per whitelist May 11 18:23:47.496: INFO: namespace e2e-tests-configmap-7lrp6 deletion completed in 6.598478161s • [SLOW TEST:17.208 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:23:47.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 11 18:24:04.004: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 18:24:04.483: INFO: Pod pod-with-poststart-http-hook still exists May 11 18:24:06.483: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 18:24:06.488: INFO: Pod pod-with-poststart-http-hook still exists May 11 18:24:08.483: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 18:24:08.503: INFO: Pod pod-with-poststart-http-hook still exists May 11 18:24:10.483: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 18:24:11.767: INFO: Pod pod-with-poststart-http-hook still exists May 11 18:24:12.483: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 18:24:12.618: INFO: Pod pod-with-poststart-http-hook still exists May 11 18:24:14.483: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 18:24:14.582: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:24:14.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-vdggt" for this suite. 
May 11 18:24:36.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:24:36.802: INFO: namespace: e2e-tests-container-lifecycle-hook-vdggt, resource: bindings, ignored listing per whitelist May 11 18:24:36.809: INFO: namespace e2e-tests-container-lifecycle-hook-vdggt deletion completed in 22.223468703s • [SLOW TEST:49.313 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:24:36.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 11 18:24:44.728: INFO: 9 pods remaining May 11 18:24:44.728: INFO: 0 pods has nil DeletionTimestamp May 11 18:24:44.728: INFO: May 11 18:24:45.374: INFO: 0 pods remaining May 11 18:24:45.374: INFO: 0 
pods has nil DeletionTimestamp May 11 18:24:45.374: INFO: May 11 18:24:47.045: INFO: 0 pods remaining May 11 18:24:47.045: INFO: 0 pods has nil DeletionTimestamp May 11 18:24:47.045: INFO: STEP: Gathering metrics W0511 18:24:48.046318 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 18:24:48.046: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:24:48.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-p85j8" for this suite. 
May 11 18:25:00.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:25:00.236: INFO: namespace: e2e-tests-gc-p85j8, resource: bindings, ignored listing per whitelist May 11 18:25:00.244: INFO: namespace e2e-tests-gc-p85j8 deletion completed in 12.194751714s • [SLOW TEST:23.435 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:25:00.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-b99c9065-93b4-11ea-b832-0242ac110018 STEP: Creating a pod to test consume secrets May 11 18:25:00.819: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b9a3c151-93b4-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-564xx" to be "success or failure" May 11 18:25:01.133: INFO: Pod "pod-projected-secrets-b9a3c151-93b4-11ea-b832-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 314.457825ms May 11 18:25:03.137: INFO: Pod "pod-projected-secrets-b9a3c151-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318689263s May 11 18:25:05.422: INFO: Pod "pod-projected-secrets-b9a3c151-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.603262984s May 11 18:25:07.821: INFO: Pod "pod-projected-secrets-b9a3c151-93b4-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 7.002727225s May 11 18:25:09.825: INFO: Pod "pod-projected-secrets-b9a3c151-93b4-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 9.006509194s May 11 18:25:11.829: INFO: Pod "pod-projected-secrets-b9a3c151-93b4-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.010538516s STEP: Saw pod success May 11 18:25:11.829: INFO: Pod "pod-projected-secrets-b9a3c151-93b4-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:25:11.831: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-b9a3c151-93b4-11ea-b832-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 11 18:25:12.027: INFO: Waiting for pod pod-projected-secrets-b9a3c151-93b4-11ea-b832-0242ac110018 to disappear May 11 18:25:12.506: INFO: Pod pod-projected-secrets-b9a3c151-93b4-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:25:12.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-564xx" for this suite. 
May 11 18:25:19.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:25:19.223: INFO: namespace: e2e-tests-projected-564xx, resource: bindings, ignored listing per whitelist
May 11 18:25:19.227: INFO: namespace e2e-tests-projected-564xx deletion completed in 6.147201175s
• [SLOW TEST:18.983 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:25:19.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-tsfwk
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating
a new StatefulSet May 11 18:25:19.604: INFO: Found 0 stateful pods, waiting for 3 May 11 18:25:29.806: INFO: Found 2 stateful pods, waiting for 3 May 11 18:25:39.609: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 18:25:39.609: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 18:25:39.609: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 11 18:25:39.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tsfwk ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 18:25:40.136: INFO: stderr: "I0511 18:25:39.810940 1798 log.go:172] (0xc000138160) (0xc0008ea500) Create stream\nI0511 18:25:39.811009 1798 log.go:172] (0xc000138160) (0xc0008ea500) Stream added, broadcasting: 1\nI0511 18:25:39.813485 1798 log.go:172] (0xc000138160) Reply frame received for 1\nI0511 18:25:39.813540 1798 log.go:172] (0xc000138160) (0xc00067a000) Create stream\nI0511 18:25:39.813557 1798 log.go:172] (0xc000138160) (0xc00067a000) Stream added, broadcasting: 3\nI0511 18:25:39.814284 1798 log.go:172] (0xc000138160) Reply frame received for 3\nI0511 18:25:39.814309 1798 log.go:172] (0xc000138160) (0xc00067a140) Create stream\nI0511 18:25:39.814318 1798 log.go:172] (0xc000138160) (0xc00067a140) Stream added, broadcasting: 5\nI0511 18:25:39.815111 1798 log.go:172] (0xc000138160) Reply frame received for 5\nI0511 18:25:40.131101 1798 log.go:172] (0xc000138160) Data frame received for 5\nI0511 18:25:40.131146 1798 log.go:172] (0xc00067a140) (5) Data frame handling\nI0511 18:25:40.131193 1798 log.go:172] (0xc000138160) Data frame received for 3\nI0511 18:25:40.131256 1798 log.go:172] (0xc00067a000) (3) Data frame handling\nI0511 18:25:40.131315 1798 log.go:172] (0xc00067a000) (3) Data frame sent\nI0511 18:25:40.131341 1798 log.go:172] (0xc000138160) Data frame received for 
3\nI0511 18:25:40.131354 1798 log.go:172] (0xc00067a000) (3) Data frame handling\nI0511 18:25:40.131847 1798 log.go:172] (0xc000138160) Data frame received for 1\nI0511 18:25:40.131871 1798 log.go:172] (0xc0008ea500) (1) Data frame handling\nI0511 18:25:40.131917 1798 log.go:172] (0xc0008ea500) (1) Data frame sent\nI0511 18:25:40.131940 1798 log.go:172] (0xc000138160) (0xc0008ea500) Stream removed, broadcasting: 1\nI0511 18:25:40.131955 1798 log.go:172] (0xc000138160) Go away received\nI0511 18:25:40.132187 1798 log.go:172] (0xc000138160) (0xc0008ea500) Stream removed, broadcasting: 1\nI0511 18:25:40.132209 1798 log.go:172] (0xc000138160) (0xc00067a000) Stream removed, broadcasting: 3\nI0511 18:25:40.132223 1798 log.go:172] (0xc000138160) (0xc00067a140) Stream removed, broadcasting: 5\n" May 11 18:25:40.136: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 18:25:40.136: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 11 18:25:50.959: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 11 18:26:01.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tsfwk ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:26:01.997: INFO: stderr: "I0511 18:26:01.931785 1819 log.go:172] (0xc000138840) (0xc000566640) Create stream\nI0511 18:26:01.931846 1819 log.go:172] (0xc000138840) (0xc000566640) Stream added, broadcasting: 1\nI0511 18:26:01.933982 1819 log.go:172] (0xc000138840) Reply frame received for 1\nI0511 18:26:01.934030 1819 log.go:172] (0xc000138840) (0xc0005666e0) Create stream\nI0511 18:26:01.934041 1819 log.go:172] (0xc000138840) (0xc0005666e0) Stream added, 
broadcasting: 3\nI0511 18:26:01.935061 1819 log.go:172] (0xc000138840) Reply frame received for 3\nI0511 18:26:01.935102 1819 log.go:172] (0xc000138840) (0xc0005eedc0) Create stream\nI0511 18:26:01.935115 1819 log.go:172] (0xc000138840) (0xc0005eedc0) Stream added, broadcasting: 5\nI0511 18:26:01.935865 1819 log.go:172] (0xc000138840) Reply frame received for 5\nI0511 18:26:01.991720 1819 log.go:172] (0xc000138840) Data frame received for 5\nI0511 18:26:01.991762 1819 log.go:172] (0xc0005eedc0) (5) Data frame handling\nI0511 18:26:01.991793 1819 log.go:172] (0xc000138840) Data frame received for 3\nI0511 18:26:01.991817 1819 log.go:172] (0xc0005666e0) (3) Data frame handling\nI0511 18:26:01.991839 1819 log.go:172] (0xc0005666e0) (3) Data frame sent\nI0511 18:26:01.991851 1819 log.go:172] (0xc000138840) Data frame received for 3\nI0511 18:26:01.991859 1819 log.go:172] (0xc0005666e0) (3) Data frame handling\nI0511 18:26:01.992981 1819 log.go:172] (0xc000138840) Data frame received for 1\nI0511 18:26:01.993015 1819 log.go:172] (0xc000566640) (1) Data frame handling\nI0511 18:26:01.993032 1819 log.go:172] (0xc000566640) (1) Data frame sent\nI0511 18:26:01.993054 1819 log.go:172] (0xc000138840) (0xc000566640) Stream removed, broadcasting: 1\nI0511 18:26:01.993076 1819 log.go:172] (0xc000138840) Go away received\nI0511 18:26:01.993427 1819 log.go:172] (0xc000138840) (0xc000566640) Stream removed, broadcasting: 1\nI0511 18:26:01.993454 1819 log.go:172] (0xc000138840) (0xc0005666e0) Stream removed, broadcasting: 3\nI0511 18:26:01.993465 1819 log.go:172] (0xc000138840) (0xc0005eedc0) Stream removed, broadcasting: 5\n" May 11 18:26:01.998: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 18:26:01.998: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 18:26:32.013: INFO: Waiting for StatefulSet e2e-tests-statefulset-tsfwk/ss2 to complete update May 11 
18:26:32.014: INFO: Waiting for Pod e2e-tests-statefulset-tsfwk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 18:26:42.019: INFO: Waiting for StatefulSet e2e-tests-statefulset-tsfwk/ss2 to complete update STEP: Rolling back to a previous revision May 11 18:26:52.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tsfwk ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 18:26:52.639: INFO: stderr: "I0511 18:26:52.257804 1841 log.go:172] (0xc000718370) (0xc0007bc6e0) Create stream\nI0511 18:26:52.257852 1841 log.go:172] (0xc000718370) (0xc0007bc6e0) Stream added, broadcasting: 1\nI0511 18:26:52.259658 1841 log.go:172] (0xc000718370) Reply frame received for 1\nI0511 18:26:52.259700 1841 log.go:172] (0xc000718370) (0xc00064ac80) Create stream\nI0511 18:26:52.259719 1841 log.go:172] (0xc000718370) (0xc00064ac80) Stream added, broadcasting: 3\nI0511 18:26:52.260470 1841 log.go:172] (0xc000718370) Reply frame received for 3\nI0511 18:26:52.260489 1841 log.go:172] (0xc000718370) (0xc00071c000) Create stream\nI0511 18:26:52.260496 1841 log.go:172] (0xc000718370) (0xc00071c000) Stream added, broadcasting: 5\nI0511 18:26:52.261274 1841 log.go:172] (0xc000718370) Reply frame received for 5\nI0511 18:26:52.634150 1841 log.go:172] (0xc000718370) Data frame received for 5\nI0511 18:26:52.634185 1841 log.go:172] (0xc000718370) Data frame received for 3\nI0511 18:26:52.634220 1841 log.go:172] (0xc00064ac80) (3) Data frame handling\nI0511 18:26:52.634245 1841 log.go:172] (0xc00064ac80) (3) Data frame sent\nI0511 18:26:52.634266 1841 log.go:172] (0xc00071c000) (5) Data frame handling\nI0511 18:26:52.634307 1841 log.go:172] (0xc000718370) Data frame received for 3\nI0511 18:26:52.634342 1841 log.go:172] (0xc00064ac80) (3) Data frame handling\nI0511 18:26:52.635458 1841 log.go:172] (0xc000718370) Data frame received for 1\nI0511 18:26:52.635482 1841 log.go:172] 
(0xc0007bc6e0) (1) Data frame handling\nI0511 18:26:52.635499 1841 log.go:172] (0xc0007bc6e0) (1) Data frame sent\nI0511 18:26:52.635525 1841 log.go:172] (0xc000718370) (0xc0007bc6e0) Stream removed, broadcasting: 1\nI0511 18:26:52.635555 1841 log.go:172] (0xc000718370) Go away received\nI0511 18:26:52.635710 1841 log.go:172] (0xc000718370) (0xc0007bc6e0) Stream removed, broadcasting: 1\nI0511 18:26:52.635731 1841 log.go:172] (0xc000718370) (0xc00064ac80) Stream removed, broadcasting: 3\nI0511 18:26:52.635738 1841 log.go:172] (0xc000718370) (0xc00071c000) Stream removed, broadcasting: 5\n" May 11 18:26:52.639: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 18:26:52.639: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 18:27:02.762: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 11 18:27:12.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tsfwk ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:27:13.064: INFO: stderr: "I0511 18:27:12.957365 1863 log.go:172] (0xc000138840) (0xc000796640) Create stream\nI0511 18:27:12.957421 1863 log.go:172] (0xc000138840) (0xc000796640) Stream added, broadcasting: 1\nI0511 18:27:12.959593 1863 log.go:172] (0xc000138840) Reply frame received for 1\nI0511 18:27:12.959630 1863 log.go:172] (0xc000138840) (0xc00068cd20) Create stream\nI0511 18:27:12.959639 1863 log.go:172] (0xc000138840) (0xc00068cd20) Stream added, broadcasting: 3\nI0511 18:27:12.960310 1863 log.go:172] (0xc000138840) Reply frame received for 3\nI0511 18:27:12.960333 1863 log.go:172] (0xc000138840) (0xc0007966e0) Create stream\nI0511 18:27:12.960340 1863 log.go:172] (0xc000138840) (0xc0007966e0) Stream added, broadcasting: 5\nI0511 18:27:12.960980 1863 log.go:172] (0xc000138840) Reply frame received for 
5\nI0511 18:27:13.058889 1863 log.go:172] (0xc000138840) Data frame received for 3\nI0511 18:27:13.058922 1863 log.go:172] (0xc00068cd20) (3) Data frame handling\nI0511 18:27:13.058944 1863 log.go:172] (0xc00068cd20) (3) Data frame sent\nI0511 18:27:13.058954 1863 log.go:172] (0xc000138840) Data frame received for 3\nI0511 18:27:13.058961 1863 log.go:172] (0xc00068cd20) (3) Data frame handling\nI0511 18:27:13.059202 1863 log.go:172] (0xc000138840) Data frame received for 5\nI0511 18:27:13.059228 1863 log.go:172] (0xc0007966e0) (5) Data frame handling\nI0511 18:27:13.060277 1863 log.go:172] (0xc000138840) Data frame received for 1\nI0511 18:27:13.060296 1863 log.go:172] (0xc000796640) (1) Data frame handling\nI0511 18:27:13.060316 1863 log.go:172] (0xc000796640) (1) Data frame sent\nI0511 18:27:13.060338 1863 log.go:172] (0xc000138840) (0xc000796640) Stream removed, broadcasting: 1\nI0511 18:27:13.060458 1863 log.go:172] (0xc000138840) Go away received\nI0511 18:27:13.060553 1863 log.go:172] (0xc000138840) (0xc000796640) Stream removed, broadcasting: 1\nI0511 18:27:13.060588 1863 log.go:172] (0xc000138840) (0xc00068cd20) Stream removed, broadcasting: 3\nI0511 18:27:13.060608 1863 log.go:172] (0xc000138840) (0xc0007966e0) Stream removed, broadcasting: 5\n" May 11 18:27:13.064: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 18:27:13.064: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 18:27:43.084: INFO: Waiting for StatefulSet e2e-tests-statefulset-tsfwk/ss2 to complete update May 11 18:27:43.084: INFO: Waiting for Pod e2e-tests-statefulset-tsfwk/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 11 18:27:53.111: INFO: Waiting for StatefulSet e2e-tests-statefulset-tsfwk/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 11 18:28:03.088: INFO: Deleting all statefulset in ns e2e-tests-statefulset-tsfwk
May 11 18:28:03.090: INFO: Scaling statefulset ss2 to 0
May 11 18:28:23.178: INFO: Waiting for statefulset status.replicas updated to 0
May 11 18:28:23.180: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:28:23.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-tsfwk" for this suite.
May 11 18:28:35.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:28:35.600: INFO: namespace: e2e-tests-statefulset-tsfwk, resource: bindings, ignored listing per whitelist
May 11 18:28:35.777: INFO: namespace e2e-tests-statefulset-tsfwk deletion completed in 12.41057425s
• [SLOW TEST:196.549 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:28:35.777: INFO: >>> kubeConfig:
/root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 11 18:28:56.785: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bbstx PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 18:28:56.785: INFO: >>> kubeConfig: /root/.kube/config I0511 18:28:56.806343 6 log.go:172] (0xc001320580) (0xc001988be0) Create stream I0511 18:28:56.806363 6 log.go:172] (0xc001320580) (0xc001988be0) Stream added, broadcasting: 1 I0511 18:28:56.808107 6 log.go:172] (0xc001320580) Reply frame received for 1 I0511 18:28:56.808139 6 log.go:172] (0xc001320580) (0xc001090460) Create stream I0511 18:28:56.808155 6 log.go:172] (0xc001320580) (0xc001090460) Stream added, broadcasting: 3 I0511 18:28:56.808991 6 log.go:172] (0xc001320580) Reply frame received for 3 I0511 18:28:56.809020 6 log.go:172] (0xc001320580) (0xc001090500) Create stream I0511 18:28:56.809031 6 log.go:172] (0xc001320580) (0xc001090500) Stream added, broadcasting: 5 I0511 18:28:56.809908 6 log.go:172] (0xc001320580) Reply frame received for 5 I0511 18:28:56.894324 6 log.go:172] (0xc001320580) Data frame received for 3 I0511 18:28:56.894344 6 log.go:172] (0xc001090460) (3) Data frame handling I0511 18:28:56.894351 6 log.go:172] (0xc001090460) (3) Data frame sent I0511 18:28:56.894355 6 log.go:172] (0xc001320580) Data frame received for 3 I0511 18:28:56.894358 6 log.go:172] (0xc001090460) (3) Data frame handling I0511 
18:28:56.894372 6 log.go:172] (0xc001320580) Data frame received for 5 I0511 18:28:56.894388 6 log.go:172] (0xc001090500) (5) Data frame handling I0511 18:28:56.895780 6 log.go:172] (0xc001320580) Data frame received for 1 I0511 18:28:56.895824 6 log.go:172] (0xc001988be0) (1) Data frame handling I0511 18:28:56.895854 6 log.go:172] (0xc001988be0) (1) Data frame sent I0511 18:28:56.895876 6 log.go:172] (0xc001320580) (0xc001988be0) Stream removed, broadcasting: 1 I0511 18:28:56.895996 6 log.go:172] (0xc001320580) Go away received I0511 18:28:56.896033 6 log.go:172] (0xc001320580) (0xc001988be0) Stream removed, broadcasting: 1 I0511 18:28:56.896060 6 log.go:172] (0xc001320580) (0xc001090460) Stream removed, broadcasting: 3 I0511 18:28:56.896082 6 log.go:172] (0xc001320580) (0xc001090500) Stream removed, broadcasting: 5 May 11 18:28:56.896: INFO: Exec stderr: "" May 11 18:28:56.896: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bbstx PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 18:28:56.896: INFO: >>> kubeConfig: /root/.kube/config I0511 18:28:56.927963 6 log.go:172] (0xc001320a50) (0xc001988f00) Create stream I0511 18:28:56.927989 6 log.go:172] (0xc001320a50) (0xc001988f00) Stream added, broadcasting: 1 I0511 18:28:56.929760 6 log.go:172] (0xc001320a50) Reply frame received for 1 I0511 18:28:56.929808 6 log.go:172] (0xc001320a50) (0xc001988fa0) Create stream I0511 18:28:56.929815 6 log.go:172] (0xc001320a50) (0xc001988fa0) Stream added, broadcasting: 3 I0511 18:28:56.930390 6 log.go:172] (0xc001320a50) Reply frame received for 3 I0511 18:28:56.930421 6 log.go:172] (0xc001320a50) (0xc001e99860) Create stream I0511 18:28:56.930433 6 log.go:172] (0xc001320a50) (0xc001e99860) Stream added, broadcasting: 5 I0511 18:28:56.930979 6 log.go:172] (0xc001320a50) Reply frame received for 5 I0511 18:28:56.978021 6 log.go:172] (0xc001320a50) Data frame 
received for 3 I0511 18:28:56.978049 6 log.go:172] (0xc001988fa0) (3) Data frame handling I0511 18:28:56.978066 6 log.go:172] (0xc001988fa0) (3) Data frame sent I0511 18:28:56.978082 6 log.go:172] (0xc001320a50) Data frame received for 3 I0511 18:28:56.978101 6 log.go:172] (0xc001988fa0) (3) Data frame handling I0511 18:28:56.978124 6 log.go:172] (0xc001320a50) Data frame received for 5 I0511 18:28:56.978147 6 log.go:172] (0xc001e99860) (5) Data frame handling I0511 18:28:56.979130 6 log.go:172] (0xc001320a50) Data frame received for 1 I0511 18:28:56.979152 6 log.go:172] (0xc001988f00) (1) Data frame handling I0511 18:28:56.979171 6 log.go:172] (0xc001988f00) (1) Data frame sent I0511 18:28:56.979185 6 log.go:172] (0xc001320a50) (0xc001988f00) Stream removed, broadcasting: 1 I0511 18:28:56.979208 6 log.go:172] (0xc001320a50) Go away received I0511 18:28:56.979357 6 log.go:172] (0xc001320a50) (0xc001988f00) Stream removed, broadcasting: 1 I0511 18:28:56.979383 6 log.go:172] (0xc001320a50) (0xc001988fa0) Stream removed, broadcasting: 3 I0511 18:28:56.979397 6 log.go:172] (0xc001320a50) (0xc001e99860) Stream removed, broadcasting: 5 May 11 18:28:56.979: INFO: Exec stderr: "" May 11 18:28:56.979: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bbstx PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 18:28:56.979: INFO: >>> kubeConfig: /root/.kube/config I0511 18:28:57.007552 6 log.go:172] (0xc000db1080) (0xc001e99b80) Create stream I0511 18:28:57.007589 6 log.go:172] (0xc000db1080) (0xc001e99b80) Stream added, broadcasting: 1 I0511 18:28:57.012011 6 log.go:172] (0xc000db1080) Reply frame received for 1 I0511 18:28:57.012041 6 log.go:172] (0xc000db1080) (0xc0022ff680) Create stream I0511 18:28:57.012051 6 log.go:172] (0xc000db1080) (0xc0022ff680) Stream added, broadcasting: 3 I0511 18:28:57.015181 6 log.go:172] (0xc000db1080) Reply frame received for 3 I0511 
18:28:57.015194 6 log.go:172] (0xc000db1080) (0xc0022ff720) Create stream I0511 18:28:57.015200 6 log.go:172] (0xc000db1080) (0xc0022ff720) Stream added, broadcasting: 5 I0511 18:28:57.015887 6 log.go:172] (0xc000db1080) Reply frame received for 5 I0511 18:28:57.058959 6 log.go:172] (0xc000db1080) Data frame received for 3 I0511 18:28:57.058982 6 log.go:172] (0xc0022ff680) (3) Data frame handling I0511 18:28:57.058988 6 log.go:172] (0xc0022ff680) (3) Data frame sent I0511 18:28:57.058995 6 log.go:172] (0xc000db1080) Data frame received for 3 I0511 18:28:57.059000 6 log.go:172] (0xc0022ff680) (3) Data frame handling I0511 18:28:57.059024 6 log.go:172] (0xc000db1080) Data frame received for 5 I0511 18:28:57.059029 6 log.go:172] (0xc0022ff720) (5) Data frame handling I0511 18:28:57.059979 6 log.go:172] (0xc000db1080) Data frame received for 1 I0511 18:28:57.060008 6 log.go:172] (0xc001e99b80) (1) Data frame handling I0511 18:28:57.060031 6 log.go:172] (0xc001e99b80) (1) Data frame sent I0511 18:28:57.060090 6 log.go:172] (0xc000db1080) (0xc001e99b80) Stream removed, broadcasting: 1 I0511 18:28:57.060129 6 log.go:172] (0xc000db1080) Go away received I0511 18:28:57.060201 6 log.go:172] (0xc000db1080) (0xc001e99b80) Stream removed, broadcasting: 1 I0511 18:28:57.060232 6 log.go:172] (0xc000db1080) (0xc0022ff680) Stream removed, broadcasting: 3 I0511 18:28:57.060259 6 log.go:172] (0xc000db1080) (0xc0022ff720) Stream removed, broadcasting: 5 May 11 18:28:57.060: INFO: Exec stderr: "" May 11 18:28:57.060: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bbstx PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 18:28:57.060: INFO: >>> kubeConfig: /root/.kube/config I0511 18:28:57.085050 6 log.go:172] (0xc000b302c0) (0xc0022ff9a0) Create stream I0511 18:28:57.085070 6 log.go:172] (0xc000b302c0) (0xc0022ff9a0) Stream added, broadcasting: 1 I0511 18:28:57.086628 
6 log.go:172] (0xc000b302c0) Reply frame received for 1 I0511 18:28:57.086665 6 log.go:172] (0xc000b302c0) (0xc0010905a0) Create stream I0511 18:28:57.086675 6 log.go:172] (0xc000b302c0) (0xc0010905a0) Stream added, broadcasting: 3 I0511 18:28:57.087242 6 log.go:172] (0xc000b302c0) Reply frame received for 3 I0511 18:28:57.087265 6 log.go:172] (0xc000b302c0) (0xc001e99c20) Create stream I0511 18:28:57.087273 6 log.go:172] (0xc000b302c0) (0xc001e99c20) Stream added, broadcasting: 5 I0511 18:28:57.087820 6 log.go:172] (0xc000b302c0) Reply frame received for 5 I0511 18:28:57.152225 6 log.go:172] (0xc000b302c0) Data frame received for 3 I0511 18:28:57.152272 6 log.go:172] (0xc0010905a0) (3) Data frame handling I0511 18:28:57.152291 6 log.go:172] (0xc0010905a0) (3) Data frame sent I0511 18:28:57.152302 6 log.go:172] (0xc000b302c0) Data frame received for 3 I0511 18:28:57.152309 6 log.go:172] (0xc0010905a0) (3) Data frame handling I0511 18:28:57.152326 6 log.go:172] (0xc000b302c0) Data frame received for 5 I0511 18:28:57.152333 6 log.go:172] (0xc001e99c20) (5) Data frame handling I0511 18:28:57.153760 6 log.go:172] (0xc000b302c0) Data frame received for 1 I0511 18:28:57.153790 6 log.go:172] (0xc0022ff9a0) (1) Data frame handling I0511 18:28:57.153804 6 log.go:172] (0xc0022ff9a0) (1) Data frame sent I0511 18:28:57.153833 6 log.go:172] (0xc000b302c0) (0xc0022ff9a0) Stream removed, broadcasting: 1 I0511 18:28:57.153880 6 log.go:172] (0xc000b302c0) Go away received I0511 18:28:57.153924 6 log.go:172] (0xc000b302c0) (0xc0022ff9a0) Stream removed, broadcasting: 1 I0511 18:28:57.153949 6 log.go:172] (0xc000b302c0) (0xc0010905a0) Stream removed, broadcasting: 3 I0511 18:28:57.153968 6 log.go:172] (0xc000b302c0) (0xc001e99c20) Stream removed, broadcasting: 5 May 11 18:28:57.153: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 11 18:28:57.154: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-tests-e2e-kubelet-etc-hosts-bbstx PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 18:28:57.154: INFO: >>> kubeConfig: /root/.kube/config I0511 18:28:57.199112 6 log.go:172] (0xc001320f20) (0xc001989400) Create stream I0511 18:28:57.199165 6 log.go:172] (0xc001320f20) (0xc001989400) Stream added, broadcasting: 1 I0511 18:28:57.200889 6 log.go:172] (0xc001320f20) Reply frame received for 1 I0511 18:28:57.200921 6 log.go:172] (0xc001320f20) (0xc00182a1e0) Create stream I0511 18:28:57.200931 6 log.go:172] (0xc001320f20) (0xc00182a1e0) Stream added, broadcasting: 3 I0511 18:28:57.201826 6 log.go:172] (0xc001320f20) Reply frame received for 3 I0511 18:28:57.201873 6 log.go:172] (0xc001320f20) (0xc0019894a0) Create stream I0511 18:28:57.201894 6 log.go:172] (0xc001320f20) (0xc0019894a0) Stream added, broadcasting: 5 I0511 18:28:57.202620 6 log.go:172] (0xc001320f20) Reply frame received for 5 I0511 18:28:57.255842 6 log.go:172] (0xc001320f20) Data frame received for 5 I0511 18:28:57.255888 6 log.go:172] (0xc001320f20) Data frame received for 3 I0511 18:28:57.255925 6 log.go:172] (0xc00182a1e0) (3) Data frame handling I0511 18:28:57.255940 6 log.go:172] (0xc00182a1e0) (3) Data frame sent I0511 18:28:57.255952 6 log.go:172] (0xc001320f20) Data frame received for 3 I0511 18:28:57.255963 6 log.go:172] (0xc00182a1e0) (3) Data frame handling I0511 18:28:57.255998 6 log.go:172] (0xc0019894a0) (5) Data frame handling I0511 18:28:57.257277 6 log.go:172] (0xc001320f20) Data frame received for 1 I0511 18:28:57.257316 6 log.go:172] (0xc001989400) (1) Data frame handling I0511 18:28:57.257342 6 log.go:172] (0xc001989400) (1) Data frame sent I0511 18:28:57.257428 6 log.go:172] (0xc001320f20) (0xc001989400) Stream removed, broadcasting: 1 I0511 18:28:57.257479 6 log.go:172] (0xc001320f20) Go away received I0511 18:28:57.257570 6 log.go:172] (0xc001320f20) (0xc001989400) Stream removed, broadcasting: 1 
I0511 18:28:57.257594 6 log.go:172] (0xc001320f20) (0xc00182a1e0) Stream removed, broadcasting: 3 I0511 18:28:57.257614 6 log.go:172] (0xc001320f20) (0xc0019894a0) Stream removed, broadcasting: 5 May 11 18:28:57.257: INFO: Exec stderr: "" May 11 18:28:57.257: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bbstx PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 18:28:57.257: INFO: >>> kubeConfig: /root/.kube/config I0511 18:28:57.284750 6 log.go:172] (0xc000b30790) (0xc0022ffcc0) Create stream I0511 18:28:57.284778 6 log.go:172] (0xc000b30790) (0xc0022ffcc0) Stream added, broadcasting: 1 I0511 18:28:57.287465 6 log.go:172] (0xc000b30790) Reply frame received for 1 I0511 18:28:57.287513 6 log.go:172] (0xc000b30790) (0xc001e99cc0) Create stream I0511 18:28:57.287535 6 log.go:172] (0xc000b30790) (0xc001e99cc0) Stream added, broadcasting: 3 I0511 18:28:57.288368 6 log.go:172] (0xc000b30790) Reply frame received for 3 I0511 18:28:57.288416 6 log.go:172] (0xc000b30790) (0xc00182a280) Create stream I0511 18:28:57.288435 6 log.go:172] (0xc000b30790) (0xc00182a280) Stream added, broadcasting: 5 I0511 18:28:57.289088 6 log.go:172] (0xc000b30790) Reply frame received for 5 I0511 18:28:57.346702 6 log.go:172] (0xc000b30790) Data frame received for 3 I0511 18:28:57.346729 6 log.go:172] (0xc001e99cc0) (3) Data frame handling I0511 18:28:57.346748 6 log.go:172] (0xc001e99cc0) (3) Data frame sent I0511 18:28:57.346793 6 log.go:172] (0xc000b30790) Data frame received for 3 I0511 18:28:57.346806 6 log.go:172] (0xc001e99cc0) (3) Data frame handling I0511 18:28:57.346854 6 log.go:172] (0xc000b30790) Data frame received for 5 I0511 18:28:57.346866 6 log.go:172] (0xc00182a280) (5) Data frame handling I0511 18:28:57.347957 6 log.go:172] (0xc000b30790) Data frame received for 1 I0511 18:28:57.347989 6 log.go:172] (0xc0022ffcc0) (1) Data frame handling I0511 
18:28:57.348010 6 log.go:172] (0xc0022ffcc0) (1) Data frame sent
I0511 18:28:57.348028 6 log.go:172] (0xc000b30790) (0xc0022ffcc0) Stream removed, broadcasting: 1
I0511 18:28:57.348076 6 log.go:172] (0xc000b30790) Go away received
I0511 18:28:57.348115 6 log.go:172] (0xc000b30790) (0xc0022ffcc0) Stream removed, broadcasting: 1
I0511 18:28:57.348125 6 log.go:172] (0xc000b30790) (0xc001e99cc0) Stream removed, broadcasting: 3
I0511 18:28:57.348129 6 log.go:172] (0xc000b30790) (0xc00182a280) Stream removed, broadcasting: 5
May 11 18:28:57.348: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May 11 18:28:57.348: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bbstx PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 18:28:57.348: INFO: >>> kubeConfig: /root/.kube/config
I0511 18:28:57.372407 6 log.go:172] (0xc001d200b0) (0xc00182a3c0) Create stream
I0511 18:28:57.372434 6 log.go:172] (0xc001d200b0) (0xc00182a3c0) Stream added, broadcasting: 1
I0511 18:28:57.374075 6 log.go:172] (0xc001d200b0) Reply frame received for 1
I0511 18:28:57.374101 6 log.go:172] (0xc001d200b0) (0xc0022ffd60) Create stream
I0511 18:28:57.374112 6 log.go:172] (0xc001d200b0) (0xc0022ffd60) Stream added, broadcasting: 3
I0511 18:28:57.375013 6 log.go:172] (0xc001d200b0) Reply frame received for 3
I0511 18:28:57.375040 6 log.go:172] (0xc001d200b0) (0xc0022ffe00) Create stream
I0511 18:28:57.375052 6 log.go:172] (0xc001d200b0) (0xc0022ffe00) Stream added, broadcasting: 5
I0511 18:28:57.375761 6 log.go:172] (0xc001d200b0) Reply frame received for 5
I0511 18:28:57.420636 6 log.go:172] (0xc001d200b0) Data frame received for 3
I0511 18:28:57.420661 6 log.go:172] (0xc0022ffd60) (3) Data frame handling
I0511 18:28:57.420683 6 log.go:172] (0xc001d200b0) Data frame received for 5
I0511 18:28:57.420710 6 log.go:172] (0xc0022ffe00) (5) Data frame handling
I0511 18:28:57.420738 6 log.go:172] (0xc0022ffd60) (3) Data frame sent
I0511 18:28:57.420756 6 log.go:172] (0xc001d200b0) Data frame received for 3
I0511 18:28:57.420767 6 log.go:172] (0xc0022ffd60) (3) Data frame handling
I0511 18:28:57.421873 6 log.go:172] (0xc001d200b0) Data frame received for 1
I0511 18:28:57.421891 6 log.go:172] (0xc00182a3c0) (1) Data frame handling
I0511 18:28:57.421904 6 log.go:172] (0xc00182a3c0) (1) Data frame sent
I0511 18:28:57.421919 6 log.go:172] (0xc001d200b0) (0xc00182a3c0) Stream removed, broadcasting: 1
I0511 18:28:57.421946 6 log.go:172] (0xc001d200b0) Go away received
I0511 18:28:57.422004 6 log.go:172] (0xc001d200b0) (0xc00182a3c0) Stream removed, broadcasting: 1
I0511 18:28:57.422023 6 log.go:172] (0xc001d200b0) (0xc0022ffd60) Stream removed, broadcasting: 3
I0511 18:28:57.422035 6 log.go:172] (0xc001d200b0) (0xc0022ffe00) Stream removed, broadcasting: 5
May 11 18:28:57.422: INFO: Exec stderr: ""
May 11 18:28:57.422: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bbstx PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 18:28:57.422: INFO: >>> kubeConfig: /root/.kube/config
I0511 18:28:57.447508 6 log.go:172] (0xc0012522c0) (0xc001090820) Create stream
I0511 18:28:57.447537 6 log.go:172] (0xc0012522c0) (0xc001090820) Stream added, broadcasting: 1
I0511 18:28:57.449387 6 log.go:172] (0xc0012522c0) Reply frame received for 1
I0511 18:28:57.449435 6 log.go:172] (0xc0012522c0) (0xc00182a460) Create stream
I0511 18:28:57.449451 6 log.go:172] (0xc0012522c0) (0xc00182a460) Stream added, broadcasting: 3
I0511 18:28:57.450161 6 log.go:172] (0xc0012522c0) Reply frame received for 3
I0511 18:28:57.450183 6 log.go:172] (0xc0012522c0) (0xc001e99d60) Create stream
I0511 18:28:57.450191 6 log.go:172] (0xc0012522c0) (0xc001e99d60) Stream added, broadcasting: 5
I0511 18:28:57.450803 6 log.go:172] (0xc0012522c0) Reply frame received for 5
I0511 18:28:57.535484 6 log.go:172] (0xc0012522c0) Data frame received for 3
I0511 18:28:57.535518 6 log.go:172] (0xc00182a460) (3) Data frame handling
I0511 18:28:57.535531 6 log.go:172] (0xc00182a460) (3) Data frame sent
I0511 18:28:57.535546 6 log.go:172] (0xc0012522c0) Data frame received for 3
I0511 18:28:57.535551 6 log.go:172] (0xc00182a460) (3) Data frame handling
I0511 18:28:57.535589 6 log.go:172] (0xc0012522c0) Data frame received for 5
I0511 18:28:57.535606 6 log.go:172] (0xc001e99d60) (5) Data frame handling
I0511 18:28:57.537017 6 log.go:172] (0xc0012522c0) Data frame received for 1
I0511 18:28:57.537034 6 log.go:172] (0xc001090820) (1) Data frame handling
I0511 18:28:57.537050 6 log.go:172] (0xc001090820) (1) Data frame sent
I0511 18:28:57.537070 6 log.go:172] (0xc0012522c0) (0xc001090820) Stream removed, broadcasting: 1
I0511 18:28:57.537133 6 log.go:172] (0xc0012522c0) Go away received
I0511 18:28:57.537243 6 log.go:172] (0xc0012522c0) (0xc001090820) Stream removed, broadcasting: 1
I0511 18:28:57.537251 6 log.go:172] (0xc0012522c0) (0xc00182a460) Stream removed, broadcasting: 3
I0511 18:28:57.537256 6 log.go:172] (0xc0012522c0) (0xc001e99d60) Stream removed, broadcasting: 5
May 11 18:28:57.537: INFO: Exec stderr: ""
May 11 18:28:57.537: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bbstx PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 18:28:57.537: INFO: >>> kubeConfig: /root/.kube/config
I0511 18:28:57.565014 6 log.go:172] (0xc000db16b0) (0xc0015be140) Create stream
I0511 18:28:57.565044 6 log.go:172] (0xc000db16b0) (0xc0015be140) Stream added, broadcasting: 1
I0511 18:28:57.566648 6 log.go:172] (0xc000db16b0) Reply frame received for 1
I0511 18:28:57.566679 6 log.go:172] (0xc000db16b0) (0xc00182a500) Create stream
I0511 18:28:57.566699 6 log.go:172] (0xc000db16b0) (0xc00182a500) Stream added, broadcasting: 3
I0511 18:28:57.567434 6 log.go:172] (0xc000db16b0) Reply frame received for 3
I0511 18:28:57.567465 6 log.go:172] (0xc000db16b0) (0xc0010908c0) Create stream
I0511 18:28:57.567477 6 log.go:172] (0xc000db16b0) (0xc0010908c0) Stream added, broadcasting: 5
I0511 18:28:57.568188 6 log.go:172] (0xc000db16b0) Reply frame received for 5
I0511 18:28:57.625828 6 log.go:172] (0xc000db16b0) Data frame received for 5
I0511 18:28:57.625866 6 log.go:172] (0xc0010908c0) (5) Data frame handling
I0511 18:28:57.625893 6 log.go:172] (0xc000db16b0) Data frame received for 3
I0511 18:28:57.625909 6 log.go:172] (0xc00182a500) (3) Data frame handling
I0511 18:28:57.625921 6 log.go:172] (0xc00182a500) (3) Data frame sent
I0511 18:28:57.625932 6 log.go:172] (0xc000db16b0) Data frame received for 3
I0511 18:28:57.625957 6 log.go:172] (0xc00182a500) (3) Data frame handling
I0511 18:28:57.627394 6 log.go:172] (0xc000db16b0) Data frame received for 1
I0511 18:28:57.627425 6 log.go:172] (0xc0015be140) (1) Data frame handling
I0511 18:28:57.627449 6 log.go:172] (0xc0015be140) (1) Data frame sent
I0511 18:28:57.627467 6 log.go:172] (0xc000db16b0) (0xc0015be140) Stream removed, broadcasting: 1
I0511 18:28:57.627484 6 log.go:172] (0xc000db16b0) Go away received
I0511 18:28:57.627600 6 log.go:172] (0xc000db16b0) (0xc0015be140) Stream removed, broadcasting: 1
I0511 18:28:57.627624 6 log.go:172] (0xc000db16b0) (0xc00182a500) Stream removed, broadcasting: 3
I0511 18:28:57.627641 6 log.go:172] (0xc000db16b0) (0xc0010908c0) Stream removed, broadcasting: 5
May 11 18:28:57.627: INFO: Exec stderr: ""
May 11 18:28:57.627: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bbstx PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 18:28:57.627: INFO: >>> kubeConfig: /root/.kube/config
I0511 18:28:57.657439 6 log.go:172] (0xc000b30c60) (0xc0012d0140) Create stream
I0511 18:28:57.657465 6 log.go:172] (0xc000b30c60) (0xc0012d0140) Stream added, broadcasting: 1
I0511 18:28:57.658793 6 log.go:172] (0xc000b30c60) Reply frame received for 1
I0511 18:28:57.658825 6 log.go:172] (0xc000b30c60) (0xc0012d0280) Create stream
I0511 18:28:57.658837 6 log.go:172] (0xc000b30c60) (0xc0012d0280) Stream added, broadcasting: 3
I0511 18:28:57.659733 6 log.go:172] (0xc000b30c60) Reply frame received for 3
I0511 18:28:57.659763 6 log.go:172] (0xc000b30c60) (0xc0015be1e0) Create stream
I0511 18:28:57.659773 6 log.go:172] (0xc000b30c60) (0xc0015be1e0) Stream added, broadcasting: 5
I0511 18:28:57.660659 6 log.go:172] (0xc000b30c60) Reply frame received for 5
I0511 18:28:57.709706 6 log.go:172] (0xc000b30c60) Data frame received for 5
I0511 18:28:57.709751 6 log.go:172] (0xc0015be1e0) (5) Data frame handling
I0511 18:28:57.709782 6 log.go:172] (0xc000b30c60) Data frame received for 3
I0511 18:28:57.709802 6 log.go:172] (0xc0012d0280) (3) Data frame handling
I0511 18:28:57.709827 6 log.go:172] (0xc0012d0280) (3) Data frame sent
I0511 18:28:57.709846 6 log.go:172] (0xc000b30c60) Data frame received for 3
I0511 18:28:57.709861 6 log.go:172] (0xc0012d0280) (3) Data frame handling
I0511 18:28:57.711036 6 log.go:172] (0xc000b30c60) Data frame received for 1
I0511 18:28:57.711069 6 log.go:172] (0xc0012d0140) (1) Data frame handling
I0511 18:28:57.711095 6 log.go:172] (0xc0012d0140) (1) Data frame sent
I0511 18:28:57.711123 6 log.go:172] (0xc000b30c60) (0xc0012d0140) Stream removed, broadcasting: 1
I0511 18:28:57.711196 6 log.go:172] (0xc000b30c60) (0xc0012d0140) Stream removed, broadcasting: 1
I0511 18:28:57.711210 6 log.go:172] (0xc000b30c60) (0xc0012d0280) Stream removed, broadcasting: 3
I0511 18:28:57.711313 6 log.go:172] (0xc000b30c60) Go away received
I0511 18:28:57.711330 6 log.go:172] (0xc000b30c60) (0xc0015be1e0) Stream removed, broadcasting: 5
May 11 18:28:57.711: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:28:57.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-bbstx" for this suite.
May 11 18:29:43.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:29:43.740: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-bbstx, resource: bindings, ignored listing per whitelist
May 11 18:29:43.796: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-bbstx deletion completed in 46.081743685s

• [SLOW TEST:68.019 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:29:43.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 11 18:29:45.123: INFO: Waiting up to 5m0s for pod "downwardapi-volume-632d83fa-93b5-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-mw8lm" to be "success or failure"
May 11 18:29:45.498: INFO: Pod "downwardapi-volume-632d83fa-93b5-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 374.41866ms
May 11 18:29:47.500: INFO: Pod "downwardapi-volume-632d83fa-93b5-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37709411s
May 11 18:29:49.505: INFO: Pod "downwardapi-volume-632d83fa-93b5-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.381191633s
STEP: Saw pod success
May 11 18:29:49.505: INFO: Pod "downwardapi-volume-632d83fa-93b5-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:29:49.508: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-632d83fa-93b5-11ea-b832-0242ac110018 container client-container:
STEP: delete the pod
May 11 18:29:49.576: INFO: Waiting for pod downwardapi-volume-632d83fa-93b5-11ea-b832-0242ac110018 to disappear
May 11 18:29:49.590: INFO: Pod downwardapi-volume-632d83fa-93b5-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:29:49.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mw8lm" for this suite.
May 11 18:29:55.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:29:55.801: INFO: namespace: e2e-tests-projected-mw8lm, resource: bindings, ignored listing per whitelist
May 11 18:29:55.826: INFO: namespace e2e-tests-projected-mw8lm deletion completed in 6.232876443s

• [SLOW TEST:12.029 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:29:55.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-69b13a42-93b5-11ea-b832-0242ac110018
STEP: Creating a pod to test consume configMaps
May 11 18:29:56.052: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-69bb7a55-93b5-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-jgwwf" to be "success or failure"
May 11 18:29:56.110: INFO: Pod "pod-projected-configmaps-69bb7a55-93b5-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 58.094819ms
May 11 18:29:58.115: INFO: Pod "pod-projected-configmaps-69bb7a55-93b5-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062934945s
May 11 18:30:00.119: INFO: Pod "pod-projected-configmaps-69bb7a55-93b5-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066680086s
May 11 18:30:02.122: INFO: Pod "pod-projected-configmaps-69bb7a55-93b5-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069987023s
STEP: Saw pod success
May 11 18:30:02.122: INFO: Pod "pod-projected-configmaps-69bb7a55-93b5-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:30:02.125: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-69bb7a55-93b5-11ea-b832-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
May 11 18:30:02.314: INFO: Waiting for pod pod-projected-configmaps-69bb7a55-93b5-11ea-b832-0242ac110018 to disappear
May 11 18:30:02.355: INFO: Pod pod-projected-configmaps-69bb7a55-93b5-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:30:02.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jgwwf" for this suite.
May 11 18:30:12.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:30:12.549: INFO: namespace: e2e-tests-projected-jgwwf, resource: bindings, ignored listing per whitelist
May 11 18:30:12.587: INFO: namespace e2e-tests-projected-jgwwf deletion completed in 10.227699283s

• [SLOW TEST:16.761 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:30:12.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:30:12.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-f87rs" for this suite.
May 11 18:30:36.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:30:36.867: INFO: namespace: e2e-tests-pods-f87rs, resource: bindings, ignored listing per whitelist
May 11 18:30:36.899: INFO: namespace e2e-tests-pods-f87rs deletion completed in 24.096389218s

• [SLOW TEST:24.312 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:30:36.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 11 18:30:38.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-9n8bw'
May 11 18:30:57.655: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 11 18:30:57.655: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
May 11 18:31:01.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-9n8bw'
May 11 18:31:02.769: INFO: stderr: ""
May 11 18:31:02.769: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:31:02.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9n8bw" for this suite.
May 11 18:31:10.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:31:10.531: INFO: namespace: e2e-tests-kubectl-9n8bw, resource: bindings, ignored listing per whitelist
May 11 18:31:10.546: INFO: namespace e2e-tests-kubectl-9n8bw deletion completed in 7.713417002s

• [SLOW TEST:33.647 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:31:10.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-963fdc17-93b5-11ea-b832-0242ac110018
STEP: Creating a pod to test consume configMaps
May 11 18:31:10.771: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-96418ca4-93b5-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-nx2bl" to be "success or failure"
May 11 18:31:10.821: INFO: Pod "pod-projected-configmaps-96418ca4-93b5-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 49.588289ms
May 11 18:31:12.989: INFO: Pod "pod-projected-configmaps-96418ca4-93b5-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217602015s
May 11 18:31:15.283: INFO: Pod "pod-projected-configmaps-96418ca4-93b5-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.511582524s
May 11 18:31:17.287: INFO: Pod "pod-projected-configmaps-96418ca4-93b5-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.515446323s
STEP: Saw pod success
May 11 18:31:17.287: INFO: Pod "pod-projected-configmaps-96418ca4-93b5-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:31:17.290: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-96418ca4-93b5-11ea-b832-0242ac110018 container projected-configmap-volume-test:
STEP: delete the pod
May 11 18:31:17.463: INFO: Waiting for pod pod-projected-configmaps-96418ca4-93b5-11ea-b832-0242ac110018 to disappear
May 11 18:31:17.519: INFO: Pod pod-projected-configmaps-96418ca4-93b5-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:31:17.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nx2bl" for this suite.
May 11 18:31:29.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:31:29.650: INFO: namespace: e2e-tests-projected-nx2bl, resource: bindings, ignored listing per whitelist
May 11 18:31:29.691: INFO: namespace e2e-tests-projected-nx2bl deletion completed in 12.168929264s

• [SLOW TEST:19.145 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:31:29.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
May 11 18:31:37.847: INFO: Successfully updated pod "labelsupdatea1b53d33-93b5-11ea-b832-0242ac110018"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:31:39.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cbp9c" for this suite.
May 11 18:32:08.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:32:08.191: INFO: namespace: e2e-tests-projected-cbp9c, resource: bindings, ignored listing per whitelist
May 11 18:32:08.230: INFO: namespace e2e-tests-projected-cbp9c deletion completed in 28.284808953s

• [SLOW TEST:38.539 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:32:08.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-2ntkg
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-2ntkg
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-2ntkg
May 11 18:32:09.804: INFO: Found 0 stateful pods, waiting for 1
May 11 18:32:20.020: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
May 11 18:32:20.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ntkg ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 11 18:32:20.730: INFO: stderr: "I0511 18:32:20.154549 1929 log.go:172] (0xc000138580) (0xc00052f360) Create stream\nI0511 18:32:20.154603 1929 log.go:172] (0xc000138580) (0xc00052f360) Stream added, broadcasting: 1\nI0511 18:32:20.166578 1929 log.go:172] (0xc000138580) Reply frame received for 1\nI0511 18:32:20.166639 1929 log.go:172] (0xc000138580) (0xc0005d8000) Create stream\nI0511 18:32:20.166655 1929 log.go:172] (0xc000138580) (0xc0005d8000) Stream added, broadcasting: 3\nI0511 18:32:20.169702 1929 log.go:172] (0xc000138580) Reply frame received for 3\nI0511 18:32:20.169737 1929 log.go:172] (0xc000138580) (0xc00052f400) Create stream\nI0511 18:32:20.169748 1929 log.go:172] (0xc000138580) (0xc00052f400) Stream added, broadcasting: 5\nI0511 18:32:20.173823 1929 log.go:172] (0xc000138580) Reply frame received for 5\nI0511 18:32:20.723973 1929 log.go:172] (0xc000138580) Data frame received for 5\nI0511 18:32:20.724008 1929 log.go:172] (0xc00052f400) (5) Data frame handling\nI0511 18:32:20.724045 1929 log.go:172] (0xc000138580) Data frame received for 3\nI0511 18:32:20.724053 1929 log.go:172] (0xc0005d8000) (3) Data frame handling\nI0511 18:32:20.724063 1929 log.go:172] (0xc0005d8000) (3) Data frame sent\nI0511 18:32:20.724068 1929 log.go:172] (0xc000138580) Data frame received for 3\nI0511 18:32:20.724072 1929 log.go:172] (0xc0005d8000) (3) Data frame handling\nI0511 18:32:20.726044 1929 log.go:172] (0xc000138580) Data frame received for 1\nI0511 18:32:20.726061 1929 log.go:172] (0xc00052f360) (1) Data frame handling\nI0511 18:32:20.726075 1929 log.go:172] (0xc00052f360) (1) Data frame sent\nI0511 18:32:20.726085 1929 log.go:172] (0xc000138580) (0xc00052f360) Stream removed, broadcasting: 1\nI0511 18:32:20.726096 1929 log.go:172] (0xc000138580) Go away received\nI0511 18:32:20.726387 1929 log.go:172] (0xc000138580) (0xc00052f360) Stream removed, broadcasting: 1\nI0511 18:32:20.726409 1929 log.go:172] (0xc000138580) (0xc0005d8000) Stream removed, broadcasting: 3\nI0511 18:32:20.726417 1929 log.go:172] (0xc000138580) (0xc00052f400) Stream removed, broadcasting: 5\n"
May 11 18:32:20.731: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 11 18:32:20.731: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 11 18:32:20.734: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 11 18:32:30.738: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 11 18:32:30.738: INFO: Waiting for statefulset status.replicas updated to 0
May 11 18:32:30.924: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999399s
May 11 18:32:33.599: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.822708061s
May 11 18:32:34.710: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.148550731s
May 11 18:32:35.713: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.037684524s
May 11 18:32:36.718: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.033792622s
May 11 18:32:37.722: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.029260336s
May 11 18:32:38.726: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.02539007s
May 11 18:32:39.730: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.02093586s
May 11 18:32:40.733: INFO: Verifying statefulset ss doesn't scale past 1 for another 16.805377ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-2ntkg
May 11 18:32:41.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ntkg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 18:32:43.067: INFO: stderr: "I0511 18:32:43.015323 1949 log.go:172] (0xc000138160) (0xc000728000) Create stream\nI0511 18:32:43.015420 1949 log.go:172] (0xc000138160) (0xc000728000) Stream added, broadcasting: 1\nI0511 18:32:43.017342 1949 log.go:172] (0xc000138160) Reply frame received for 1\nI0511 18:32:43.017370 1949 log.go:172] (0xc000138160) (0xc0000cedc0) Create stream\nI0511 18:32:43.017380 1949 log.go:172] (0xc000138160) (0xc0000cedc0) Stream added, broadcasting: 3\nI0511 18:32:43.018043 1949 log.go:172] (0xc000138160) Reply frame received for 3\nI0511 18:32:43.018075 1949 log.go:172] (0xc000138160) (0xc0002dc000) Create stream\nI0511 18:32:43.018083 1949 log.go:172] (0xc000138160) (0xc0002dc000) Stream added, broadcasting: 5\nI0511 18:32:43.018639 1949 log.go:172] (0xc000138160) Reply frame received for 5\nI0511 18:32:43.060459 1949 log.go:172] (0xc000138160) Data frame received for 5\nI0511 18:32:43.060505 1949 log.go:172] (0xc0002dc000) (5) Data frame handling\nI0511 18:32:43.060550 1949 log.go:172] (0xc000138160) Data frame received for 3\nI0511 18:32:43.060563 1949 log.go:172] (0xc0000cedc0) (3) Data frame handling\nI0511 18:32:43.060579 1949 log.go:172] (0xc0000cedc0) (3) Data frame sent\nI0511 18:32:43.060591 1949 log.go:172] (0xc000138160) Data frame received for 3\nI0511 18:32:43.060600 1949 log.go:172] (0xc0000cedc0) (3) Data frame handling\nI0511 18:32:43.062268 1949 log.go:172] (0xc000138160) Data frame received for 1\nI0511 18:32:43.062283 1949 log.go:172] (0xc000728000) (1) Data frame handling\nI0511 18:32:43.062298 1949 log.go:172] (0xc000728000) (1) Data frame sent\nI0511 18:32:43.062311 1949 log.go:172] (0xc000138160) (0xc000728000) Stream removed, broadcasting: 1\nI0511 18:32:43.062325 1949 log.go:172] (0xc000138160) Go away received\nI0511 18:32:43.062626 1949 log.go:172] (0xc000138160) (0xc000728000) Stream removed, broadcasting: 1\nI0511 18:32:43.062657 1949 log.go:172] (0xc000138160) (0xc0000cedc0) Stream removed, broadcasting: 3\nI0511 18:32:43.062669 1949 log.go:172] (0xc000138160) (0xc0002dc000) Stream removed, broadcasting: 5\n"
May 11 18:32:43.067: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 11 18:32:43.067: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 11 18:32:43.295: INFO: Found 1 stateful pods, waiting for 3
May 11 18:32:53.523: INFO: Found 2 stateful pods, waiting for 3
May 11 18:33:03.487: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 11 18:33:03.487: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 11 18:33:03.487: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
May 11 18:33:03.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ntkg ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 11 18:33:03.982: INFO: stderr: "I0511 18:33:03.883053 1973 log.go:172] (0xc000138580) (0xc0006d0000) Create stream\nI0511 18:33:03.883107 1973 log.go:172] (0xc000138580) (0xc0006d0000) Stream added, broadcasting: 1\nI0511 18:33:03.885644 1973 log.go:172] (0xc000138580) Reply frame received for 1\nI0511 18:33:03.885682 1973 log.go:172] (0xc000138580) (0xc0006d0140) Create stream\nI0511 18:33:03.885690 1973 log.go:172] (0xc000138580) (0xc0006d0140) Stream added, broadcasting: 3\nI0511 18:33:03.886799 1973 log.go:172] (0xc000138580) Reply frame received for 3\nI0511 18:33:03.886833 1973 log.go:172] (0xc000138580) (0xc0007c6b40) Create stream\nI0511 18:33:03.886843 1973 log.go:172] (0xc000138580) (0xc0007c6b40) Stream added, broadcasting: 5\nI0511 18:33:03.887753 1973 log.go:172] (0xc000138580) Reply frame received for 5\nI0511 18:33:03.976348 1973 log.go:172] (0xc000138580) Data frame received for 5\nI0511 18:33:03.976385 1973 log.go:172] (0xc0007c6b40) (5) Data frame handling\nI0511 18:33:03.976434 1973 log.go:172] (0xc000138580) Data frame received for 3\nI0511 18:33:03.976450 1973 log.go:172] (0xc0006d0140) (3) Data frame handling\nI0511 18:33:03.976466 1973 log.go:172] (0xc0006d0140) (3) Data frame sent\nI0511 18:33:03.976472 1973 log.go:172] (0xc000138580) Data frame received for 3\nI0511 18:33:03.976479 1973 log.go:172] (0xc0006d0140) (3) Data frame handling\nI0511 18:33:03.977815 1973 log.go:172] (0xc000138580) Data frame received for 1\nI0511 18:33:03.977844 1973 log.go:172] (0xc0006d0000) (1) Data frame handling\nI0511 18:33:03.977858 1973 log.go:172] (0xc0006d0000) (1) Data frame sent\nI0511 18:33:03.977871 1973 log.go:172] (0xc000138580) (0xc0006d0000) Stream removed, broadcasting: 1\nI0511 18:33:03.977890 1973 log.go:172] (0xc000138580) Go away received\nI0511 18:33:03.978165 1973 log.go:172] (0xc000138580) (0xc0006d0000) Stream removed, broadcasting: 1\nI0511 18:33:03.978196 1973 log.go:172] (0xc000138580) (0xc0006d0140) Stream removed, broadcasting: 3\nI0511 18:33:03.978204 1973 log.go:172] (0xc000138580) (0xc0007c6b40) Stream removed, broadcasting: 5\n"
May 11 18:33:03.982: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 11 18:33:03.982: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 11 18:33:03.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ntkg ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 11 18:33:04.294: INFO: stderr: "I0511 18:33:04.095488 1996 log.go:172] (0xc000138580) (0xc000127360) Create stream\nI0511 18:33:04.095578 1996 log.go:172] (0xc000138580) (0xc000127360) Stream added, broadcasting: 1\nI0511 18:33:04.098727 1996 log.go:172] (0xc000138580) Reply frame received for 1\nI0511 18:33:04.098814 1996 log.go:172] (0xc000138580) (0xc00053a000) Create stream\nI0511 18:33:04.098843 1996 log.go:172] (0xc000138580) (0xc00053a000) Stream added, broadcasting: 3\nI0511 18:33:04.100751 1996 log.go:172] (0xc000138580) Reply frame received for 3\nI0511 18:33:04.100820 1996 log.go:172] (0xc000138580) (0xc00053a0a0) Create stream\nI0511 18:33:04.100859 1996 log.go:172] (0xc000138580) (0xc00053a0a0) Stream added, broadcasting: 5\nI0511 18:33:04.102311 1996 log.go:172] (0xc000138580) Reply frame received for 5\nI0511 18:33:04.288456 1996 log.go:172] (0xc000138580) Data frame received for 3\nI0511 18:33:04.288489 1996 log.go:172] (0xc00053a000) (3) Data frame handling\nI0511 18:33:04.288508 1996 log.go:172] (0xc00053a000) (3) Data frame sent\nI0511 18:33:04.288516 1996 log.go:172] (0xc000138580) Data frame received for 3\nI0511 18:33:04.288522 1996 log.go:172] (0xc00053a000) (3) Data frame handling\nI0511 18:33:04.288597 1996 log.go:172] (0xc000138580) Data frame received for 5\nI0511 18:33:04.288619 1996 log.go:172] (0xc00053a0a0) (5) Data frame handling\nI0511 18:33:04.290226 1996 log.go:172] (0xc000138580) Data frame received for 1\nI0511 18:33:04.290241 1996 log.go:172] (0xc000127360) (1) Data frame handling\nI0511 18:33:04.290247 1996
log.go:172] (0xc000127360) (1) Data frame sent\nI0511 18:33:04.290254 1996 log.go:172] (0xc000138580) (0xc000127360) Stream removed, broadcasting: 1\nI0511 18:33:04.290260 1996 log.go:172] (0xc000138580) Go away received\nI0511 18:33:04.290451 1996 log.go:172] (0xc000138580) (0xc000127360) Stream removed, broadcasting: 1\nI0511 18:33:04.290467 1996 log.go:172] (0xc000138580) (0xc00053a000) Stream removed, broadcasting: 3\nI0511 18:33:04.290474 1996 log.go:172] (0xc000138580) (0xc00053a0a0) Stream removed, broadcasting: 5\n" May 11 18:33:04.294: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 18:33:04.294: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 18:33:04.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ntkg ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 18:33:04.682: INFO: stderr: "I0511 18:33:04.408225 2018 log.go:172] (0xc000138630) (0xc00066d4a0) Create stream\nI0511 18:33:04.408266 2018 log.go:172] (0xc000138630) (0xc00066d4a0) Stream added, broadcasting: 1\nI0511 18:33:04.410222 2018 log.go:172] (0xc000138630) Reply frame received for 1\nI0511 18:33:04.410272 2018 log.go:172] (0xc000138630) (0xc0003d8000) Create stream\nI0511 18:33:04.410291 2018 log.go:172] (0xc000138630) (0xc0003d8000) Stream added, broadcasting: 3\nI0511 18:33:04.410873 2018 log.go:172] (0xc000138630) Reply frame received for 3\nI0511 18:33:04.410909 2018 log.go:172] (0xc000138630) (0xc0002e4000) Create stream\nI0511 18:33:04.410927 2018 log.go:172] (0xc000138630) (0xc0002e4000) Stream added, broadcasting: 5\nI0511 18:33:04.411558 2018 log.go:172] (0xc000138630) Reply frame received for 5\nI0511 18:33:04.674383 2018 log.go:172] (0xc000138630) Data frame received for 5\nI0511 18:33:04.674427 2018 log.go:172] (0xc0002e4000) (5) Data frame handling\nI0511 
18:33:04.674467 2018 log.go:172] (0xc000138630) Data frame received for 3\nI0511 18:33:04.674479 2018 log.go:172] (0xc0003d8000) (3) Data frame handling\nI0511 18:33:04.674497 2018 log.go:172] (0xc0003d8000) (3) Data frame sent\nI0511 18:33:04.674516 2018 log.go:172] (0xc000138630) Data frame received for 3\nI0511 18:33:04.674530 2018 log.go:172] (0xc0003d8000) (3) Data frame handling\nI0511 18:33:04.676616 2018 log.go:172] (0xc000138630) Data frame received for 1\nI0511 18:33:04.676632 2018 log.go:172] (0xc00066d4a0) (1) Data frame handling\nI0511 18:33:04.676787 2018 log.go:172] (0xc00066d4a0) (1) Data frame sent\nI0511 18:33:04.676817 2018 log.go:172] (0xc000138630) (0xc00066d4a0) Stream removed, broadcasting: 1\nI0511 18:33:04.676843 2018 log.go:172] (0xc000138630) Go away received\nI0511 18:33:04.677073 2018 log.go:172] (0xc000138630) (0xc00066d4a0) Stream removed, broadcasting: 1\nI0511 18:33:04.677089 2018 log.go:172] (0xc000138630) (0xc0003d8000) Stream removed, broadcasting: 3\nI0511 18:33:04.677096 2018 log.go:172] (0xc000138630) (0xc0002e4000) Stream removed, broadcasting: 5\n" May 11 18:33:04.682: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 18:33:04.682: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 18:33:04.682: INFO: Waiting for statefulset status.replicas updated to 0 May 11 18:33:04.685: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 11 18:33:14.749: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 18:33:14.749: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 11 18:33:14.749: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 11 18:33:14.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999652s May 11 18:33:16.308: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 8.930954616s May 11 18:33:17.363: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.445867222s May 11 18:33:18.452: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.391028906s May 11 18:33:19.458: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.302680372s May 11 18:33:20.482: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.296152372s May 11 18:33:21.572: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.272096906s May 11 18:33:22.576: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.182081803s May 11 18:33:23.749: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.177835069s May 11 18:33:24.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.958382ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-2ntkg May 11 18:33:26.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ntkg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:33:27.644: INFO: stderr: "I0511 18:33:27.568265 2041 log.go:172] (0xc00084cbb0) (0xc00037f900) Create stream\nI0511 18:33:27.568787 2041 log.go:172] (0xc00084cbb0) (0xc00037f900) Stream added, broadcasting: 1\nI0511 18:33:27.572307 2041 log.go:172] (0xc00084cbb0) Reply frame received for 1\nI0511 18:33:27.572342 2041 log.go:172] (0xc00084cbb0) (0xc00037ec80) Create stream\nI0511 18:33:27.572351 2041 log.go:172] (0xc00084cbb0) (0xc00037ec80) Stream added, broadcasting: 3\nI0511 18:33:27.573240 2041 log.go:172] (0xc00084cbb0) Reply frame received for 3\nI0511 18:33:27.573302 2041 log.go:172] (0xc00084cbb0) (0xc000876000) Create stream\nI0511 18:33:27.573309 2041 log.go:172] (0xc00084cbb0) (0xc000876000) Stream added, broadcasting: 5\nI0511 18:33:27.574056 2041 log.go:172] (0xc00084cbb0) 
Reply frame received for 5\nI0511 18:33:27.637924 2041 log.go:172] (0xc00084cbb0) Data frame received for 5\nI0511 18:33:27.637969 2041 log.go:172] (0xc000876000) (5) Data frame handling\nI0511 18:33:27.637995 2041 log.go:172] (0xc00084cbb0) Data frame received for 3\nI0511 18:33:27.638005 2041 log.go:172] (0xc00037ec80) (3) Data frame handling\nI0511 18:33:27.638017 2041 log.go:172] (0xc00037ec80) (3) Data frame sent\nI0511 18:33:27.638027 2041 log.go:172] (0xc00084cbb0) Data frame received for 3\nI0511 18:33:27.638040 2041 log.go:172] (0xc00037ec80) (3) Data frame handling\nI0511 18:33:27.639150 2041 log.go:172] (0xc00084cbb0) Data frame received for 1\nI0511 18:33:27.639182 2041 log.go:172] (0xc00037f900) (1) Data frame handling\nI0511 18:33:27.639195 2041 log.go:172] (0xc00037f900) (1) Data frame sent\nI0511 18:33:27.639223 2041 log.go:172] (0xc00084cbb0) (0xc00037f900) Stream removed, broadcasting: 1\nI0511 18:33:27.639252 2041 log.go:172] (0xc00084cbb0) Go away received\nI0511 18:33:27.639503 2041 log.go:172] (0xc00084cbb0) (0xc00037f900) Stream removed, broadcasting: 1\nI0511 18:33:27.639538 2041 log.go:172] (0xc00084cbb0) (0xc00037ec80) Stream removed, broadcasting: 3\nI0511 18:33:27.639552 2041 log.go:172] (0xc00084cbb0) (0xc000876000) Stream removed, broadcasting: 5\n" May 11 18:33:27.644: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 18:33:27.644: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 18:33:27.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ntkg ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:33:27.849: INFO: stderr: "I0511 18:33:27.770281 2064 log.go:172] (0xc000826160) (0xc0007345a0) Create stream\nI0511 18:33:27.770337 2064 log.go:172] (0xc000826160) (0xc0007345a0) Stream added, broadcasting: 1\nI0511 
18:33:27.772476 2064 log.go:172] (0xc000826160) Reply frame received for 1\nI0511 18:33:27.772503 2064 log.go:172] (0xc000826160) (0xc00068cd20) Create stream\nI0511 18:33:27.772510 2064 log.go:172] (0xc000826160) (0xc00068cd20) Stream added, broadcasting: 3\nI0511 18:33:27.773600 2064 log.go:172] (0xc000826160) Reply frame received for 3\nI0511 18:33:27.773655 2064 log.go:172] (0xc000826160) (0xc0006d6000) Create stream\nI0511 18:33:27.773682 2064 log.go:172] (0xc000826160) (0xc0006d6000) Stream added, broadcasting: 5\nI0511 18:33:27.774682 2064 log.go:172] (0xc000826160) Reply frame received for 5\nI0511 18:33:27.840693 2064 log.go:172] (0xc000826160) Data frame received for 5\nI0511 18:33:27.840731 2064 log.go:172] (0xc0006d6000) (5) Data frame handling\nI0511 18:33:27.840757 2064 log.go:172] (0xc000826160) Data frame received for 3\nI0511 18:33:27.840767 2064 log.go:172] (0xc00068cd20) (3) Data frame handling\nI0511 18:33:27.840776 2064 log.go:172] (0xc00068cd20) (3) Data frame sent\nI0511 18:33:27.840784 2064 log.go:172] (0xc000826160) Data frame received for 3\nI0511 18:33:27.840790 2064 log.go:172] (0xc00068cd20) (3) Data frame handling\nI0511 18:33:27.842172 2064 log.go:172] (0xc000826160) Data frame received for 1\nI0511 18:33:27.842194 2064 log.go:172] (0xc0007345a0) (1) Data frame handling\nI0511 18:33:27.842208 2064 log.go:172] (0xc0007345a0) (1) Data frame sent\nI0511 18:33:27.842216 2064 log.go:172] (0xc000826160) (0xc0007345a0) Stream removed, broadcasting: 1\nI0511 18:33:27.842227 2064 log.go:172] (0xc000826160) Go away received\nI0511 18:33:27.842501 2064 log.go:172] (0xc000826160) (0xc0007345a0) Stream removed, broadcasting: 1\nI0511 18:33:27.842535 2064 log.go:172] (0xc000826160) (0xc00068cd20) Stream removed, broadcasting: 3\nI0511 18:33:27.842557 2064 log.go:172] (0xc000826160) (0xc0006d6000) Stream removed, broadcasting: 5\n" May 11 18:33:27.849: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 18:33:27.849: 
INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 18:33:27.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2ntkg ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:33:28.868: INFO: stderr: "I0511 18:33:28.441064 2086 log.go:172] (0xc0008242c0) (0xc000706640) Create stream\nI0511 18:33:28.441311 2086 log.go:172] (0xc0008242c0) (0xc000706640) Stream added, broadcasting: 1\nI0511 18:33:28.443457 2086 log.go:172] (0xc0008242c0) Reply frame received for 1\nI0511 18:33:28.443492 2086 log.go:172] (0xc0008242c0) (0xc000666be0) Create stream\nI0511 18:33:28.443504 2086 log.go:172] (0xc0008242c0) (0xc000666be0) Stream added, broadcasting: 3\nI0511 18:33:28.444261 2086 log.go:172] (0xc0008242c0) Reply frame received for 3\nI0511 18:33:28.444306 2086 log.go:172] (0xc0008242c0) (0xc0007066e0) Create stream\nI0511 18:33:28.444317 2086 log.go:172] (0xc0008242c0) (0xc0007066e0) Stream added, broadcasting: 5\nI0511 18:33:28.444999 2086 log.go:172] (0xc0008242c0) Reply frame received for 5\nI0511 18:33:28.861284 2086 log.go:172] (0xc0008242c0) Data frame received for 5\nI0511 18:33:28.861401 2086 log.go:172] (0xc0007066e0) (5) Data frame handling\nI0511 18:33:28.861446 2086 log.go:172] (0xc0008242c0) Data frame received for 3\nI0511 18:33:28.861480 2086 log.go:172] (0xc000666be0) (3) Data frame handling\nI0511 18:33:28.861505 2086 log.go:172] (0xc000666be0) (3) Data frame sent\nI0511 18:33:28.861565 2086 log.go:172] (0xc0008242c0) Data frame received for 3\nI0511 18:33:28.861584 2086 log.go:172] (0xc000666be0) (3) Data frame handling\nI0511 18:33:28.863542 2086 log.go:172] (0xc0008242c0) Data frame received for 1\nI0511 18:33:28.863559 2086 log.go:172] (0xc000706640) (1) Data frame handling\nI0511 18:33:28.863570 2086 log.go:172] (0xc000706640) (1) Data frame sent\nI0511 18:33:28.863634 2086 
log.go:172] (0xc0008242c0) (0xc000706640) Stream removed, broadcasting: 1\nI0511 18:33:28.863704 2086 log.go:172] (0xc0008242c0) Go away received\nI0511 18:33:28.863794 2086 log.go:172] (0xc0008242c0) (0xc000706640) Stream removed, broadcasting: 1\nI0511 18:33:28.863812 2086 log.go:172] (0xc0008242c0) (0xc000666be0) Stream removed, broadcasting: 3\nI0511 18:33:28.863821 2086 log.go:172] (0xc0008242c0) (0xc0007066e0) Stream removed, broadcasting: 5\n" May 11 18:33:28.868: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 18:33:28.868: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 18:33:28.868: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 11 18:33:59.043: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2ntkg May 11 18:33:59.046: INFO: Scaling statefulset ss to 0 May 11 18:33:59.054: INFO: Waiting for statefulset status.replicas updated to 0 May 11 18:33:59.056: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:33:59.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-2ntkg" for this suite. 
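The scale-up/scale-down halting seen above works because each pod's readiness probe fails once `index.html` is moved out of nginx's web root (the `mv` commands in the log), and the StatefulSet controller will not proceed past an unready pod. A minimal sketch of a StatefulSet exercising that behavior — the e2e test builds its object in Go, so field values here are illustrative assumptions, not the test's actual spec:

```yaml
# Illustrative sketch only. It reproduces the observable behavior from the
# log: nginx pods whose readiness depends on /usr/share/nginx/html/index.html
# being present, so `mv`-ing the file away flips the pod to NotReady and
# halts ordered scaling.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test            # headless Service name (assumed)
  replicas: 3
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine   # tag assumed; it matches the image used elsewhere in this run
        readinessProbe:
          httpGet:
            path: /index.html      # moving this file out of the web root makes the pod NotReady
            port: 80
```

With this in place, scaling up proceeds ss-0 → ss-1 → ss-2 only while each predecessor is Ready, and scaling down runs in reverse order, matching the "scaled up in order" / "scaled down in reverse order" steps in the log.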
May 11 18:34:14.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:34:14.497: INFO: namespace: e2e-tests-statefulset-2ntkg, resource: bindings, ignored listing per whitelist May 11 18:34:14.550: INFO: namespace e2e-tests-statefulset-2ntkg deletion completed in 14.701419559s • [SLOW TEST:126.320 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:34:14.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 11 18:34:15.663: INFO: Waiting up to 5m0s for pod "pod-0478c334-93b6-11ea-b832-0242ac110018" in namespace "e2e-tests-emptydir-9tvqh" to be "success or failure" May 11 18:34:15.673: INFO: Pod "pod-0478c334-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.314885ms May 11 18:34:17.676: INFO: Pod "pod-0478c334-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013470856s May 11 18:34:19.764: INFO: Pod "pod-0478c334-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101151916s May 11 18:34:21.767: INFO: Pod "pod-0478c334-93b6-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104484934s STEP: Saw pod success May 11 18:34:21.767: INFO: Pod "pod-0478c334-93b6-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:34:21.770: INFO: Trying to get logs from node hunter-worker2 pod pod-0478c334-93b6-11ea-b832-0242ac110018 container test-container: STEP: delete the pod May 11 18:34:22.418: INFO: Waiting for pod pod-0478c334-93b6-11ea-b832-0242ac110018 to disappear May 11 18:34:22.506: INFO: Pod pod-0478c334-93b6-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:34:22.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9tvqh" for this suite. 
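The emptyDir test above ("non-root,0666,tmpfs") checks that a memory-backed emptyDir accepts a 0666-mode file created by a non-root user. A hedged sketch of an equivalent pod — the real test uses its own test image and arguments, so the image, UID, and command here are assumptions:

```yaml
# Illustrative sketch, not the test's actual pod spec. It combines the three
# properties in the test name: a tmpfs-backed (medium: Memory) emptyDir,
# a non-root security context, and a file created with mode 0666.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  securityContext:
    runAsUser: 1001            # arbitrary non-root UID (assumption)
  containers:
  - name: test-container
    image: busybox             # assumed image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs backing
```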
May 11 18:34:30.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:34:30.793: INFO: namespace: e2e-tests-emptydir-9tvqh, resource: bindings, ignored listing per whitelist May 11 18:34:30.832: INFO: namespace e2e-tests-emptydir-9tvqh deletion completed in 8.321785159s • [SLOW TEST:16.283 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:34:30.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 11 18:34:31.281: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0dade995-93b6-11ea-b832-0242ac110018" in namespace "e2e-tests-downward-api-jkjpz" to be "success or failure" May 11 18:34:31.329: INFO: Pod 
"downwardapi-volume-0dade995-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 47.821839ms May 11 18:34:33.741: INFO: Pod "downwardapi-volume-0dade995-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.459229632s May 11 18:34:35.744: INFO: Pod "downwardapi-volume-0dade995-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.462941931s May 11 18:34:38.004: INFO: Pod "downwardapi-volume-0dade995-93b6-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.722940067s STEP: Saw pod success May 11 18:34:38.004: INFO: Pod "downwardapi-volume-0dade995-93b6-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:34:38.008: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0dade995-93b6-11ea-b832-0242ac110018 container client-container: STEP: delete the pod May 11 18:34:38.618: INFO: Waiting for pod downwardapi-volume-0dade995-93b6-11ea-b832-0242ac110018 to disappear May 11 18:34:38.644: INFO: Pod downwardapi-volume-0dade995-93b6-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:34:38.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jkjpz" for this suite. 
May 11 18:34:48.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:34:48.964: INFO: namespace: e2e-tests-downward-api-jkjpz, resource: bindings, ignored listing per whitelist May 11 18:34:48.964: INFO: namespace e2e-tests-downward-api-jkjpz deletion completed in 10.316181908s • [SLOW TEST:18.131 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:34:48.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-1892f9e2-93b6-11ea-b832-0242ac110018 STEP: Creating a pod to test consume configMaps May 11 18:34:49.828: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-18985cf2-93b6-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-jc97c" to be "success or failure" May 11 18:34:50.196: INFO: Pod "pod-projected-configmaps-18985cf2-93b6-11ea-b832-0242ac110018": Phase="Pending", 
Reason="", readiness=false. Elapsed: 367.785553ms May 11 18:34:52.199: INFO: Pod "pod-projected-configmaps-18985cf2-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.370455956s May 11 18:34:54.202: INFO: Pod "pod-projected-configmaps-18985cf2-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374332748s May 11 18:34:56.206: INFO: Pod "pod-projected-configmaps-18985cf2-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.377920345s May 11 18:34:58.219: INFO: Pod "pod-projected-configmaps-18985cf2-93b6-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 8.390993649s May 11 18:35:00.395: INFO: Pod "pod-projected-configmaps-18985cf2-93b6-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.567298251s STEP: Saw pod success May 11 18:35:00.395: INFO: Pod "pod-projected-configmaps-18985cf2-93b6-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:35:00.398: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-18985cf2-93b6-11ea-b832-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 11 18:35:00.617: INFO: Waiting for pod pod-projected-configmaps-18985cf2-93b6-11ea-b832-0242ac110018 to disappear May 11 18:35:00.910: INFO: Pod pod-projected-configmaps-18985cf2-93b6-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:35:00.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jc97c" for this suite. 
May 11 18:35:09.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:35:09.509: INFO: namespace: e2e-tests-projected-jc97c, resource: bindings, ignored listing per whitelist May 11 18:35:09.554: INFO: namespace e2e-tests-projected-jc97c deletion completed in 8.640866783s • [SLOW TEST:20.591 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:35:09.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode May 11 18:35:10.114: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-5hk8w" to be "success or failure" May 11 18:35:10.160: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 45.815081ms May 11 18:35:12.532: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.418084s May 11 18:35:14.873: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.75931957s May 11 18:35:16.876: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.762222565s May 11 18:35:18.879: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.765291879s STEP: Saw pod success May 11 18:35:18.879: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 11 18:35:18.881: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 11 18:35:20.139: INFO: Waiting for pod pod-host-path-test to disappear May 11 18:35:20.292: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:35:20.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-5hk8w" for this suite. 
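The HostPath test above ("should give a volume the correct mode") mounts a host directory into `pod-host-path-test` and has its test containers verify the mount point's file mode. A hedged sketch — the host path, image, and `type` are assumptions, while the pod and container names match the log:

```yaml
# Illustrative sketch only. The real test runs multiple containers
# (test-container-1, test-container-2); this shows one checking the
# mode of the hostPath mount.
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  containers:
  - name: test-container-1
    image: busybox             # assumed image
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp               # path assumed
      type: DirectoryOrCreate  # assumed; the test may rely on the default (empty) type
```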
May 11 18:35:26.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:35:27.328: INFO: namespace: e2e-tests-hostpath-5hk8w, resource: bindings, ignored listing per whitelist May 11 18:35:27.385: INFO: namespace e2e-tests-hostpath-5hk8w deletion completed in 7.0891755s • [SLOW TEST:17.831 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:35:27.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 11 18:35:27.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 11 18:35:28.326: INFO: stderr: "" May 11 18:35:28.326: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", 
Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:35:28.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-g89vr" for this suite. May 11 18:35:34.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:35:34.516: INFO: namespace: e2e-tests-kubectl-g89vr, resource: bindings, ignored listing per whitelist May 11 18:35:34.543: INFO: namespace e2e-tests-kubectl-g89vr deletion completed in 6.178587297s • [SLOW TEST:7.158 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:35:34.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 18:35:35.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-fxzbz' May 11 18:35:35.395: INFO: stderr: "" May 11 18:35:35.395: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 May 11 18:35:35.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-fxzbz' May 11 18:35:40.830: INFO: stderr: "" May 11 18:35:40.830: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:35:40.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fxzbz" for this suite. 
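With `--restart=Never` and `--generator=run-pod/v1`, `kubectl run` creates a bare Pod rather than a Deployment or Job. The `pod/e2e-test-nginx-pod` object created above is roughly equivalent to applying a manifest like this (sketch; `kubectl run` names the container after the pod and adds the `run` label by default):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod     # label kubectl run sets automatically
spec:
  restartPolicy: Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```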
May 11 18:35:53.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:35:53.351: INFO: namespace: e2e-tests-kubectl-fxzbz, resource: bindings, ignored listing per whitelist May 11 18:35:53.369: INFO: namespace e2e-tests-kubectl-fxzbz deletion completed in 12.505852631s • [SLOW TEST:18.825 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:35:53.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 11 18:35:53.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hlmbt' May 11 18:35:55.671: INFO: stderr: "" May 11 
18:35:55.671: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 18:35:55.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hlmbt' May 11 18:35:56.264: INFO: stderr: "" May 11 18:35:56.264: INFO: stdout: "update-demo-nautilus-4c8n9 update-demo-nautilus-987sz " May 11 18:35:56.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4c8n9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:35:56.889: INFO: stderr: "" May 11 18:35:56.889: INFO: stdout: "" May 11 18:35:56.890: INFO: update-demo-nautilus-4c8n9 is created but not running May 11 18:36:01.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:02.390: INFO: stderr: "" May 11 18:36:02.390: INFO: stdout: "update-demo-nautilus-4c8n9 update-demo-nautilus-987sz " May 11 18:36:02.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4c8n9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:02.695: INFO: stderr: "" May 11 18:36:02.695: INFO: stdout: "" May 11 18:36:02.695: INFO: update-demo-nautilus-4c8n9 is created but not running May 11 18:36:07.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:07.802: INFO: stderr: "" May 11 18:36:07.802: INFO: stdout: "update-demo-nautilus-4c8n9 update-demo-nautilus-987sz " May 11 18:36:07.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4c8n9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:07.898: INFO: stderr: "" May 11 18:36:07.898: INFO: stdout: "true" May 11 18:36:07.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4c8n9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:07.987: INFO: stderr: "" May 11 18:36:07.987: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:36:07.987: INFO: validating pod update-demo-nautilus-4c8n9 May 11 18:36:07.990: INFO: got data: { "image": "nautilus.jpg" } May 11 18:36:07.990: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:36:07.990: INFO: update-demo-nautilus-4c8n9 is verified up and running May 11 18:36:07.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-987sz -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:08.081: INFO: stderr: "" May 11 18:36:08.081: INFO: stdout: "true" May 11 18:36:08.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-987sz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:08.187: INFO: stderr: "" May 11 18:36:08.187: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:36:08.187: INFO: validating pod update-demo-nautilus-987sz May 11 18:36:08.192: INFO: got data: { "image": "nautilus.jpg" } May 11 18:36:08.192: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:36:08.192: INFO: update-demo-nautilus-987sz is verified up and running STEP: scaling down the replication controller May 11 18:36:08.384: INFO: scanned /root for discovery docs: May 11 18:36:08.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:09.914: INFO: stderr: "" May 11 18:36:09.914: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 11 18:36:09.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:10.020: INFO: stderr: "" May 11 18:36:10.020: INFO: stdout: "update-demo-nautilus-4c8n9 update-demo-nautilus-987sz " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 18:36:15.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:15.108: INFO: stderr: "" May 11 18:36:15.108: INFO: stdout: "update-demo-nautilus-4c8n9 update-demo-nautilus-987sz " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 18:36:20.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:20.232: INFO: stderr: "" May 11 18:36:20.232: INFO: stdout: "update-demo-nautilus-4c8n9 update-demo-nautilus-987sz " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 18:36:25.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:25.346: INFO: stderr: "" May 11 18:36:25.346: INFO: stdout: "update-demo-nautilus-4c8n9 " May 11 18:36:25.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4c8n9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:25.452: INFO: stderr: "" May 11 18:36:25.452: INFO: stdout: "true" May 11 18:36:25.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4c8n9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:25.546: INFO: stderr: "" May 11 18:36:25.546: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:36:25.546: INFO: validating pod update-demo-nautilus-4c8n9 May 11 18:36:25.549: INFO: got data: { "image": "nautilus.jpg" } May 11 18:36:25.549: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:36:25.549: INFO: update-demo-nautilus-4c8n9 is verified up and running STEP: scaling up the replication controller May 11 18:36:25.551: INFO: scanned /root for discovery docs: May 11 18:36:25.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:28.355: INFO: stderr: "" May 11 18:36:28.355: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 18:36:28.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:28.726: INFO: stderr: "" May 11 18:36:28.726: INFO: stdout: "update-demo-nautilus-4c8n9 update-demo-nautilus-8xjhq " May 11 18:36:28.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4c8n9 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:28.818: INFO: stderr: "" May 11 18:36:28.819: INFO: stdout: "true" May 11 18:36:28.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4c8n9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:29.253: INFO: stderr: "" May 11 18:36:29.253: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:36:29.253: INFO: validating pod update-demo-nautilus-4c8n9 May 11 18:36:29.582: INFO: got data: { "image": "nautilus.jpg" } May 11 18:36:29.582: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:36:29.582: INFO: update-demo-nautilus-4c8n9 is verified up and running May 11 18:36:29.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8xjhq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:29.731: INFO: stderr: "" May 11 18:36:29.731: INFO: stdout: "" May 11 18:36:29.731: INFO: update-demo-nautilus-8xjhq is created but not running May 11 18:36:34.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:35.096: INFO: stderr: "" May 11 18:36:35.096: INFO: stdout: "update-demo-nautilus-4c8n9 update-demo-nautilus-8xjhq " May 11 18:36:35.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4c8n9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:35.179: INFO: stderr: "" May 11 18:36:35.179: INFO: stdout: "true" May 11 18:36:35.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4c8n9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:35.274: INFO: stderr: "" May 11 18:36:35.274: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:36:35.274: INFO: validating pod update-demo-nautilus-4c8n9 May 11 18:36:35.277: INFO: got data: { "image": "nautilus.jpg" } May 11 18:36:35.277: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:36:35.277: INFO: update-demo-nautilus-4c8n9 is verified up and running May 11 18:36:35.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8xjhq -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:35.366: INFO: stderr: "" May 11 18:36:35.366: INFO: stdout: "" May 11 18:36:35.366: INFO: update-demo-nautilus-8xjhq is created but not running May 11 18:36:40.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:40.465: INFO: stderr: "" May 11 18:36:40.465: INFO: stdout: "update-demo-nautilus-4c8n9 update-demo-nautilus-8xjhq " May 11 18:36:40.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4c8n9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:40.555: INFO: stderr: "" May 11 18:36:40.555: INFO: stdout: "true" May 11 18:36:40.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4c8n9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:40.652: INFO: stderr: "" May 11 18:36:40.653: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:36:40.653: INFO: validating pod update-demo-nautilus-4c8n9 May 11 18:36:40.655: INFO: got data: { "image": "nautilus.jpg" } May 11 18:36:40.655: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 11 18:36:40.655: INFO: update-demo-nautilus-4c8n9 is verified up and running May 11 18:36:40.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8xjhq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:40.827: INFO: stderr: "" May 11 18:36:40.827: INFO: stdout: "true" May 11 18:36:40.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8xjhq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:40.930: INFO: stderr: "" May 11 18:36:40.930: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 18:36:40.930: INFO: validating pod update-demo-nautilus-8xjhq May 11 18:36:40.933: INFO: got data: { "image": "nautilus.jpg" } May 11 18:36:40.933: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 18:36:40.933: INFO: update-demo-nautilus-8xjhq is verified up and running STEP: using delete to clean up resources May 11 18:36:40.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:41.087: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 11 18:36:41.087: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 11 18:36:41.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-hlmbt' May 11 18:36:41.428: INFO: stderr: "No resources found.\n" May 11 18:36:41.428: INFO: stdout: "" May 11 18:36:41.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-hlmbt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 18:36:41.532: INFO: stderr: "" May 11 18:36:41.532: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:36:41.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hlmbt" for this suite. 
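The repeated `kubectl get pods -o template --template={{range.items}}{{.metadata.name}} {{end}}` calls above render a Go template over the pod list to print every pod name. The real templates also use kubectl's custom `exists` helper; the sketch below reproduces just the range-over-items part with the standard library's `text/template` against a stub object. Note that kubectl walks decoded JSON maps with lowercase keys, whereas Go structs expose exported (capitalized) field names, so the field references differ in case:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Stub of the object shape `kubectl get pods -o template` walks:
// a list with .items[].metadata.name.
type metadata struct{ Name string }
type pod struct{ Metadata metadata }
type podList struct{ Items []pod }

// podNames renders the same "print every pod name" template the log runs,
// adapted to exported struct field names.
func podNames(list podList) string {
	tmpl := template.Must(template.New("names").Parse(
		"{{range .Items}}{{.Metadata.Name}} {{end}}"))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, list); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	list := podList{Items: []pod{
		{metadata{"update-demo-nautilus-4c8n9"}},
		{metadata{"update-demo-nautilus-987sz"}},
	}}
	// Prints the names space-separated, matching the stdout seen in the log.
	fmt.Print(podNames(list))
}
```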
May 11 18:36:52.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:36:52.765: INFO: namespace: e2e-tests-kubectl-hlmbt, resource: bindings, ignored listing per whitelist May 11 18:36:52.770: INFO: namespace e2e-tests-kubectl-hlmbt deletion completed in 11.234571469s • [SLOW TEST:59.401 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:36:52.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-62587c5f-93b6-11ea-b832-0242ac110018 STEP: Creating a pod to test consume configMaps May 11 18:36:53.492: INFO: Waiting up to 5m0s for pod "pod-configmaps-6285efa5-93b6-11ea-b832-0242ac110018" in namespace "e2e-tests-configmap-nrzr7" to be "success or failure" May 11 18:36:53.641: INFO: Pod 
"pod-configmaps-6285efa5-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 149.170109ms May 11 18:36:55.672: INFO: Pod "pod-configmaps-6285efa5-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180371406s May 11 18:36:57.676: INFO: Pod "pod-configmaps-6285efa5-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184482546s May 11 18:36:59.681: INFO: Pod "pod-configmaps-6285efa5-93b6-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.188710539s May 11 18:37:02.103: INFO: Pod "pod-configmaps-6285efa5-93b6-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.61120048s STEP: Saw pod success May 11 18:37:02.103: INFO: Pod "pod-configmaps-6285efa5-93b6-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:37:02.347: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-6285efa5-93b6-11ea-b832-0242ac110018 container configmap-volume-test: STEP: delete the pod May 11 18:37:02.642: INFO: Waiting for pod pod-configmaps-6285efa5-93b6-11ea-b832-0242ac110018 to disappear May 11 18:37:02.707: INFO: Pod pod-configmaps-6285efa5-93b6-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:37:02.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-nrzr7" for this suite. 
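The "consumable in multiple volumes in the same pod" case mounts one ConfigMap at two different paths in the same container. A sketch of that shape, with illustrative names and `busybox` standing in for the e2e test image (the real test suffixes the ConfigMap name with a UID):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
```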
May 11 18:37:16.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:37:16.924: INFO: namespace: e2e-tests-configmap-nrzr7, resource: bindings, ignored listing per whitelist May 11 18:37:16.929: INFO: namespace e2e-tests-configmap-nrzr7 deletion completed in 14.219359341s • [SLOW TEST:24.158 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:37:16.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 11 18:37:17.595: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:37:19.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-zll5v" for this suite. 
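The CustomResourceDefinition test above only creates and deletes CRD objects. On a v1.13 cluster like this one, the CRD API is still `apiextensions.k8s.io/v1beta1`; a minimal definition of the kind being exercised looks like this (group and names are illustrative — the real test randomizes them):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com     # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
```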
May 11 18:37:25.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:37:25.501: INFO: namespace: e2e-tests-custom-resource-definition-zll5v, resource: bindings, ignored listing per whitelist May 11 18:37:25.526: INFO: namespace e2e-tests-custom-resource-definition-zll5v deletion completed in 6.075052988s • [SLOW TEST:8.597 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:37:25.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 11 18:37:26.081: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75f706d5-93b6-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-rvtjq" to 
be "success or failure" May 11 18:37:26.116: INFO: Pod "downwardapi-volume-75f706d5-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 34.756448ms May 11 18:37:28.120: INFO: Pod "downwardapi-volume-75f706d5-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03889374s May 11 18:37:30.487: INFO: Pod "downwardapi-volume-75f706d5-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.405259554s May 11 18:37:32.491: INFO: Pod "downwardapi-volume-75f706d5-93b6-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.409492671s STEP: Saw pod success May 11 18:37:32.491: INFO: Pod "downwardapi-volume-75f706d5-93b6-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:37:32.493: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-75f706d5-93b6-11ea-b832-0242ac110018 container client-container: STEP: delete the pod May 11 18:37:32.656: INFO: Waiting for pod downwardapi-volume-75f706d5-93b6-11ea-b832-0242ac110018 to disappear May 11 18:37:32.716: INFO: Pod downwardapi-volume-75f706d5-93b6-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:37:32.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rvtjq" for this suite. 
May 11 18:37:38.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:37:39.026: INFO: namespace: e2e-tests-projected-rvtjq, resource: bindings, ignored listing per whitelist May 11 18:37:39.044: INFO: namespace e2e-tests-projected-rvtjq deletion completed in 6.323689684s • [SLOW TEST:13.517 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:37:39.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 18:37:39.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-wv88c' May 11 18:37:39.424: INFO: stderr: 
"kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 18:37:39.424: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 May 11 18:37:41.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-wv88c' May 11 18:37:41.919: INFO: stderr: "" May 11 18:37:41.919: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:37:41.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wv88c" for this suite. 
May 11 18:38:04.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:38:04.395: INFO: namespace: e2e-tests-kubectl-wv88c, resource: bindings, ignored listing per whitelist May 11 18:38:04.516: INFO: namespace e2e-tests-kubectl-wv88c deletion completed in 22.466503019s • [SLOW TEST:25.472 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:38:04.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 11 18:38:04.982: INFO: Waiting up to 5m0s for pod "downward-api-8d24369b-93b6-11ea-b832-0242ac110018" in namespace "e2e-tests-downward-api-fqbrh" to be "success or failure" May 11 18:38:05.079: INFO: Pod "downward-api-8d24369b-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 97.063916ms May 11 18:38:07.091: INFO: Pod "downward-api-8d24369b-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108612171s May 11 18:38:09.343: INFO: Pod "downward-api-8d24369b-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.361413664s May 11 18:38:11.371: INFO: Pod "downward-api-8d24369b-93b6-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.388451437s STEP: Saw pod success May 11 18:38:11.371: INFO: Pod "downward-api-8d24369b-93b6-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:38:11.374: INFO: Trying to get logs from node hunter-worker pod downward-api-8d24369b-93b6-11ea-b832-0242ac110018 container dapi-container: STEP: delete the pod May 11 18:38:11.774: INFO: Waiting for pod downward-api-8d24369b-93b6-11ea-b832-0242ac110018 to disappear May 11 18:38:11.810: INFO: Pod downward-api-8d24369b-93b6-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:38:11.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-fqbrh" for this suite. 
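
The Downward API test above exposes pod metadata as environment variables via `fieldRef`. A minimal sketch of that kind of pod (names and image are illustrative, not from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-pod            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]   # print the injected variables and exit
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```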
May 11 18:38:19.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:38:19.860: INFO: namespace: e2e-tests-downward-api-fqbrh, resource: bindings, ignored listing per whitelist May 11 18:38:19.886: INFO: namespace e2e-tests-downward-api-fqbrh deletion completed in 8.071952166s • [SLOW TEST:15.369 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:38:19.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-961b455d-93b6-11ea-b832-0242ac110018 STEP: Creating a pod to test consume configMaps May 11 18:38:20.055: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-96241138-93b6-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-z8srw" to be "success or failure" May 11 18:38:20.099: INFO: Pod "pod-projected-configmaps-96241138-93b6-11ea-b832-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 43.740592ms May 11 18:38:22.102: INFO: Pod "pod-projected-configmaps-96241138-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047130779s May 11 18:38:24.313: INFO: Pod "pod-projected-configmaps-96241138-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257969113s May 11 18:38:26.316: INFO: Pod "pod-projected-configmaps-96241138-93b6-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.261413698s STEP: Saw pod success May 11 18:38:26.316: INFO: Pod "pod-projected-configmaps-96241138-93b6-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:38:26.319: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-96241138-93b6-11ea-b832-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 11 18:38:26.645: INFO: Waiting for pod pod-projected-configmaps-96241138-93b6-11ea-b832-0242ac110018 to disappear May 11 18:38:26.810: INFO: Pod pod-projected-configmaps-96241138-93b6-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:38:26.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z8srw" for this suite. 
May 11 18:38:34.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:38:34.894: INFO: namespace: e2e-tests-projected-z8srw, resource: bindings, ignored listing per whitelist May 11 18:38:34.929: INFO: namespace e2e-tests-projected-z8srw deletion completed in 8.116279053s • [SLOW TEST:15.043 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:38:34.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 11 18:38:36.106: INFO: Waiting up to 5m0s for pod "pod-9fb53283-93b6-11ea-b832-0242ac110018" in namespace "e2e-tests-emptydir-qc7gw" to be "success or failure" May 11 18:38:36.126: INFO: Pod "pod-9fb53283-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.509855ms May 11 18:38:38.131: INFO: Pod "pod-9fb53283-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025073449s May 11 18:38:40.135: INFO: Pod "pod-9fb53283-93b6-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.028972944s May 11 18:38:42.137: INFO: Pod "pod-9fb53283-93b6-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031623289s STEP: Saw pod success May 11 18:38:42.137: INFO: Pod "pod-9fb53283-93b6-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:38:42.139: INFO: Trying to get logs from node hunter-worker2 pod pod-9fb53283-93b6-11ea-b832-0242ac110018 container test-container: STEP: delete the pod May 11 18:38:42.776: INFO: Waiting for pod pod-9fb53283-93b6-11ea-b832-0242ac110018 to disappear May 11 18:38:42.805: INFO: Pod pod-9fb53283-93b6-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:38:42.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qc7gw" for this suite. 
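
The emptyDir test above exercises a tmpfs-backed volume (`medium: Memory`) with 0777 file permissions. A minimal sketch of such a pod (image and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-pod   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # create a file, set mode 0777, and print the resulting permissions
    command: ["sh", "-c", "touch /mnt/scratch/f && chmod 0777 /mnt/scratch/f && stat -c %a /mnt/scratch/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory         # tmpfs-backed, as in the (root,0777,tmpfs) case
```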
May 11 18:38:48.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:38:48.866: INFO: namespace: e2e-tests-emptydir-qc7gw, resource: bindings, ignored listing per whitelist May 11 18:38:48.890: INFO: namespace e2e-tests-emptydir-qc7gw deletion completed in 6.082270902s • [SLOW TEST:13.960 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:38:48.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 11 18:38:49.130: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a77048c2-93b6-11ea-b832-0242ac110018" in namespace "e2e-tests-downward-api-zbtmj" to be "success or failure" May 11 18:38:49.203: INFO: Pod "downwardapi-volume-a77048c2-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", 
readiness=false. Elapsed: 72.770246ms May 11 18:38:51.207: INFO: Pod "downwardapi-volume-a77048c2-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076561435s May 11 18:38:53.211: INFO: Pod "downwardapi-volume-a77048c2-93b6-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080943845s STEP: Saw pod success May 11 18:38:53.211: INFO: Pod "downwardapi-volume-a77048c2-93b6-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:38:53.214: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-a77048c2-93b6-11ea-b832-0242ac110018 container client-container: STEP: delete the pod May 11 18:38:53.260: INFO: Waiting for pod downwardapi-volume-a77048c2-93b6-11ea-b832-0242ac110018 to disappear May 11 18:38:53.284: INFO: Pod downwardapi-volume-a77048c2-93b6-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:38:53.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-zbtmj" for this suite. 
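
The downwardAPI-volume test above surfaces a container's CPU request as a file via `resourceFieldRef`. A sketch of the pattern (the request value and divisor are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m              # file will contain the request in millicores
```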
May 11 18:38:59.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:38:59.313: INFO: namespace: e2e-tests-downward-api-zbtmj, resource: bindings, ignored listing per whitelist May 11 18:38:59.370: INFO: namespace e2e-tests-downward-api-zbtmj deletion completed in 6.082491049s • [SLOW TEST:10.480 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:38:59.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-adfb92b3-93b6-11ea-b832-0242ac110018 STEP: Creating a pod to test consume secrets May 11 18:39:00.635: INFO: Waiting up to 5m0s for pod "pod-secrets-ae30b56c-93b6-11ea-b832-0242ac110018" in namespace "e2e-tests-secrets-dq6g6" to be "success or failure" May 11 18:39:00.678: INFO: Pod "pod-secrets-ae30b56c-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.888033ms May 11 18:39:02.799: INFO: Pod "pod-secrets-ae30b56c-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16382619s May 11 18:39:05.146: INFO: Pod "pod-secrets-ae30b56c-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.510964159s May 11 18:39:07.152: INFO: Pod "pod-secrets-ae30b56c-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.516470166s May 11 18:39:09.155: INFO: Pod "pod-secrets-ae30b56c-93b6-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.51946228s STEP: Saw pod success May 11 18:39:09.155: INFO: Pod "pod-secrets-ae30b56c-93b6-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:39:09.156: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-ae30b56c-93b6-11ea-b832-0242ac110018 container secret-volume-test: STEP: delete the pod May 11 18:39:09.236: INFO: Waiting for pod pod-secrets-ae30b56c-93b6-11ea-b832-0242ac110018 to disappear May 11 18:39:09.251: INFO: Pod pod-secrets-ae30b56c-93b6-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:39:09.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-dq6g6" for this suite. 
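
The Secrets test above mounts a Secret as a volume so each key appears as a file. A sketch under hypothetical names:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets            # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
```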
May 11 18:39:17.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:39:17.438: INFO: namespace: e2e-tests-secrets-dq6g6, resource: bindings, ignored listing per whitelist May 11 18:39:17.445: INFO: namespace e2e-tests-secrets-dq6g6 deletion completed in 8.192209188s • [SLOW TEST:18.076 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:39:17.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 11 18:39:17.808: INFO: Waiting up to 5m0s for pod "pod-b87e2cf9-93b6-11ea-b832-0242ac110018" in namespace "e2e-tests-emptydir-zlsqc" to be "success or failure" May 11 18:39:17.861: INFO: Pod "pod-b87e2cf9-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 53.223484ms May 11 18:39:19.980: INFO: Pod "pod-b87e2cf9-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.172620664s May 11 18:39:21.996: INFO: Pod "pod-b87e2cf9-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188286072s May 11 18:39:23.999: INFO: Pod "pod-b87e2cf9-93b6-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.191376928s STEP: Saw pod success May 11 18:39:23.999: INFO: Pod "pod-b87e2cf9-93b6-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:39:24.001: INFO: Trying to get logs from node hunter-worker2 pod pod-b87e2cf9-93b6-11ea-b832-0242ac110018 container test-container: STEP: delete the pod May 11 18:39:24.024: INFO: Waiting for pod pod-b87e2cf9-93b6-11ea-b832-0242ac110018 to disappear May 11 18:39:24.050: INFO: Pod pod-b87e2cf9-93b6-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:39:24.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zlsqc" for this suite. 
May 11 18:39:30.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:39:30.098: INFO: namespace: e2e-tests-emptydir-zlsqc, resource: bindings, ignored listing per whitelist May 11 18:39:30.129: INFO: namespace e2e-tests-emptydir-zlsqc deletion completed in 6.077185732s • [SLOW TEST:12.684 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:39:30.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-c00792b5-93b6-11ea-b832-0242ac110018 STEP: Creating a pod to test consume secrets May 11 18:39:30.377: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c00d3b14-93b6-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-2hl67" to be "success or failure" May 11 18:39:30.397: INFO: Pod "pod-projected-secrets-c00d3b14-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.55598ms May 11 18:39:32.590: INFO: Pod "pod-projected-secrets-c00d3b14-93b6-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212206877s May 11 18:39:34.799: INFO: Pod "pod-projected-secrets-c00d3b14-93b6-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.421280313s May 11 18:39:36.802: INFO: Pod "pod-projected-secrets-c00d3b14-93b6-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.424188614s STEP: Saw pod success May 11 18:39:36.802: INFO: Pod "pod-projected-secrets-c00d3b14-93b6-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 18:39:36.804: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-c00d3b14-93b6-11ea-b832-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 11 18:39:36.944: INFO: Waiting for pod pod-projected-secrets-c00d3b14-93b6-11ea-b832-0242ac110018 to disappear May 11 18:39:37.000: INFO: Pod pod-projected-secrets-c00d3b14-93b6-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:39:37.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2hl67" for this suite. 
May 11 18:39:43.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:39:43.035: INFO: namespace: e2e-tests-projected-2hl67, resource: bindings, ignored listing per whitelist May 11 18:39:43.079: INFO: namespace e2e-tests-projected-2hl67 deletion completed in 6.075116518s • [SLOW TEST:12.950 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:39:43.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-c7cf06a9-93b6-11ea-b832-0242ac110018 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-c7cf06a9-93b6-11ea-b832-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:39:49.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-6ncds" for this suite. 
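
The ConfigMap-update test above relies on a property worth noting: keys mounted as a volume are refreshed by the kubelet after the ConfigMap changes (with some sync delay), whereas ConfigMap values injected as environment variables are fixed at container start. A sketch of a pod that would observe such an update (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-watch-pod    # illustrative
spec:
  containers:
  - name: cm-watch
    image: busybox
    # re-read the mounted key periodically; its contents change after the
    # ConfigMap is updated, once the kubelet syncs the volume
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: configmap-test-upd
```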
May 11 18:40:15.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:40:15.520: INFO: namespace: e2e-tests-configmap-6ncds, resource: bindings, ignored listing per whitelist May 11 18:40:15.544: INFO: namespace e2e-tests-configmap-6ncds deletion completed in 26.084309064s • [SLOW TEST:32.465 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:40:15.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0511 18:40:47.618759 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 11 18:40:47.618: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:40:47.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-9v86l" for this suite. 
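
The garbage-collector test above deletes a Deployment with `propagationPolicy: Orphan` and then verifies the ReplicaSet survives. Per the Kubernetes garbage-collection documentation, that policy is expressed in the `DeleteOptions` body of the DELETE request; a sketch:

```yaml
# DeleteOptions body sent with the DELETE request — dependents (the
# ReplicaSet here) are orphaned instead of cascade-deleted
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```

With kubectl the same behavior is reachable via the `--cascade` flag (`--cascade=orphan` in current clients; older clients used `--cascade=false`).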
May 11 18:40:56.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:40:56.336: INFO: namespace: e2e-tests-gc-9v86l, resource: bindings, ignored listing per whitelist May 11 18:40:56.385: INFO: namespace e2e-tests-gc-9v86l deletion completed in 8.762473037s • [SLOW TEST:40.840 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:40:56.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 11 18:40:58.292: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:41:09.786: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-gjqnn" for this suite. May 11 18:41:20.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:41:20.681: INFO: namespace: e2e-tests-init-container-gjqnn, resource: bindings, ignored listing per whitelist May 11 18:41:20.706: INFO: namespace e2e-tests-init-container-gjqnn deletion completed in 10.90841843s • [SLOW TEST:24.321 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:41:20.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-6l2ml/configmap-test-027ad53a-93b7-11ea-b832-0242ac110018 STEP: Creating a pod to test consume configMaps May 11 18:41:21.927: INFO: Waiting up to 5m0s for pod "pod-configmaps-02823312-93b7-11ea-b832-0242ac110018" in namespace "e2e-tests-configmap-6l2ml" to be "success or failure" May 11 18:41:21.974: INFO: 
Pod "pod-configmaps-02823312-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 46.745631ms
May 11 18:41:23.976: INFO: Pod "pod-configmaps-02823312-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04941296s
May 11 18:41:25.981: INFO: Pod "pod-configmaps-02823312-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054197542s
May 11 18:41:28.251: INFO: Pod "pod-configmaps-02823312-93b7-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.324107211s
STEP: Saw pod success
May 11 18:41:28.251: INFO: Pod "pod-configmaps-02823312-93b7-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:41:28.255: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-02823312-93b7-11ea-b832-0242ac110018 container env-test:
STEP: delete the pod
May 11 18:41:28.496: INFO: Waiting for pod pod-configmaps-02823312-93b7-11ea-b832-0242ac110018 to disappear
May 11 18:41:28.506: INFO: Pod pod-configmaps-02823312-93b7-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:41:28.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6l2ml" for this suite.
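The ConfigMap test above creates a ConfigMap and a pod that reads one of its keys through an environment variable, then checks the container's output. A minimal sketch of that pattern follows; the object names, key, and image here are hypothetical stand-ins (the e2e suite generates unique names like `configmap-test-027ad53a-…`), not the suite's actual manifests.

```yaml
# Illustrative only: ConfigMap consumed via an env var, as in the test above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test        # hypothetical; the e2e test uses a generated name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox            # assumed image; prints the environment and exits
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```

The pod succeeds once the container exits 0, which matches the log's "success or failure" wait loop; the test then reads the container logs to confirm `CONFIG_DATA_1=value-1` appeared.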
May 11 18:41:36.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:41:37.333: INFO: namespace: e2e-tests-configmap-6l2ml, resource: bindings, ignored listing per whitelist
May 11 18:41:37.362: INFO: namespace e2e-tests-configmap-6l2ml deletion completed in 8.851897161s
• [SLOW TEST:16.655 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:41:37.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-0cbfca6e-93b7-11ea-b832-0242ac110018
STEP: Creating secret with name s-test-opt-upd-0cbfcaf8-93b7-11ea-b832-0242ac110018
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0cbfca6e-93b7-11ea-b832-0242ac110018
STEP: Updating secret s-test-opt-upd-0cbfcaf8-93b7-11ea-b832-0242ac110018
STEP: Creating secret with name s-test-opt-create-0cbfcb40-93b7-11ea-b832-0242ac110018
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:43:06.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9d9qf" for this suite.
May 11 18:43:32.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:43:32.735: INFO: namespace: e2e-tests-secrets-9d9qf, resource: bindings, ignored listing per whitelist
May 11 18:43:32.769: INFO: namespace e2e-tests-secrets-9d9qf deletion completed in 26.173327278s
• [SLOW TEST:115.407 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:43:32.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11
18:43:46.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-gnlcc" for this suite.
May 11 18:44:29.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:44:30.475: INFO: namespace: e2e-tests-kubelet-test-gnlcc, resource: bindings, ignored listing per whitelist
May 11 18:44:30.517: INFO: namespace e2e-tests-kubelet-test-gnlcc deletion completed in 43.909006996s
• [SLOW TEST:57.748 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:44:30.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 11 18:44:41.047: INFO: Successfully updated pod
"pod-update-734a63be-93b7-11ea-b832-0242ac110018"
STEP: verifying the updated pod is in kubernetes
May 11 18:44:41.095: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:44:41.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-q9b2s" for this suite.
May 11 18:45:07.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:45:07.694: INFO: namespace: e2e-tests-pods-q9b2s, resource: bindings, ignored listing per whitelist
May 11 18:45:07.704: INFO: namespace e2e-tests-pods-q9b2s deletion completed in 26.606004053s
• [SLOW TEST:37.186 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:45:07.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 11 18:45:08.011: INFO: Waiting up to 5m0s for pod "pod-894cfdfb-93b7-11ea-b832-0242ac110018" in namespace
"e2e-tests-emptydir-fk7bp" to be "success or failure"
May 11 18:45:08.222: INFO: Pod "pod-894cfdfb-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 211.805597ms
May 11 18:45:10.226: INFO: Pod "pod-894cfdfb-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215084171s
May 11 18:45:12.230: INFO: Pod "pod-894cfdfb-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.219143523s
May 11 18:45:14.232: INFO: Pod "pod-894cfdfb-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221473343s
May 11 18:45:16.301: INFO: Pod "pod-894cfdfb-93b7-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 8.29021897s
May 11 18:45:18.517: INFO: Pod "pod-894cfdfb-93b7-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.506271578s
STEP: Saw pod success
May 11 18:45:18.517: INFO: Pod "pod-894cfdfb-93b7-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:45:18.520: INFO: Trying to get logs from node hunter-worker pod pod-894cfdfb-93b7-11ea-b832-0242ac110018 container test-container:
STEP: delete the pod
May 11 18:45:18.754: INFO: Waiting for pod pod-894cfdfb-93b7-11ea-b832-0242ac110018 to disappear
May 11 18:45:18.806: INFO: Pod pod-894cfdfb-93b7-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:45:18.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fk7bp" for this suite.
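The (root,0666,tmpfs) EmptyDir test above runs a short-lived pod against a memory-backed emptyDir volume. A minimal sketch of that volume configuration follows; the pod name, image, and command are hypothetical illustrations (the e2e suite uses generated names and its own mounttest image), not the suite's actual spec.

```yaml
# Illustrative only: a memory-backed (tmpfs) emptyDir, as in the (…,tmpfs) variant above.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs    # hypothetical; the e2e test uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox            # assumed image; inspects the mount and exits
    command: ["sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory          # tmpfs-backed; omit `medium` for the node's default storage
```

The "default" variants later in this log are the same pattern with `emptyDir: {}` (node-disk backed) instead of `medium: Memory`.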
May 11 18:45:27.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:45:27.317: INFO: namespace: e2e-tests-emptydir-fk7bp, resource: bindings, ignored listing per whitelist
May 11 18:45:27.354: INFO: namespace e2e-tests-emptydir-fk7bp deletion completed in 8.545469419s
• [SLOW TEST:19.650 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:45:27.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
May 11 18:45:28.441: INFO: Waiting up to 5m0s for pod "pod-953ec646-93b7-11ea-b832-0242ac110018" in namespace "e2e-tests-emptydir-wgkwp" to be "success or failure"
May 11 18:45:28.533: INFO: Pod "pod-953ec646-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 91.133779ms
May 11 18:45:30.846: INFO: Pod "pod-953ec646-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.404594772s
May 11 18:45:32.851: INFO: Pod "pod-953ec646-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.409095896s
May 11 18:45:34.855: INFO: Pod "pod-953ec646-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413530499s
May 11 18:45:36.858: INFO: Pod "pod-953ec646-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.416514859s
May 11 18:45:38.861: INFO: Pod "pod-953ec646-93b7-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.419348451s
STEP: Saw pod success
May 11 18:45:38.861: INFO: Pod "pod-953ec646-93b7-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:45:38.863: INFO: Trying to get logs from node hunter-worker2 pod pod-953ec646-93b7-11ea-b832-0242ac110018 container test-container:
STEP: delete the pod
May 11 18:45:39.057: INFO: Waiting for pod pod-953ec646-93b7-11ea-b832-0242ac110018 to disappear
May 11 18:45:39.132: INFO: Pod pod-953ec646-93b7-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:45:39.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wgkwp" for this suite.
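The "(non-root,…)" EmptyDir variants, like the one above, differ from the root variants by running the container as an unprivileged user via a pod-level security context. A sketch of that distinguishing piece follows; the UID, pod name, and command are hypothetical, chosen only to illustrate the non-root pattern.

```yaml
# Illustrative only: running the emptyDir check as a non-root user.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-nonroot  # hypothetical; the e2e test uses a generated name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001           # hypothetical non-root UID, matching the (non-root,…) variants
  containers:
  - name: test-container
    image: busybox            # assumed image; shows the effective UID and volume perms
    command: ["sh", "-c", "id -u && ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}              # "default" medium: backed by node disk
```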
May 11 18:45:47.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:45:47.417: INFO: namespace: e2e-tests-emptydir-wgkwp, resource: bindings, ignored listing per whitelist
May 11 18:45:47.437: INFO: namespace e2e-tests-emptydir-wgkwp deletion completed in 8.286464056s
• [SLOW TEST:20.082 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:45:47.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
May 11 18:45:48.824: INFO: Waiting up to 5m0s for pod "pod-a1695a7b-93b7-11ea-b832-0242ac110018" in namespace "e2e-tests-emptydir-2r6ff" to be "success or failure"
May 11 18:45:48.908: INFO: Pod "pod-a1695a7b-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 84.214808ms
May 11 18:45:50.911: INFO: Pod "pod-a1695a7b-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.087534297s
May 11 18:45:52.915: INFO: Pod "pod-a1695a7b-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090848848s
May 11 18:45:54.918: INFO: Pod "pod-a1695a7b-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094450814s
May 11 18:45:56.984: INFO: Pod "pod-a1695a7b-93b7-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159886305s
May 11 18:45:58.987: INFO: Pod "pod-a1695a7b-93b7-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.163196383s
STEP: Saw pod success
May 11 18:45:58.987: INFO: Pod "pod-a1695a7b-93b7-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:45:58.990: INFO: Trying to get logs from node hunter-worker2 pod pod-a1695a7b-93b7-11ea-b832-0242ac110018 container test-container:
STEP: delete the pod
May 11 18:46:00.310: INFO: Waiting for pod pod-a1695a7b-93b7-11ea-b832-0242ac110018 to disappear
May 11 18:46:00.583: INFO: Pod pod-a1695a7b-93b7-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:46:00.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2r6ff" for this suite.
May 11 18:46:11.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:46:11.353: INFO: namespace: e2e-tests-emptydir-2r6ff, resource: bindings, ignored listing per whitelist
May 11 18:46:11.504: INFO: namespace e2e-tests-emptydir-2r6ff deletion completed in 10.711243424s
• [SLOW TEST:24.067 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:46:11.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-sjrf5
May 11 18:46:20.442: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-sjrf5
STEP: checking the pod's current state and verifying that restartCount is present
May 11 18:46:20.445: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:50:21.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-sjrf5" for this suite.
May 11 18:50:27.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:50:27.530: INFO: namespace: e2e-tests-container-probe-sjrf5, resource: bindings, ignored listing per whitelist
May 11 18:50:27.560: INFO: namespace e2e-tests-container-probe-sjrf5 deletion completed in 6.070030464s
• [SLOW TEST:256.056 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:50:27.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated
reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:50:31.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-57hr2" for this suite.
May 11 18:50:40.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:50:40.266: INFO: namespace: e2e-tests-kubelet-test-57hr2, resource: bindings, ignored listing per whitelist
May 11 18:50:40.312: INFO: namespace e2e-tests-kubelet-test-57hr2 deletion completed in 8.555151702s
• [SLOW TEST:12.752 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:50:40.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance]
[Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-504f0617-93b8-11ea-b832-0242ac110018
STEP: Creating a pod to test consume secrets
May 11 18:50:42.131: INFO: Waiting up to 5m0s for pod "pod-secrets-5073dfbe-93b8-11ea-b832-0242ac110018" in namespace "e2e-tests-secrets-s5jjs" to be "success or failure"
May 11 18:50:42.443: INFO: Pod "pod-secrets-5073dfbe-93b8-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 311.681959ms
May 11 18:50:44.460: INFO: Pod "pod-secrets-5073dfbe-93b8-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328323924s
May 11 18:50:46.463: INFO: Pod "pod-secrets-5073dfbe-93b8-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332212524s
May 11 18:50:49.503: INFO: Pod "pod-secrets-5073dfbe-93b8-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 7.371418438s
May 11 18:50:51.880: INFO: Pod "pod-secrets-5073dfbe-93b8-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 9.748810364s
May 11 18:50:53.884: INFO: Pod "pod-secrets-5073dfbe-93b8-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 11.752361078s
STEP: Saw pod success
May 11 18:50:53.884: INFO: Pod "pod-secrets-5073dfbe-93b8-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:50:53.886: INFO: Trying to get logs from node hunter-worker pod pod-secrets-5073dfbe-93b8-11ea-b832-0242ac110018 container secret-volume-test:
STEP: delete the pod
May 11 18:50:54.064: INFO: Waiting for pod pod-secrets-5073dfbe-93b8-11ea-b832-0242ac110018 to disappear
May 11 18:50:54.076: INFO: Pod pod-secrets-5073dfbe-93b8-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:50:54.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-s5jjs" for this suite.
May 11 18:51:02.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:51:02.256: INFO: namespace: e2e-tests-secrets-s5jjs, resource: bindings, ignored listing per whitelist
May 11 18:51:02.287: INFO: namespace e2e-tests-secrets-s5jjs deletion completed in 8.20849866s
• [SLOW TEST:21.975 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:51:02.287: INFO: >>>
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
May 11 18:51:02.688: INFO: Waiting up to 5m0s for pod "pod-5cb0ef82-93b8-11ea-b832-0242ac110018" in namespace "e2e-tests-emptydir-5gb9t" to be "success or failure"
May 11 18:51:03.131: INFO: Pod "pod-5cb0ef82-93b8-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 442.443443ms
May 11 18:51:05.335: INFO: Pod "pod-5cb0ef82-93b8-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.64660409s
May 11 18:51:07.346: INFO: Pod "pod-5cb0ef82-93b8-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.657788353s
May 11 18:51:09.442: INFO: Pod "pod-5cb0ef82-93b8-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.75375485s
STEP: Saw pod success
May 11 18:51:09.442: INFO: Pod "pod-5cb0ef82-93b8-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:51:09.444: INFO: Trying to get logs from node hunter-worker2 pod pod-5cb0ef82-93b8-11ea-b832-0242ac110018 container test-container:
STEP: delete the pod
May 11 18:51:09.498: INFO: Waiting for pod pod-5cb0ef82-93b8-11ea-b832-0242ac110018 to disappear
May 11 18:51:09.640: INFO: Pod pod-5cb0ef82-93b8-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:51:09.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5gb9t" for this suite.
May 11 18:51:17.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:51:17.727: INFO: namespace: e2e-tests-emptydir-5gb9t, resource: bindings, ignored listing per whitelist
May 11 18:51:17.754: INFO: namespace e2e-tests-emptydir-5gb9t deletion completed in 8.104463326s
• [SLOW TEST:15.467 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:51:17.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-8h7bk
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-8h7bk
STEP: Creating statefulset
with conflicting port in namespace e2e-tests-statefulset-8h7bk
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-8h7bk
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-8h7bk
May 11 18:51:24.394: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-8h7bk, name: ss-0, uid: 69a076f7-93b8-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete.
May 11 18:51:24.534: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-8h7bk, name: ss-0, uid: 69a076f7-93b8-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
May 11 18:51:24.664: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-8h7bk, name: ss-0, uid: 69a076f7-93b8-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
May 11 18:51:24.683: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-8h7bk
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-8h7bk
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-8h7bk and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 11 18:51:32.371: INFO: Deleting all statefulset in ns e2e-tests-statefulset-8h7bk
May 11 18:51:32.432: INFO: Scaling statefulset ss to 0
May 11 18:51:42.738: INFO: Waiting for statefulset status.replicas updated to 0
May 11 18:51:43.359: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:51:43.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-8h7bk" for this suite.
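The StatefulSet test above forces repeated failures of `ss-0` by pinning both a plain pod (`test-pod`) and a one-replica StatefulSet to the same node with the same `hostPort`; the controller keeps recreating `ss-0` until the conflicting pod is removed, after which `ss-0` runs. A hypothetical sketch of that setup follows; the node name is taken from this log, but the port number, labels, and image are illustrative assumptions, not the suite's actual manifests.

```yaml
# Illustrative only: a pod and a StatefulSet competing for one hostPort on one node.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: hunter-worker       # pin to one node so the hostPorts actually collide
  containers:
  - name: nginx
    image: nginx                # assumed image
    ports:
    - containerPort: 80
      hostPort: 21017           # hypothetical port; the e2e test picks its own
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test             # headless service named "test", as created in the log
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo              # hypothetical label
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      nodeName: hunter-worker
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 21017       # same hostPort: ss-0 fails until test-pod is deleted
```

Because a `hostPort` can be bound only once per node, `ss-0` lands in `Failed`, the StatefulSet controller deletes and recreates it (the Pending/Failed/delete events in the log), and it finally reaches Running once `test-pod` is gone.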
May 11 18:51:51.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:51:51.800: INFO: namespace: e2e-tests-statefulset-8h7bk, resource: bindings, ignored listing per whitelist
May 11 18:51:51.822: INFO: namespace e2e-tests-statefulset-8h7bk deletion completed in 8.433832117s
• [SLOW TEST:34.067 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:51:51.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
May 11 18:51:52.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
May 11 18:51:58.540: INFO: stderr: ""
May 11 18:51:58.540: INFO: stdout:
"\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 18:51:58.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dsbq6" for this suite. May 11 18:52:04.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 18:52:04.652: INFO: namespace: e2e-tests-kubectl-dsbq6, resource: bindings, ignored listing per whitelist May 11 18:52:04.720: INFO: namespace e2e-tests-kubectl-dsbq6 deletion completed in 6.176852897s • [SLOW TEST:12.898 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 18:52:04.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token 
into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
May 11 18:52:05.821: INFO: Waiting up to 5m0s for pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-hpzb7" in namespace "e2e-tests-svcaccounts-n8bzj" to be "success or failure"
May 11 18:52:05.940: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-hpzb7": Phase="Pending", Reason="", readiness=false. Elapsed: 119.384612ms
May 11 18:52:08.036: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-hpzb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215188831s
May 11 18:52:10.039: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-hpzb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218020689s
May 11 18:52:12.043: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-hpzb7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221752833s
May 11 18:52:14.222: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-hpzb7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.401159163s
May 11 18:52:16.635: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-hpzb7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.814246554s
May 11 18:52:18.638: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-hpzb7": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 12.816985163s
STEP: Saw pod success
May 11 18:52:18.638: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-hpzb7" satisfied condition "success or failure"
May 11 18:52:18.640: INFO: Trying to get logs from node hunter-worker pod pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-hpzb7 container token-test:
STEP: delete the pod
May 11 18:52:18.733: INFO: Waiting for pod pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-hpzb7 to disappear
May 11 18:52:18.844: INFO: Pod pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-hpzb7 no longer exists
STEP: Creating a pod to test consume service account root CA
May 11 18:52:18.848: INFO: Waiting up to 5m0s for pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-tfnp4" in namespace "e2e-tests-svcaccounts-n8bzj" to be "success or failure"
May 11 18:52:18.881: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-tfnp4": Phase="Pending", Reason="", readiness=false. Elapsed: 32.776864ms
May 11 18:52:21.147: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-tfnp4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298817011s
May 11 18:52:24.248: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-tfnp4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.399187625s
May 11 18:52:26.420: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-tfnp4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.571547774s
May 11 18:52:28.468: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-tfnp4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.619971428s
May 11 18:52:30.472: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-tfnp4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.624009108s
May 11 18:52:32.475: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-tfnp4": Phase="Running", Reason="", readiness=false.
Elapsed: 13.626807497s
May 11 18:52:34.479: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-tfnp4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.630509786s
STEP: Saw pod success
May 11 18:52:34.479: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-tfnp4" satisfied condition "success or failure"
May 11 18:52:34.482: INFO: Trying to get logs from node hunter-worker pod pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-tfnp4 container root-ca-test:
STEP: delete the pod
May 11 18:52:34.518: INFO: Waiting for pod pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-tfnp4 to disappear
May 11 18:52:34.570: INFO: Pod pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-tfnp4 no longer exists
STEP: Creating a pod to test consume service account namespace
May 11 18:52:34.574: INFO: Waiting up to 5m0s for pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-rcxqh" in namespace "e2e-tests-svcaccounts-n8bzj" to be "success or failure"
May 11 18:52:34.612: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-rcxqh": Phase="Pending", Reason="", readiness=false. Elapsed: 37.965739ms
May 11 18:52:36.617: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-rcxqh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042697742s
May 11 18:52:38.743: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-rcxqh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169359988s
May 11 18:52:40.748: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-rcxqh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.174006653s
May 11 18:52:42.752: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-rcxqh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178163731s
May 11 18:52:45.073: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-rcxqh": Phase="Pending", Reason="", readiness=false.
Elapsed: 10.499058616s
May 11 18:52:47.076: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-rcxqh": Phase="Pending", Reason="", readiness=false. Elapsed: 12.502568759s
May 11 18:52:49.097: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-rcxqh": Phase="Running", Reason="", readiness=false. Elapsed: 14.522652091s
May 11 18:52:51.100: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-rcxqh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.525924447s
STEP: Saw pod success
May 11 18:52:51.100: INFO: Pod "pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-rcxqh" satisfied condition "success or failure"
May 11 18:52:51.102: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-rcxqh container namespace-test:
STEP: delete the pod
May 11 18:52:51.889: INFO: Waiting for pod pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-rcxqh to disappear
May 11 18:52:52.121: INFO: Pod pod-service-account-8255e317-93b8-11ea-b832-0242ac110018-rcxqh no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:52:52.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-n8bzj" for this suite.
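The three pods above (token-test, root-ca-test, namespace-test) each read one of the files the kubelet auto-mounts from the default service account: token, ca.crt, and namespace. A minimal sketch of the token-consuming pod — the mount path is the standard Kubernetes one, but the pod name, image, and command are assumptions, not taken from the test source:

```yaml
# Sketch: print the auto-mounted service account token and exit.
# /var/run/secrets/kubernetes.io/serviceaccount/ is where Kubernetes
# mounts the default credentials; everything else is illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-demo
spec:
  restartPolicy: Never
  containers:
  - name: token-test
    image: busybox
    command:
    - sh
    - -c
    - cat /var/run/secrets/kubernetes.io/serviceaccount/token
```

The root-ca-test and namespace-test containers would differ only in reading `ca.crt` or `namespace` from the same directory.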
May 11 18:52:58.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:52:58.402: INFO: namespace: e2e-tests-svcaccounts-n8bzj, resource: bindings, ignored listing per whitelist
May 11 18:52:58.441: INFO: namespace e2e-tests-svcaccounts-n8bzj deletion completed in 6.317084539s
• [SLOW TEST:53.721 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:52:58.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
May 11 18:52:58.560: INFO: Waiting up to 5m0s for pod "pod-a1c4580b-93b8-11ea-b832-0242ac110018" in namespace "e2e-tests-emptydir-tldtf" to be "success or failure"
May 11 18:52:58.564: INFO: Pod "pod-a1c4580b-93b8-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11476ms
May 11 18:53:00.684: INFO: Pod "pod-a1c4580b-93b8-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.124204262s
May 11 18:53:02.687: INFO: Pod "pod-a1c4580b-93b8-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.127553683s
May 11 18:53:04.691: INFO: Pod "pod-a1c4580b-93b8-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.131271239s
STEP: Saw pod success
May 11 18:53:04.691: INFO: Pod "pod-a1c4580b-93b8-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:53:04.693: INFO: Trying to get logs from node hunter-worker pod pod-a1c4580b-93b8-11ea-b832-0242ac110018 container test-container:
STEP: delete the pod
May 11 18:53:04.762: INFO: Waiting for pod pod-a1c4580b-93b8-11ea-b832-0242ac110018 to disappear
May 11 18:53:04.774: INFO: Pod pod-a1c4580b-93b8-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:53:04.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tldtf" for this suite.
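The tmpfs variant of an emptyDir is selected with `medium: Memory`; the test container then inspects the mount point so its mode shows up in the container logs that the framework fetches above. A minimal sketch, with illustrative pod and volume names:

```yaml
# Sketch: emptyDir backed by tmpfs (medium: Memory) rather than node
# disk. The container lists the mount point so the volume's mode is
# visible in its logs. Names, image, and command are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs instead of node disk
```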
May 11 18:53:10.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:53:10.859: INFO: namespace: e2e-tests-emptydir-tldtf, resource: bindings, ignored listing per whitelist
May 11 18:53:10.883: INFO: namespace e2e-tests-emptydir-tldtf deletion completed in 6.105976085s
• [SLOW TEST:12.441 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:53:10.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-a92b36bb-93b8-11ea-b832-0242ac110018
STEP: Creating a pod to test consume configMaps
May 11 18:53:11.039: INFO: Waiting up to 5m0s for pod "pod-configmaps-a934dd69-93b8-11ea-b832-0242ac110018" in namespace "e2e-tests-configmap-8z7bc" to be "success or failure"
May 11 18:53:11.102: INFO: Pod "pod-configmaps-a934dd69-93b8-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false.
Elapsed: 63.609099ms
May 11 18:53:13.191: INFO: Pod "pod-configmaps-a934dd69-93b8-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152460271s
May 11 18:53:15.415: INFO: Pod "pod-configmaps-a934dd69-93b8-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.375850694s
May 11 18:53:17.582: INFO: Pod "pod-configmaps-a934dd69-93b8-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.543342713s
May 11 18:53:20.409: INFO: Pod "pod-configmaps-a934dd69-93b8-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.369700558s
STEP: Saw pod success
May 11 18:53:20.409: INFO: Pod "pod-configmaps-a934dd69-93b8-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 18:53:20.470: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-a934dd69-93b8-11ea-b832-0242ac110018 container configmap-volume-test:
STEP: delete the pod
May 11 18:53:21.017: INFO: Waiting for pod pod-configmaps-a934dd69-93b8-11ea-b832-0242ac110018 to disappear
May 11 18:53:21.246: INFO: Pod pod-configmaps-a934dd69-93b8-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:53:21.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8z7bc" for this suite.
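"As non-root" in this test means the consuming container runs with a non-zero UID while reading the ConfigMap-backed volume. A sketch of that shape — ConfigMap name, key, UID, image, and mount path are all assumptions for illustration:

```yaml
# Sketch: mount a ConfigMap as a volume and read it from a container
# running as a non-root user. All names and values are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root UID
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```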
May 11 18:53:27.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:53:27.347: INFO: namespace: e2e-tests-configmap-8z7bc, resource: bindings, ignored listing per whitelist
May 11 18:53:27.375: INFO: namespace e2e-tests-configmap-8z7bc deletion completed in 6.126176837s
• [SLOW TEST:16.492 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:53:27.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-b309bdbc-93b8-11ea-b832-0242ac110018
STEP: Creating configMap with name cm-test-opt-upd-b309bdf7-93b8-11ea-b832-0242ac110018
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b309bdbc-93b8-11ea-b832-0242ac110018
STEP: Updating configmap cm-test-opt-upd-b309bdf7-93b8-11ea-b832-0242ac110018
STEP: Creating configMap with name cm-test-opt-create-b309be0c-93b8-11ea-b832-0242ac110018
STEP: waiting to observe update in volume
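The three ConfigMaps in this test (opt-del, opt-upd, opt-create) are mounted as optional sources of a projected volume, so the pod keeps running while one is deleted, one is updated, and one is created late, and the kubelet propagates each change into the volume. A sketch of that volume layout, with illustrative names:

```yaml
# Sketch: projected volume with optional ConfigMap sources. Because
# optional is true, a missing or deleted ConfigMap does not block the
# pod; the kubelet updates the volume as the maps change.
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/projected; sleep 5; done"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: cm-test-opt-upd
          optional: true
      - configMap:
          name: cm-test-opt-create
          optional: true
```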
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 18:54:51.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c8jkq" for this suite.
May 11 18:55:18.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 18:55:18.259: INFO: namespace: e2e-tests-projected-c8jkq, resource: bindings, ignored listing per whitelist
May 11 18:55:18.323: INFO: namespace e2e-tests-projected-c8jkq deletion completed in 26.515402628s
• [SLOW TEST:110.948 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 18:55:18.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace
e2e-tests-statefulset-tbznl [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-tbznl STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-tbznl May 11 18:55:19.064: INFO: Found 0 stateful pods, waiting for 1 May 11 18:55:29.070: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 11 18:55:29.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 18:55:29.395: INFO: stderr: "I0511 18:55:29.207028 2994 log.go:172] (0xc0001386e0) (0xc0007cb400) Create stream\nI0511 18:55:29.207091 2994 log.go:172] (0xc0001386e0) (0xc0007cb400) Stream added, broadcasting: 1\nI0511 18:55:29.212681 2994 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0511 18:55:29.212740 2994 log.go:172] (0xc0001386e0) (0xc0005f4000) Create stream\nI0511 18:55:29.212753 2994 log.go:172] (0xc0001386e0) (0xc0005f4000) Stream added, broadcasting: 3\nI0511 18:55:29.213990 2994 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0511 18:55:29.214054 2994 log.go:172] (0xc0001386e0) (0xc0005f40a0) Create stream\nI0511 18:55:29.214074 2994 log.go:172] (0xc0001386e0) (0xc0005f40a0) Stream added, broadcasting: 5\nI0511 18:55:29.214861 2994 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0511 18:55:29.386122 2994 log.go:172] (0xc0001386e0) Data frame received for 5\nI0511 18:55:29.386174 2994 log.go:172] (0xc0001386e0) Data frame received for 3\nI0511 18:55:29.386217 2994 log.go:172] (0xc0005f4000) (3) Data frame handling\nI0511 18:55:29.386230 2994 log.go:172] (0xc0005f4000) (3) Data frame 
sent\nI0511 18:55:29.386237 2994 log.go:172] (0xc0001386e0) Data frame received for 3\nI0511 18:55:29.386243 2994 log.go:172] (0xc0005f4000) (3) Data frame handling\nI0511 18:55:29.386283 2994 log.go:172] (0xc0005f40a0) (5) Data frame handling\nI0511 18:55:29.387827 2994 log.go:172] (0xc0001386e0) Data frame received for 1\nI0511 18:55:29.387848 2994 log.go:172] (0xc0007cb400) (1) Data frame handling\nI0511 18:55:29.387862 2994 log.go:172] (0xc0007cb400) (1) Data frame sent\nI0511 18:55:29.387982 2994 log.go:172] (0xc0001386e0) (0xc0007cb400) Stream removed, broadcasting: 1\nI0511 18:55:29.388018 2994 log.go:172] (0xc0001386e0) Go away received\nI0511 18:55:29.388210 2994 log.go:172] (0xc0001386e0) (0xc0007cb400) Stream removed, broadcasting: 1\nI0511 18:55:29.388243 2994 log.go:172] (0xc0001386e0) (0xc0005f4000) Stream removed, broadcasting: 3\nI0511 18:55:29.388260 2994 log.go:172] (0xc0001386e0) (0xc0005f40a0) Stream removed, broadcasting: 5\n" May 11 18:55:29.395: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 18:55:29.395: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 18:55:29.399: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 11 18:55:39.402: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 18:55:39.402: INFO: Waiting for statefulset status.replicas updated to 0 May 11 18:55:39.468: INFO: POD NODE PHASE GRACE CONDITIONS May 11 18:55:39.468: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:19 +0000 UTC }] May 11 18:55:39.468: INFO: May 11 18:55:39.468: INFO: StatefulSet ss has not reached scale 3, at 1 May 11 18:55:40.563: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.940200513s May 11 18:55:41.752: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.844549826s May 11 18:55:42.812: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.655384174s May 11 18:55:43.943: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.596148494s May 11 18:55:44.985: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.465115628s May 11 18:55:46.047: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.421367s May 11 18:55:47.051: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.360724375s May 11 18:55:48.107: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.356627256s May 11 18:55:49.243: INFO: Verifying statefulset ss doesn't scale past 3 for another 300.951329ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-tbznl May 11 18:55:50.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:55:50.453: INFO: stderr: "I0511 18:55:50.384597 3016 log.go:172] (0xc00016c160) (0xc0005f45a0) Create stream\nI0511 18:55:50.384652 3016 log.go:172] (0xc00016c160) (0xc0005f45a0) Stream added, broadcasting: 1\nI0511 18:55:50.386597 3016 log.go:172] (0xc00016c160) Reply frame received for 1\nI0511 18:55:50.386643 3016 log.go:172] (0xc00016c160) (0xc0003e0dc0) Create stream\nI0511 18:55:50.386658 3016 log.go:172] (0xc00016c160) (0xc0003e0dc0) Stream added, broadcasting: 3\nI0511 18:55:50.387336 3016 log.go:172] (0xc00016c160) Reply frame received for 3\nI0511 18:55:50.387373 3016 log.go:172] 
(0xc00016c160) (0xc000850000) Create stream\nI0511 18:55:50.387390 3016 log.go:172] (0xc00016c160) (0xc000850000) Stream added, broadcasting: 5\nI0511 18:55:50.387995 3016 log.go:172] (0xc00016c160) Reply frame received for 5\nI0511 18:55:50.447076 3016 log.go:172] (0xc00016c160) Data frame received for 5\nI0511 18:55:50.447143 3016 log.go:172] (0xc000850000) (5) Data frame handling\nI0511 18:55:50.447183 3016 log.go:172] (0xc00016c160) Data frame received for 3\nI0511 18:55:50.447205 3016 log.go:172] (0xc0003e0dc0) (3) Data frame handling\nI0511 18:55:50.447233 3016 log.go:172] (0xc0003e0dc0) (3) Data frame sent\nI0511 18:55:50.447256 3016 log.go:172] (0xc00016c160) Data frame received for 3\nI0511 18:55:50.447289 3016 log.go:172] (0xc0003e0dc0) (3) Data frame handling\nI0511 18:55:50.448908 3016 log.go:172] (0xc00016c160) Data frame received for 1\nI0511 18:55:50.448935 3016 log.go:172] (0xc0005f45a0) (1) Data frame handling\nI0511 18:55:50.448956 3016 log.go:172] (0xc0005f45a0) (1) Data frame sent\nI0511 18:55:50.448969 3016 log.go:172] (0xc00016c160) (0xc0005f45a0) Stream removed, broadcasting: 1\nI0511 18:55:50.449097 3016 log.go:172] (0xc00016c160) (0xc0005f45a0) Stream removed, broadcasting: 1\nI0511 18:55:50.449241 3016 log.go:172] (0xc00016c160) (0xc0003e0dc0) Stream removed, broadcasting: 3\nI0511 18:55:50.449250 3016 log.go:172] (0xc00016c160) (0xc000850000) Stream removed, broadcasting: 5\n" May 11 18:55:50.453: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 18:55:50.453: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 18:55:50.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:55:50.638: INFO: stderr: "I0511 18:55:50.576030 3038 log.go:172] (0xc00015c790) (0xc0005f34a0) Create 
stream\nI0511 18:55:50.576076 3038 log.go:172] (0xc00015c790) (0xc0005f34a0) Stream added, broadcasting: 1\nI0511 18:55:50.578015 3038 log.go:172] (0xc00015c790) Reply frame received for 1\nI0511 18:55:50.578053 3038 log.go:172] (0xc00015c790) (0xc0000e4000) Create stream\nI0511 18:55:50.578063 3038 log.go:172] (0xc00015c790) (0xc0000e4000) Stream added, broadcasting: 3\nI0511 18:55:50.578711 3038 log.go:172] (0xc00015c790) Reply frame received for 3\nI0511 18:55:50.578745 3038 log.go:172] (0xc00015c790) (0xc0000ea000) Create stream\nI0511 18:55:50.578756 3038 log.go:172] (0xc00015c790) (0xc0000ea000) Stream added, broadcasting: 5\nI0511 18:55:50.579582 3038 log.go:172] (0xc00015c790) Reply frame received for 5\nI0511 18:55:50.633681 3038 log.go:172] (0xc00015c790) Data frame received for 3\nI0511 18:55:50.633725 3038 log.go:172] (0xc0000e4000) (3) Data frame handling\nI0511 18:55:50.633740 3038 log.go:172] (0xc0000e4000) (3) Data frame sent\nI0511 18:55:50.633766 3038 log.go:172] (0xc00015c790) Data frame received for 5\nI0511 18:55:50.633775 3038 log.go:172] (0xc0000ea000) (5) Data frame handling\nI0511 18:55:50.633783 3038 log.go:172] (0xc0000ea000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0511 18:55:50.633942 3038 log.go:172] (0xc00015c790) Data frame received for 5\nI0511 18:55:50.633963 3038 log.go:172] (0xc0000ea000) (5) Data frame handling\nI0511 18:55:50.633978 3038 log.go:172] (0xc00015c790) Data frame received for 3\nI0511 18:55:50.633985 3038 log.go:172] (0xc0000e4000) (3) Data frame handling\nI0511 18:55:50.635182 3038 log.go:172] (0xc00015c790) Data frame received for 1\nI0511 18:55:50.635195 3038 log.go:172] (0xc0005f34a0) (1) Data frame handling\nI0511 18:55:50.635202 3038 log.go:172] (0xc0005f34a0) (1) Data frame sent\nI0511 18:55:50.635209 3038 log.go:172] (0xc00015c790) (0xc0005f34a0) Stream removed, broadcasting: 1\nI0511 18:55:50.635219 3038 log.go:172] (0xc00015c790) Go away received\nI0511 
18:55:50.635492 3038 log.go:172] (0xc00015c790) (0xc0005f34a0) Stream removed, broadcasting: 1\nI0511 18:55:50.635510 3038 log.go:172] (0xc00015c790) (0xc0000e4000) Stream removed, broadcasting: 3\nI0511 18:55:50.635520 3038 log.go:172] (0xc00015c790) (0xc0000ea000) Stream removed, broadcasting: 5\n" May 11 18:55:50.639: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 18:55:50.639: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 18:55:50.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:55:50.825: INFO: stderr: "I0511 18:55:50.748015 3060 log.go:172] (0xc000138630) (0xc000666780) Create stream\nI0511 18:55:50.748423 3060 log.go:172] (0xc000138630) (0xc000666780) Stream added, broadcasting: 1\nI0511 18:55:50.752528 3060 log.go:172] (0xc000138630) Reply frame received for 1\nI0511 18:55:50.752573 3060 log.go:172] (0xc000138630) (0xc000666000) Create stream\nI0511 18:55:50.752588 3060 log.go:172] (0xc000138630) (0xc000666000) Stream added, broadcasting: 3\nI0511 18:55:50.753941 3060 log.go:172] (0xc000138630) Reply frame received for 3\nI0511 18:55:50.753968 3060 log.go:172] (0xc000138630) (0xc0007d2280) Create stream\nI0511 18:55:50.753979 3060 log.go:172] (0xc000138630) (0xc0007d2280) Stream added, broadcasting: 5\nI0511 18:55:50.754953 3060 log.go:172] (0xc000138630) Reply frame received for 5\nI0511 18:55:50.820097 3060 log.go:172] (0xc000138630) Data frame received for 3\nI0511 18:55:50.820149 3060 log.go:172] (0xc000666000) (3) Data frame handling\nI0511 18:55:50.820174 3060 log.go:172] (0xc000666000) (3) Data frame sent\nI0511 18:55:50.820193 3060 log.go:172] (0xc000138630) Data frame received for 3\nI0511 18:55:50.820222 3060 log.go:172] (0xc000666000) (3) Data frame handling\nI0511 
18:55:50.820260 3060 log.go:172] (0xc000138630) Data frame received for 5\nI0511 18:55:50.820287 3060 log.go:172] (0xc0007d2280) (5) Data frame handling\nI0511 18:55:50.820315 3060 log.go:172] (0xc0007d2280) (5) Data frame sent\nI0511 18:55:50.820337 3060 log.go:172] (0xc000138630) Data frame received for 5\nI0511 18:55:50.820355 3060 log.go:172] (0xc0007d2280) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0511 18:55:50.821968 3060 log.go:172] (0xc000138630) Data frame received for 1\nI0511 18:55:50.822003 3060 log.go:172] (0xc000666780) (1) Data frame handling\nI0511 18:55:50.822022 3060 log.go:172] (0xc000666780) (1) Data frame sent\nI0511 18:55:50.822066 3060 log.go:172] (0xc000138630) (0xc000666780) Stream removed, broadcasting: 1\nI0511 18:55:50.822188 3060 log.go:172] (0xc000138630) Go away received\nI0511 18:55:50.822275 3060 log.go:172] (0xc000138630) (0xc000666780) Stream removed, broadcasting: 1\nI0511 18:55:50.822319 3060 log.go:172] (0xc000138630) (0xc000666000) Stream removed, broadcasting: 3\nI0511 18:55:50.822351 3060 log.go:172] (0xc000138630) (0xc0007d2280) Stream removed, broadcasting: 5\n" May 11 18:55:50.826: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 18:55:50.826: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 18:55:50.942: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 11 18:55:50.942: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 11 18:55:50.942: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 11 18:55:50.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v 
/usr/share/nginx/html/index.html /tmp/ || true' May 11 18:55:52.097: INFO: stderr: "I0511 18:55:52.037414 3082 log.go:172] (0xc00015c840) (0xc000744640) Create stream\nI0511 18:55:52.037455 3082 log.go:172] (0xc00015c840) (0xc000744640) Stream added, broadcasting: 1\nI0511 18:55:52.039089 3082 log.go:172] (0xc00015c840) Reply frame received for 1\nI0511 18:55:52.039117 3082 log.go:172] (0xc00015c840) (0xc00065ac80) Create stream\nI0511 18:55:52.039126 3082 log.go:172] (0xc00015c840) (0xc00065ac80) Stream added, broadcasting: 3\nI0511 18:55:52.039751 3082 log.go:172] (0xc00015c840) Reply frame received for 3\nI0511 18:55:52.039776 3082 log.go:172] (0xc00015c840) (0xc0007446e0) Create stream\nI0511 18:55:52.039788 3082 log.go:172] (0xc00015c840) (0xc0007446e0) Stream added, broadcasting: 5\nI0511 18:55:52.040411 3082 log.go:172] (0xc00015c840) Reply frame received for 5\nI0511 18:55:52.092143 3082 log.go:172] (0xc00015c840) Data frame received for 5\nI0511 18:55:52.092172 3082 log.go:172] (0xc0007446e0) (5) Data frame handling\nI0511 18:55:52.092219 3082 log.go:172] (0xc00015c840) Data frame received for 3\nI0511 18:55:52.092252 3082 log.go:172] (0xc00065ac80) (3) Data frame handling\nI0511 18:55:52.092268 3082 log.go:172] (0xc00065ac80) (3) Data frame sent\nI0511 18:55:52.092276 3082 log.go:172] (0xc00015c840) Data frame received for 3\nI0511 18:55:52.092283 3082 log.go:172] (0xc00065ac80) (3) Data frame handling\nI0511 18:55:52.094131 3082 log.go:172] (0xc00015c840) Data frame received for 1\nI0511 18:55:52.094148 3082 log.go:172] (0xc000744640) (1) Data frame handling\nI0511 18:55:52.094157 3082 log.go:172] (0xc000744640) (1) Data frame sent\nI0511 18:55:52.094170 3082 log.go:172] (0xc00015c840) (0xc000744640) Stream removed, broadcasting: 1\nI0511 18:55:52.094187 3082 log.go:172] (0xc00015c840) Go away received\nI0511 18:55:52.094336 3082 log.go:172] (0xc00015c840) (0xc000744640) Stream removed, broadcasting: 1\nI0511 18:55:52.094358 3082 log.go:172] 
(0xc00015c840) (0xc00065ac80) Stream removed, broadcasting: 3\nI0511 18:55:52.094371 3082 log.go:172] (0xc00015c840) (0xc0007446e0) Stream removed, broadcasting: 5\n" May 11 18:55:52.097: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 18:55:52.097: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 18:55:52.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 18:55:53.388: INFO: stderr: "I0511 18:55:52.948568 3105 log.go:172] (0xc000716370) (0xc00073a640) Create stream\nI0511 18:55:52.948636 3105 log.go:172] (0xc000716370) (0xc00073a640) Stream added, broadcasting: 1\nI0511 18:55:52.950879 3105 log.go:172] (0xc000716370) Reply frame received for 1\nI0511 18:55:52.950927 3105 log.go:172] (0xc000716370) (0xc0005b6d20) Create stream\nI0511 18:55:52.950942 3105 log.go:172] (0xc000716370) (0xc0005b6d20) Stream added, broadcasting: 3\nI0511 18:55:52.951659 3105 log.go:172] (0xc000716370) Reply frame received for 3\nI0511 18:55:52.951681 3105 log.go:172] (0xc000716370) (0xc0005b4000) Create stream\nI0511 18:55:52.951691 3105 log.go:172] (0xc000716370) (0xc0005b4000) Stream added, broadcasting: 5\nI0511 18:55:52.952329 3105 log.go:172] (0xc000716370) Reply frame received for 5\nI0511 18:55:53.382382 3105 log.go:172] (0xc000716370) Data frame received for 3\nI0511 18:55:53.382461 3105 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0511 18:55:53.382497 3105 log.go:172] (0xc0005b6d20) (3) Data frame sent\nI0511 18:55:53.384210 3105 log.go:172] (0xc000716370) Data frame received for 3\nI0511 18:55:53.384226 3105 log.go:172] (0xc0005b6d20) (3) Data frame handling\nI0511 18:55:53.384276 3105 log.go:172] (0xc000716370) Data frame received for 5\nI0511 18:55:53.384290 3105 log.go:172] (0xc0005b4000) (5) Data 
frame handling\nI0511 18:55:53.385778 3105 log.go:172] (0xc000716370) Data frame received for 1\nI0511 18:55:53.385794 3105 log.go:172] (0xc00073a640) (1) Data frame handling\nI0511 18:55:53.385803 3105 log.go:172] (0xc00073a640) (1) Data frame sent\nI0511 18:55:53.385821 3105 log.go:172] (0xc000716370) (0xc00073a640) Stream removed, broadcasting: 1\nI0511 18:55:53.385888 3105 log.go:172] (0xc000716370) Go away received\nI0511 18:55:53.386032 3105 log.go:172] (0xc000716370) (0xc00073a640) Stream removed, broadcasting: 1\nI0511 18:55:53.386045 3105 log.go:172] (0xc000716370) (0xc0005b6d20) Stream removed, broadcasting: 3\nI0511 18:55:53.386082 3105 log.go:172] (0xc000716370) (0xc0005b4000) Stream removed, broadcasting: 5\n" May 11 18:55:53.388: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 18:55:53.388: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 18:55:53.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 18:55:53.904: INFO: stderr: "I0511 18:55:53.613515 3127 log.go:172] (0xc00077a160) (0xc000680780) Create stream\nI0511 18:55:53.613565 3127 log.go:172] (0xc00077a160) (0xc000680780) Stream added, broadcasting: 1\nI0511 18:55:53.615650 3127 log.go:172] (0xc00077a160) Reply frame received for 1\nI0511 18:55:53.615708 3127 log.go:172] (0xc00077a160) (0xc000212c80) Create stream\nI0511 18:55:53.615728 3127 log.go:172] (0xc00077a160) (0xc000212c80) Stream added, broadcasting: 3\nI0511 18:55:53.616595 3127 log.go:172] (0xc00077a160) Reply frame received for 3\nI0511 18:55:53.616634 3127 log.go:172] (0xc00077a160) (0xc000692000) Create stream\nI0511 18:55:53.616646 3127 log.go:172] (0xc00077a160) (0xc000692000) Stream added, broadcasting: 5\nI0511 18:55:53.617597 3127 log.go:172] (0xc00077a160) 
Reply frame received for 5\nI0511 18:55:53.897545 3127 log.go:172] (0xc00077a160) Data frame received for 5\nI0511 18:55:53.897593 3127 log.go:172] (0xc000692000) (5) Data frame handling\nI0511 18:55:53.897630 3127 log.go:172] (0xc00077a160) Data frame received for 3\nI0511 18:55:53.897653 3127 log.go:172] (0xc000212c80) (3) Data frame handling\nI0511 18:55:53.897675 3127 log.go:172] (0xc000212c80) (3) Data frame sent\nI0511 18:55:53.897694 3127 log.go:172] (0xc00077a160) Data frame received for 3\nI0511 18:55:53.897717 3127 log.go:172] (0xc000212c80) (3) Data frame handling\nI0511 18:55:53.899473 3127 log.go:172] (0xc00077a160) Data frame received for 1\nI0511 18:55:53.899493 3127 log.go:172] (0xc000680780) (1) Data frame handling\nI0511 18:55:53.899502 3127 log.go:172] (0xc000680780) (1) Data frame sent\nI0511 18:55:53.899512 3127 log.go:172] (0xc00077a160) (0xc000680780) Stream removed, broadcasting: 1\nI0511 18:55:53.899585 3127 log.go:172] (0xc00077a160) Go away received\nI0511 18:55:53.899661 3127 log.go:172] (0xc00077a160) (0xc000680780) Stream removed, broadcasting: 1\nI0511 18:55:53.899679 3127 log.go:172] (0xc00077a160) (0xc000212c80) Stream removed, broadcasting: 3\nI0511 18:55:53.899688 3127 log.go:172] (0xc00077a160) (0xc000692000) Stream removed, broadcasting: 5\n" May 11 18:55:53.904: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 18:55:53.904: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 18:55:53.904: INFO: Waiting for statefulset status.replicas updated to 0 May 11 18:55:53.925: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 11 18:56:03.983: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 18:56:03.983: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 11 18:56:03.983: INFO: Waiting for pod ss-2 
to enter Running - Ready=false, currently Running - Ready=false May 11 18:56:04.304: INFO: POD NODE PHASE GRACE CONDITIONS May 11 18:56:04.304: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:19 +0000 UTC }] May 11 18:56:04.304: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:04.304: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:04.304: INFO: May 11 18:56:04.304: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 18:56:05.675: INFO: POD NODE PHASE GRACE CONDITIONS May 11 18:56:05.675: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:19 +0000 UTC }] May 11 18:56:05.675: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:05.676: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:05.676: INFO: May 11 18:56:05.676: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 18:56:06.998: INFO: POD NODE PHASE GRACE CONDITIONS May 11 18:56:06.998: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:19 +0000 UTC }] May 11 18:56:06.998: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 
UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:06.998: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:06.998: INFO: May 11 18:56:06.998: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 18:56:08.274: INFO: POD NODE PHASE GRACE CONDITIONS May 11 18:56:08.274: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:19 +0000 UTC }] May 11 18:56:08.274: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:08.274: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:08.274: INFO: May 11 18:56:08.275: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 18:56:09.279: INFO: POD NODE PHASE GRACE CONDITIONS May 11 18:56:09.279: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:19 +0000 UTC }] May 11 18:56:09.279: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:09.279: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:09.279: INFO: May 11 18:56:09.279: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 18:56:10.562: INFO: POD NODE PHASE GRACE CONDITIONS May 11 18:56:10.562: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:19 +0000 UTC }] May 11 18:56:10.562: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:10.562: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:10.562: INFO: May 11 18:56:10.562: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 18:56:11.590: INFO: POD NODE PHASE GRACE 
CONDITIONS May 11 18:56:11.590: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:19 +0000 UTC }] May 11 18:56:11.591: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:11.591: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:11.591: INFO: May 11 18:56:11.591: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 18:56:12.752: INFO: POD NODE PHASE GRACE CONDITIONS May 11 18:56:12.752: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 
18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:19 +0000 UTC }] May 11 18:56:12.752: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:12.752: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:12.752: INFO: May 11 18:56:12.752: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 18:56:13.756: INFO: POD NODE PHASE GRACE CONDITIONS May 11 18:56:13.756: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:19 +0000 UTC }] May 11 18:56:13.756: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:13.756: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 18:55:39 +0000 UTC }] May 11 18:56:13.756: INFO: May 11 18:56:13.756: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-tbznl May 11 18:56:14.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:56:14.938: INFO: rc: 1 May 11 18:56:14.938: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001200bd0 exit status 1 true [0xc001708710 0xc001708728 0xc001708740] [0xc001708710 0xc001708728 0xc001708740] [0xc001708720 0xc001708738] [0x935700 0x935700] 0xc000f79920 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 May 11 18:56:24.938: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:56:25.019: INFO: rc: 1 May 11 18:56:25.019: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000b969c0 exit status 1 true [0xc0019a45b0 0xc0019a45c8 0xc0019a45e0] [0xc0019a45b0 0xc0019a45c8 0xc0019a45e0] [0xc0019a45c0 0xc0019a45d8] [0x935700 0x935700] 0xc001d6a180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 18:56:35.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:56:35.129: INFO: rc: 1 May 11 18:56:35.129: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a9af30 exit status 1 true [0xc000be27f0 0xc000be2808 0xc000be2820] [0xc000be27f0 0xc000be2808 0xc000be2820] [0xc000be2800 0xc000be2818] [0x935700 0x935700] 0xc0016a4f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 18:56:45.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:56:45.250: INFO: rc: 1 May 11 18:56:45.251: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a3a300 exit status 1 true [0xc00020ac50 0xc00020acc8 0xc00020ad50] [0xc00020ac50 0xc00020acc8 0xc00020ad50] [0xc00020ac98 0xc00020ad30] [0x935700 0x935700] 0xc002293800 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 18:56:55.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:56:55.339: INFO: rc: 1 May 11 18:56:55.339: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001200e70 exit status 1 true [0xc001708748 0xc001708760 0xc001708778] [0xc001708748 0xc001708760 0xc001708778] [0xc001708758 0xc001708770] [0x935700 0x935700] 0xc000f79bc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 18:57:05.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:57:05.427: INFO: rc: 1 May 11 18:57:05.427: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00256ecf0 exit status 1 true [0xc001c00118 0xc001c00130 0xc001c00148] [0xc001c00118 0xc001c00130 
0xc001c00148] [0xc001c00128 0xc001c00140] [0x935700 0x935700] 0xc0019f3260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 18:57:15.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 18:57:15.628: INFO: rc: 1 May 11 18:57:15.628: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0023a6180 exit status 1 true [0xc00016e000 0xc000be2008 0xc000be2020] [0xc00016e000 0xc000be2008 0xc000be2020] [0xc000be2000 0xc000be2018] [0x935700 0x935700] 0xc001a73560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
[identical RunHostCmd retry cycles at 10s intervals from 18:57:25 through 19:00:59 elided; every attempt returned rc: 1 with the same 'Error from server (NotFound): pods "ss-0" not found']
May 11 19:01:09.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:01:09.958: INFO: rc: 1 May 11 19:01:09.958: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001c04150 exit status 1 true [0xc00033c000 0xc001708038 0xc001708050] [0xc00033c000 0xc001708038 0xc001708050] [0xc001708018 0xc001708048] [0x935700 0x935700] 0xc001fc17a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 19:01:19.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-tbznl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 19:01:20.170: INFO: rc: 1 May 11 19:01:20.170: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: May 11 19:01:20.170: INFO: Scaling statefulset ss to 0 May 11 19:01:20.179: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 11 19:01:20.181: INFO: Deleting all statefulset in ns e2e-tests-statefulset-tbznl May 11 19:01:20.184: INFO: Scaling statefulset ss to 0 May 11 19:01:20.194: INFO: Waiting for statefulset status.replicas updated to 0 May 11 19:01:20.196: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:01:20.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-tbznl" for this suite. 
May 11 19:01:27.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:01:27.957: INFO: namespace: e2e-tests-statefulset-tbznl, resource: bindings, ignored listing per whitelist May 11 19:01:28.439: INFO: namespace e2e-tests-statefulset-tbznl deletion completed in 7.857760215s • [SLOW TEST:370.116 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:01:28.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 11 19:01:40.842: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] 
ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:01:42.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-7kwlq" for this suite. May 11 19:04:40.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:04:40.590: INFO: namespace: e2e-tests-replicaset-7kwlq, resource: bindings, ignored listing per whitelist May 11 19:04:40.593: INFO: namespace e2e-tests-replicaset-7kwlq deletion completed in 2m58.403756265s • [SLOW TEST:192.154 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:04:40.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 11 19:04:41.097: INFO: Waiting up to 5m0s for pod "downward-api-446a3c53-93ba-11ea-b832-0242ac110018" in namespace "e2e-tests-downward-api-crvcq" to be "success or failure" May 11 19:04:41.116: INFO: Pod 
"downward-api-446a3c53-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.625754ms May 11 19:04:43.202: INFO: Pod "downward-api-446a3c53-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104380277s May 11 19:04:45.207: INFO: Pod "downward-api-446a3c53-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10898211s May 11 19:04:47.210: INFO: Pod "downward-api-446a3c53-93ba-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.112543995s STEP: Saw pod success May 11 19:04:47.210: INFO: Pod "downward-api-446a3c53-93ba-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 19:04:47.212: INFO: Trying to get logs from node hunter-worker pod downward-api-446a3c53-93ba-11ea-b832-0242ac110018 container dapi-container: STEP: delete the pod May 11 19:04:47.267: INFO: Waiting for pod downward-api-446a3c53-93ba-11ea-b832-0242ac110018 to disappear May 11 19:04:47.327: INFO: Pod downward-api-446a3c53-93ba-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:04:47.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-crvcq" for this suite. 
May 11 19:04:53.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:04:53.390: INFO: namespace: e2e-tests-downward-api-crvcq, resource: bindings, ignored listing per whitelist May 11 19:04:53.414: INFO: namespace e2e-tests-downward-api-crvcq deletion completed in 6.083871498s • [SLOW TEST:12.821 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:04:53.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 11 19:04:53.780: INFO: Waiting up to 5m0s for pod "pod-4bfd5aa2-93ba-11ea-b832-0242ac110018" in namespace "e2e-tests-emptydir-8gxdd" to be "success or failure" May 11 19:04:53.804: INFO: Pod "pod-4bfd5aa2-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.259792ms May 11 19:04:57.018: INFO: Pod "pod-4bfd5aa2-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.237446203s May 11 19:04:59.021: INFO: Pod "pod-4bfd5aa2-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.2410185s May 11 19:05:01.076: INFO: Pod "pod-4bfd5aa2-93ba-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 7.295581126s May 11 19:05:03.080: INFO: Pod "pod-4bfd5aa2-93ba-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.299201173s STEP: Saw pod success May 11 19:05:03.080: INFO: Pod "pod-4bfd5aa2-93ba-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 19:05:03.083: INFO: Trying to get logs from node hunter-worker2 pod pod-4bfd5aa2-93ba-11ea-b832-0242ac110018 container test-container: STEP: delete the pod May 11 19:05:03.494: INFO: Waiting for pod pod-4bfd5aa2-93ba-11ea-b832-0242ac110018 to disappear May 11 19:05:03.711: INFO: Pod pod-4bfd5aa2-93ba-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:05:03.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8gxdd" for this suite. 
May 11 19:05:11.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:05:11.828: INFO: namespace: e2e-tests-emptydir-8gxdd, resource: bindings, ignored listing per whitelist May 11 19:05:11.866: INFO: namespace e2e-tests-emptydir-8gxdd deletion completed in 8.152478054s • [SLOW TEST:18.452 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:05:11.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod May 11 19:05:12.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dw9ks' May 11 19:05:17.226: INFO: stderr: "" May 11 19:05:17.226: INFO: stdout: "pod/pause created\n" May 11 19:05:17.226: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 11 19:05:17.226: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-dw9ks" to be 
"running and ready" May 11 19:05:17.248: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 22.218021ms May 11 19:05:19.574: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348196808s May 11 19:05:21.897: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.671221979s May 11 19:05:21.897: INFO: Pod "pause" satisfied condition "running and ready" May 11 19:05:21.897: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod May 11 19:05:21.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-dw9ks' May 11 19:05:22.157: INFO: stderr: "" May 11 19:05:22.157: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 11 19:05:22.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-dw9ks' May 11 19:05:22.241: INFO: stderr: "" May 11 19:05:22.241: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 11 19:05:22.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-dw9ks' May 11 19:05:22.346: INFO: stderr: "" May 11 19:05:22.346: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 11 19:05:22.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-dw9ks' May 11 19:05:22.439: INFO: stderr: "" May 
11 19:05:22.439: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources May 11 19:05:22.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dw9ks' May 11 19:05:22.630: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 19:05:22.630: INFO: stdout: "pod \"pause\" force deleted\n" May 11 19:05:22.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-dw9ks' May 11 19:05:23.076: INFO: stderr: "No resources found.\n" May 11 19:05:23.076: INFO: stdout: "" May 11 19:05:23.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-dw9ks -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 19:05:23.237: INFO: stderr: "" May 11 19:05:23.237: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:05:23.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dw9ks" for this suite. 
May 11 19:05:29.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:05:29.323: INFO: namespace: e2e-tests-kubectl-dw9ks, resource: bindings, ignored listing per whitelist May 11 19:05:29.334: INFO: namespace e2e-tests-kubectl-dw9ks deletion completed in 6.092647655s • [SLOW TEST:17.467 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:05:29.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 11 19:05:29.500: INFO: Waiting up to 5m0s for pod "pod-614ef605-93ba-11ea-b832-0242ac110018" in namespace "e2e-tests-emptydir-r8q9s" to be "success or failure" May 11 19:05:29.530: INFO: Pod "pod-614ef605-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.485818ms May 11 19:05:31.533: INFO: Pod "pod-614ef605-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033201981s May 11 19:05:33.541: INFO: Pod "pod-614ef605-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041290852s May 11 19:05:35.544: INFO: Pod "pod-614ef605-93ba-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044068561s STEP: Saw pod success May 11 19:05:35.544: INFO: Pod "pod-614ef605-93ba-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 19:05:35.546: INFO: Trying to get logs from node hunter-worker pod pod-614ef605-93ba-11ea-b832-0242ac110018 container test-container: STEP: delete the pod May 11 19:05:35.652: INFO: Waiting for pod pod-614ef605-93ba-11ea-b832-0242ac110018 to disappear May 11 19:05:35.697: INFO: Pod pod-614ef605-93ba-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:05:35.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-r8q9s" for this suite. 
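The repeated `Waiting up to 5m0s for pod … to be "success or failure"` lines above follow a simple poll loop: check the pod phase, stop on Succeeded or Failed, give up after the timeout. A minimal sketch of that pattern, with the phase source and sleep injected so it is testable (the e2e framework's own helper differs in detail):

```python
import time

def wait_for_pod_success(get_phase, timeout=300.0, interval=2.0,
                         clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it reports 'Succeeded' (True) or 'Failed'
    (False), raising TimeoutError once `timeout` seconds have elapsed."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase == "Succeeded":
            return True
        if phase == "Failed":
            return False
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)

# Hypothetical phase sequence, like the Pending → Succeeded runs in the log.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
print(wait_for_pod_success(lambda: next(phases), sleep=lambda s: None))  # → True
```

Each log entry's `Elapsed:` value corresponds to one iteration of such a loop.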
May 11 19:05:41.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:05:41.830: INFO: namespace: e2e-tests-emptydir-r8q9s, resource: bindings, ignored listing per whitelist May 11 19:05:41.868: INFO: namespace e2e-tests-emptydir-r8q9s deletion completed in 6.167923147s • [SLOW TEST:12.534 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:05:41.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 11 19:05:41.956: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68ca0f2f-93ba-11ea-b832-0242ac110018" in namespace "e2e-tests-downward-api-vrv9r" to be "success or failure" May 11 19:05:41.961: INFO: Pod "downwardapi-volume-68ca0f2f-93ba-11ea-b832-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 4.72225ms May 11 19:05:43.965: INFO: Pod "downwardapi-volume-68ca0f2f-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008344622s May 11 19:05:47.143: INFO: Pod "downwardapi-volume-68ca0f2f-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.186763851s May 11 19:05:50.179: INFO: Pod "downwardapi-volume-68ca0f2f-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.222578113s May 11 19:05:52.183: INFO: Pod "downwardapi-volume-68ca0f2f-93ba-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.226233639s STEP: Saw pod success May 11 19:05:52.183: INFO: Pod "downwardapi-volume-68ca0f2f-93ba-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 19:05:52.185: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-68ca0f2f-93ba-11ea-b832-0242ac110018 container client-container: STEP: delete the pod May 11 19:05:52.384: INFO: Waiting for pod downwardapi-volume-68ca0f2f-93ba-11ea-b832-0242ac110018 to disappear May 11 19:05:52.891: INFO: Pod downwardapi-volume-68ca0f2f-93ba-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:05:52.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vrv9r" for this suite. 
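The Downward API volume test above creates a pod whose container reads its own memory request from a file. A sketch of the kind of pod spec involved, using the standard `downwardAPI` volume fields (names and image are illustrative assumptions, not taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox                   # assumed image; the e2e test uses its own
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "memory_request"
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
  restartPolicy: Never
```

The test then fetches the container's logs (the `Trying to get logs from node …` step) and checks the printed value against the declared request.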
May 11 19:06:02.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:06:04.042: INFO: namespace: e2e-tests-downward-api-vrv9r, resource: bindings, ignored listing per whitelist May 11 19:06:05.639: INFO: namespace e2e-tests-downward-api-vrv9r deletion completed in 12.743826822s • [SLOW TEST:23.770 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:06:05.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-q4f6k STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 19:06:06.409: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 11 19:06:33.947: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.138:8080/hostName | grep -v '^\s*$'] 
Namespace:e2e-tests-pod-network-test-q4f6k PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:06:33.947: INFO: >>> kubeConfig: /root/.kube/config I0511 19:06:33.975734 6 log.go:172] (0xc000db1290) (0xc001b6ac80) Create stream I0511 19:06:33.975762 6 log.go:172] (0xc000db1290) (0xc001b6ac80) Stream added, broadcasting: 1 I0511 19:06:33.977605 6 log.go:172] (0xc000db1290) Reply frame received for 1 I0511 19:06:33.977629 6 log.go:172] (0xc000db1290) (0xc0027bc460) Create stream I0511 19:06:33.977638 6 log.go:172] (0xc000db1290) (0xc0027bc460) Stream added, broadcasting: 3 I0511 19:06:33.978343 6 log.go:172] (0xc000db1290) Reply frame received for 3 I0511 19:06:33.978390 6 log.go:172] (0xc000db1290) (0xc001b6ad20) Create stream I0511 19:06:33.978405 6 log.go:172] (0xc000db1290) (0xc001b6ad20) Stream added, broadcasting: 5 I0511 19:06:33.979012 6 log.go:172] (0xc000db1290) Reply frame received for 5 I0511 19:06:34.074920 6 log.go:172] (0xc000db1290) Data frame received for 5 I0511 19:06:34.074960 6 log.go:172] (0xc001b6ad20) (5) Data frame handling I0511 19:06:34.075015 6 log.go:172] (0xc000db1290) Data frame received for 3 I0511 19:06:34.075033 6 log.go:172] (0xc0027bc460) (3) Data frame handling I0511 19:06:34.075052 6 log.go:172] (0xc0027bc460) (3) Data frame sent I0511 19:06:34.075069 6 log.go:172] (0xc000db1290) Data frame received for 3 I0511 19:06:34.075097 6 log.go:172] (0xc0027bc460) (3) Data frame handling I0511 19:06:34.076469 6 log.go:172] (0xc000db1290) Data frame received for 1 I0511 19:06:34.076501 6 log.go:172] (0xc001b6ac80) (1) Data frame handling I0511 19:06:34.076519 6 log.go:172] (0xc001b6ac80) (1) Data frame sent I0511 19:06:34.076535 6 log.go:172] (0xc000db1290) (0xc001b6ac80) Stream removed, broadcasting: 1 I0511 19:06:34.076549 6 log.go:172] (0xc000db1290) Go away received I0511 19:06:34.076684 6 log.go:172] (0xc000db1290) (0xc001b6ac80) Stream removed, 
broadcasting: 1 I0511 19:06:34.076706 6 log.go:172] (0xc000db1290) (0xc0027bc460) Stream removed, broadcasting: 3 I0511 19:06:34.076720 6 log.go:172] (0xc000db1290) (0xc001b6ad20) Stream removed, broadcasting: 5 May 11 19:06:34.076: INFO: Found all expected endpoints: [netserver-0] May 11 19:06:34.079: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.152:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-q4f6k PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:06:34.079: INFO: >>> kubeConfig: /root/.kube/config I0511 19:06:34.107365 6 log.go:172] (0xc0015b8420) (0xc001fd9d60) Create stream I0511 19:06:34.107386 6 log.go:172] (0xc0015b8420) (0xc001fd9d60) Stream added, broadcasting: 1 I0511 19:06:34.108659 6 log.go:172] (0xc0015b8420) Reply frame received for 1 I0511 19:06:34.108689 6 log.go:172] (0xc0015b8420) (0xc001fd9e00) Create stream I0511 19:06:34.108700 6 log.go:172] (0xc0015b8420) (0xc001fd9e00) Stream added, broadcasting: 3 I0511 19:06:34.109382 6 log.go:172] (0xc0015b8420) Reply frame received for 3 I0511 19:06:34.109422 6 log.go:172] (0xc0015b8420) (0xc001fd9ea0) Create stream I0511 19:06:34.109440 6 log.go:172] (0xc0015b8420) (0xc001fd9ea0) Stream added, broadcasting: 5 I0511 19:06:34.110248 6 log.go:172] (0xc0015b8420) Reply frame received for 5 I0511 19:06:34.169105 6 log.go:172] (0xc0015b8420) Data frame received for 3 I0511 19:06:34.169254 6 log.go:172] (0xc001fd9e00) (3) Data frame handling I0511 19:06:34.169267 6 log.go:172] (0xc001fd9e00) (3) Data frame sent I0511 19:06:34.169277 6 log.go:172] (0xc0015b8420) Data frame received for 3 I0511 19:06:34.169297 6 log.go:172] (0xc001fd9e00) (3) Data frame handling I0511 19:06:34.169311 6 log.go:172] (0xc0015b8420) Data frame received for 5 I0511 19:06:34.169321 6 log.go:172] (0xc001fd9ea0) (5) Data frame handling I0511 19:06:34.170772 6 
log.go:172] (0xc0015b8420) Data frame received for 1 I0511 19:06:34.170797 6 log.go:172] (0xc001fd9d60) (1) Data frame handling I0511 19:06:34.170819 6 log.go:172] (0xc001fd9d60) (1) Data frame sent I0511 19:06:34.170939 6 log.go:172] (0xc0015b8420) (0xc001fd9d60) Stream removed, broadcasting: 1 I0511 19:06:34.171031 6 log.go:172] (0xc0015b8420) Go away received I0511 19:06:34.171134 6 log.go:172] (0xc0015b8420) (0xc001fd9d60) Stream removed, broadcasting: 1 I0511 19:06:34.171156 6 log.go:172] (0xc0015b8420) (0xc001fd9e00) Stream removed, broadcasting: 3 I0511 19:06:34.171166 6 log.go:172] (0xc0015b8420) (0xc001fd9ea0) Stream removed, broadcasting: 5 May 11 19:06:34.171: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:06:34.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-q4f6k" for this suite. 
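The node-pod check above execs `curl http://<pod-ip>:8080/hostName` against each netserver pod and succeeds once every expected hostname has been seen (`Found all expected endpoints: [netserver-0]`, then `[netserver-1]`). A minimal sketch of that check, with the probe injected in place of the real curl-over-exec (pod names, IPs, and responses below are stand-ins):

```python
def found_all_endpoints(expected, probe):
    """Probe each pod IP's /hostName endpoint once and report whether the
    set of hostnames seen equals the set expected. The e2e test retries
    this until it passes or times out."""
    seen = {probe(ip) for ip in expected.values()}
    return seen == set(expected)

# Hypothetical pod name -> IP mapping and a fake probe standing in for
# `/bin/sh -c curl -s http://<ip>:8080/hostName`.
pods = {"netserver-0": "10.244.2.138", "netserver-1": "10.244.1.152"}
responses = {"10.244.2.138": "netserver-0", "10.244.1.152": "netserver-1"}
print(found_all_endpoints(pods, responses.get))  # → True
```

A pod answering with the wrong hostname (or not at all) leaves the seen set incomplete, so the check keeps retrying.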
May 11 19:07:02.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:07:02.316: INFO: namespace: e2e-tests-pod-network-test-q4f6k, resource: bindings, ignored listing per whitelist May 11 19:07:02.336: INFO: namespace e2e-tests-pod-network-test-q4f6k deletion completed in 28.161327789s • [SLOW TEST:56.697 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:07:02.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-pvkv9 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 19:07:02.600: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 11 19:07:30.986: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.141:8080/dial?request=hostName&protocol=http&host=10.244.1.153&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-pvkv9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:07:30.986: INFO: >>> kubeConfig: /root/.kube/config I0511 19:07:31.014767 6 log.go:172] (0xc000db1080) (0xc0026660a0) Create stream I0511 19:07:31.014798 6 log.go:172] (0xc000db1080) (0xc0026660a0) Stream added, broadcasting: 1 I0511 19:07:31.016559 6 log.go:172] (0xc000db1080) Reply frame received for 1 I0511 19:07:31.016610 6 log.go:172] (0xc000db1080) (0xc001f76be0) Create stream I0511 19:07:31.016639 6 log.go:172] (0xc000db1080) (0xc001f76be0) Stream added, broadcasting: 3 I0511 19:07:31.017758 6 log.go:172] (0xc000db1080) Reply frame received for 3 I0511 19:07:31.017843 6 log.go:172] (0xc000db1080) (0xc00268c000) Create stream I0511 19:07:31.017876 6 log.go:172] (0xc000db1080) (0xc00268c000) Stream added, broadcasting: 5 I0511 19:07:31.018655 6 log.go:172] (0xc000db1080) Reply frame received for 5 I0511 19:07:31.120965 6 log.go:172] (0xc000db1080) Data frame received for 3 I0511 19:07:31.121046 6 log.go:172] (0xc001f76be0) (3) Data frame handling I0511 19:07:31.121106 6 log.go:172] (0xc001f76be0) (3) Data frame sent I0511 19:07:31.121528 6 log.go:172] (0xc000db1080) Data frame received for 5 I0511 19:07:31.121559 6 log.go:172] (0xc00268c000) (5) Data frame handling I0511 19:07:31.121666 6 log.go:172] (0xc000db1080) Data frame received for 3 I0511 19:07:31.121707 6 log.go:172] (0xc001f76be0) (3) Data frame handling I0511 19:07:31.123226 6 log.go:172] (0xc000db1080) Data frame received for 1 I0511 19:07:31.123262 6 log.go:172] (0xc0026660a0) (1) Data frame handling I0511 19:07:31.123300 6 log.go:172] (0xc0026660a0) (1) Data frame sent I0511 19:07:31.123332 6 log.go:172] (0xc000db1080) (0xc0026660a0) Stream removed, broadcasting: 1 I0511 19:07:31.123438 6 log.go:172] (0xc000db1080) Go away 
received I0511 19:07:31.123468 6 log.go:172] (0xc000db1080) (0xc0026660a0) Stream removed, broadcasting: 1 I0511 19:07:31.123481 6 log.go:172] (0xc000db1080) (0xc001f76be0) Stream removed, broadcasting: 3 I0511 19:07:31.123489 6 log.go:172] (0xc000db1080) (0xc00268c000) Stream removed, broadcasting: 5 May 11 19:07:31.123: INFO: Waiting for endpoints: map[] May 11 19:07:31.126: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.141:8080/dial?request=hostName&protocol=http&host=10.244.2.140&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-pvkv9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 19:07:31.126: INFO: >>> kubeConfig: /root/.kube/config I0511 19:07:31.158359 6 log.go:172] (0xc001320420) (0xc0023fd360) Create stream I0511 19:07:31.158382 6 log.go:172] (0xc001320420) (0xc0023fd360) Stream added, broadcasting: 1 I0511 19:07:31.160654 6 log.go:172] (0xc001320420) Reply frame received for 1 I0511 19:07:31.160685 6 log.go:172] (0xc001320420) (0xc001f76c80) Create stream I0511 19:07:31.160696 6 log.go:172] (0xc001320420) (0xc001f76c80) Stream added, broadcasting: 3 I0511 19:07:31.162103 6 log.go:172] (0xc001320420) Reply frame received for 3 I0511 19:07:31.162161 6 log.go:172] (0xc001320420) (0xc001f76d20) Create stream I0511 19:07:31.162196 6 log.go:172] (0xc001320420) (0xc001f76d20) Stream added, broadcasting: 5 I0511 19:07:31.163319 6 log.go:172] (0xc001320420) Reply frame received for 5 I0511 19:07:31.230414 6 log.go:172] (0xc001320420) Data frame received for 3 I0511 19:07:31.230441 6 log.go:172] (0xc001f76c80) (3) Data frame handling I0511 19:07:31.230465 6 log.go:172] (0xc001f76c80) (3) Data frame sent I0511 19:07:31.231100 6 log.go:172] (0xc001320420) Data frame received for 5 I0511 19:07:31.231151 6 log.go:172] (0xc001f76d20) (5) Data frame handling I0511 19:07:31.231249 6 log.go:172] (0xc001320420) Data frame received for 3 I0511 
19:07:31.231308 6 log.go:172] (0xc001f76c80) (3) Data frame handling I0511 19:07:31.232751 6 log.go:172] (0xc001320420) Data frame received for 1 I0511 19:07:31.232778 6 log.go:172] (0xc0023fd360) (1) Data frame handling I0511 19:07:31.232794 6 log.go:172] (0xc0023fd360) (1) Data frame sent I0511 19:07:31.232810 6 log.go:172] (0xc001320420) (0xc0023fd360) Stream removed, broadcasting: 1 I0511 19:07:31.232898 6 log.go:172] (0xc001320420) (0xc0023fd360) Stream removed, broadcasting: 1 I0511 19:07:31.232924 6 log.go:172] (0xc001320420) (0xc001f76c80) Stream removed, broadcasting: 3 I0511 19:07:31.233082 6 log.go:172] (0xc001320420) (0xc001f76d20) Stream removed, broadcasting: 5 I0511 19:07:31.233474 6 log.go:172] (0xc001320420) Go away received May 11 19:07:31.233: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:07:31.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-pvkv9" for this suite. 
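The intra-pod check above asks one test pod to dial another via its `/dial` endpoint, passing the target and protocol as query parameters. A small sketch that reconstructs the probe URL exactly as it appears in the curl commands in the log:

```python
from urllib.parse import urlencode

def dial_url(host_pod_ip, target_ip, protocol="http", port=8080, tries=1):
    """Build the /dial probe URL used above: the pod at host_pod_ip is asked
    to fetch /hostName from target_ip and report the result back."""
    query = urlencode({
        "request": "hostName",
        "protocol": protocol,
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{host_pod_ip}:8080/dial?{query}"

print(dial_url("10.244.2.141", "10.244.1.153"))
# → http://10.244.2.141:8080/dial?request=hostName&protocol=http&host=10.244.1.153&port=8080&tries=1
```

`Waiting for endpoints: map[]` in the log indicates the dial returned every expected hostname, leaving nothing outstanding.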
May 11 19:07:59.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:07:59.656: INFO: namespace: e2e-tests-pod-network-test-pvkv9, resource: bindings, ignored listing per whitelist May 11 19:07:59.678: INFO: namespace e2e-tests-pod-network-test-pvkv9 deletion completed in 28.441845081s • [SLOW TEST:57.342 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:07:59.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:08:00.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-fs2rv" for this suite. 
May 11 19:08:08.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:08:08.227: INFO: namespace: e2e-tests-services-fs2rv, resource: bindings, ignored listing per whitelist May 11 19:08:08.231: INFO: namespace e2e-tests-services-fs2rv deletion completed in 7.205444381s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:8.552 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:08:08.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 11 19:08:09.830: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c0c4d509-93ba-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00189473a), BlockOwnerDeletion:(*bool)(0xc00189473b)}} May 11 19:08:09.875: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c0a38420-93ba-11ea-99e8-0242ac110002", 
Controller:(*bool)(0xc001716052), BlockOwnerDeletion:(*bool)(0xc001716053)}} May 11 19:08:09.890: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c0a3e9c3-93ba-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00189499a), BlockOwnerDeletion:(*bool)(0xc00189499b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:08:14.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-lpkn6" for this suite. May 11 19:08:26.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:08:27.037: INFO: namespace: e2e-tests-gc-lpkn6, resource: bindings, ignored listing per whitelist May 11 19:08:27.060: INFO: namespace e2e-tests-gc-lpkn6 deletion completed in 12.079716615s • [SLOW TEST:18.829 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:08:27.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token May 11 19:08:27.827: INFO: created pod pod-service-account-defaultsa May 11 19:08:27.827: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 11 19:08:27.900: INFO: created pod pod-service-account-mountsa May 11 19:08:27.900: INFO: pod pod-service-account-mountsa service account token volume mount: true May 11 19:08:27.915: INFO: created pod pod-service-account-nomountsa May 11 19:08:27.915: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 11 19:08:27.947: INFO: created pod pod-service-account-defaultsa-mountspec May 11 19:08:27.947: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 11 19:08:28.050: INFO: created pod pod-service-account-mountsa-mountspec May 11 19:08:28.050: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 11 19:08:28.060: INFO: created pod pod-service-account-nomountsa-mountspec May 11 19:08:28.060: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 11 19:08:28.100: INFO: created pod pod-service-account-defaultsa-nomountspec May 11 19:08:28.100: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 11 19:08:28.231: INFO: created pod pod-service-account-mountsa-nomountspec May 11 19:08:28.231: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 11 19:08:28.313: INFO: created pod pod-service-account-nomountsa-nomountspec May 11 19:08:28.313: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:08:28.313: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-zb64k" for this suite. May 11 19:09:14.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:09:14.163: INFO: namespace: e2e-tests-svcaccounts-zb64k, resource: bindings, ignored listing per whitelist May 11 19:09:14.166: INFO: namespace e2e-tests-svcaccounts-zb64k deletion completed in 45.696678974s • [SLOW TEST:47.106 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:09:14.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 11 19:09:15.190: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7bc2284-93ba-11ea-b832-0242ac110018" in namespace 
"e2e-tests-projected-fsvqz" to be "success or failure" May 11 19:09:15.207: INFO: Pod "downwardapi-volume-e7bc2284-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.375273ms May 11 19:09:17.210: INFO: Pod "downwardapi-volume-e7bc2284-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019800915s May 11 19:09:19.278: INFO: Pod "downwardapi-volume-e7bc2284-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087530096s May 11 19:09:21.416: INFO: Pod "downwardapi-volume-e7bc2284-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.225346712s May 11 19:09:23.419: INFO: Pod "downwardapi-volume-e7bc2284-93ba-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.228775308s STEP: Saw pod success May 11 19:09:23.419: INFO: Pod "downwardapi-volume-e7bc2284-93ba-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 19:09:23.421: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-e7bc2284-93ba-11ea-b832-0242ac110018 container client-container: STEP: delete the pod May 11 19:09:24.029: INFO: Waiting for pod downwardapi-volume-e7bc2284-93ba-11ea-b832-0242ac110018 to disappear May 11 19:09:24.062: INFO: Pod downwardapi-volume-e7bc2284-93ba-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:09:24.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fsvqz" for this suite. 
May 11 19:09:32.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:09:32.226: INFO: namespace: e2e-tests-projected-fsvqz, resource: bindings, ignored listing per whitelist
May 11 19:09:32.237: INFO: namespace e2e-tests-projected-fsvqz deletion completed in 8.170913452s
• [SLOW TEST:18.071 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:09:32.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 11 19:09:34.539: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f34de1b0-93ba-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-d2255" to be "success or failure"
May 11 19:09:34.609: INFO: Pod "downwardapi-volume-f34de1b0-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 69.786477ms
May 11 19:09:36.612: INFO: Pod "downwardapi-volume-f34de1b0-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073195005s
May 11 19:09:38.615: INFO: Pod "downwardapi-volume-f34de1b0-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075891188s
May 11 19:09:40.705: INFO: Pod "downwardapi-volume-f34de1b0-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165829953s
May 11 19:09:42.708: INFO: Pod "downwardapi-volume-f34de1b0-93ba-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168898369s
May 11 19:09:44.793: INFO: Pod "downwardapi-volume-f34de1b0-93ba-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.254311503s
STEP: Saw pod success
May 11 19:09:44.793: INFO: Pod "downwardapi-volume-f34de1b0-93ba-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 19:09:44.796: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f34de1b0-93ba-11ea-b832-0242ac110018 container client-container:
STEP: delete the pod
May 11 19:09:45.108: INFO: Waiting for pod downwardapi-volume-f34de1b0-93ba-11ea-b832-0242ac110018 to disappear
May 11 19:09:45.374: INFO: Pod downwardapi-volume-f34de1b0-93ba-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:09:45.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d2255" for this suite.
May 11 19:09:53.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:09:53.834: INFO: namespace: e2e-tests-projected-d2255, resource: bindings, ignored listing per whitelist
May 11 19:09:53.856: INFO: namespace e2e-tests-projected-d2255 deletion completed in 8.478001336s
• [SLOW TEST:21.619 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:09:53.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 11 19:10:07.787: INFO: Pod name wrapped-volume-race-06ab9ca9-93bb-11ea-b832-0242ac110018: Found 0 pods out of 5
May 11 19:10:12.870: INFO: Pod name wrapped-volume-race-06ab9ca9-93bb-11ea-b832-0242ac110018: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-06ab9ca9-93bb-11ea-b832-0242ac110018 in namespace e2e-tests-emptydir-wrapper-874dk, will wait for the garbage collector to delete the pods
May 11 19:12:24.263: INFO: Deleting ReplicationController wrapped-volume-race-06ab9ca9-93bb-11ea-b832-0242ac110018 took: 7.776019ms
May 11 19:12:28.264: INFO: Terminating ReplicationController wrapped-volume-race-06ab9ca9-93bb-11ea-b832-0242ac110018 pods took: 4.000210671s
STEP: Creating RC which spawns configmap-volume pods
May 11 19:13:13.891: INFO: Pod name wrapped-volume-race-75da9f7b-93bb-11ea-b832-0242ac110018: Found 0 pods out of 5
May 11 19:13:18.898: INFO: Pod name wrapped-volume-race-75da9f7b-93bb-11ea-b832-0242ac110018: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-75da9f7b-93bb-11ea-b832-0242ac110018 in namespace e2e-tests-emptydir-wrapper-874dk, will wait for the garbage collector to delete the pods
May 11 19:15:25.365: INFO: Deleting ReplicationController wrapped-volume-race-75da9f7b-93bb-11ea-b832-0242ac110018 took: 198.970777ms
May 11 19:15:25.665: INFO: Terminating ReplicationController wrapped-volume-race-75da9f7b-93bb-11ea-b832-0242ac110018 pods took: 300.292579ms
STEP: Creating RC which spawns configmap-volume pods
May 11 19:16:12.684: INFO: Pod name wrapped-volume-race-e0aa8152-93bb-11ea-b832-0242ac110018: Found 0 pods out of 5
May 11 19:16:17.692: INFO: Pod name wrapped-volume-race-e0aa8152-93bb-11ea-b832-0242ac110018: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e0aa8152-93bb-11ea-b832-0242ac110018 in namespace e2e-tests-emptydir-wrapper-874dk, will wait for the garbage collector to delete the pods
May 11 19:18:51.268: INFO: Deleting ReplicationController wrapped-volume-race-e0aa8152-93bb-11ea-b832-0242ac110018 took: 7.324367ms
May 11 19:18:52.168: INFO: Terminating ReplicationController wrapped-volume-race-e0aa8152-93bb-11ea-b832-0242ac110018 pods took: 900.238242ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:19:45.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-874dk" for this suite.
May 11 19:20:01.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:20:01.397: INFO: namespace: e2e-tests-emptydir-wrapper-874dk, resource: bindings, ignored listing per whitelist
May 11 19:20:01.413: INFO: namespace e2e-tests-emptydir-wrapper-874dk deletion completed in 16.192258801s
• [SLOW TEST:607.556 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:20:01.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:20:03.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-8bsvb" for this suite.
May 11 19:20:11.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:20:11.867: INFO: namespace: e2e-tests-kubelet-test-8bsvb, resource: bindings, ignored listing per whitelist
May 11 19:20:11.913: INFO: namespace e2e-tests-kubelet-test-8bsvb deletion completed in 8.327160089s
• [SLOW TEST:10.499 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:20:11.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-lprd4
I0511 19:20:13.201621 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-lprd4, replica count: 1
I0511 19:20:14.252020 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0511 19:20:15.252185 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0511 19:20:16.252351 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0511 19:20:17.252547 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0511 19:20:18.252744 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0511 19:20:19.252937 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0511 19:20:20.253072 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0511 19:20:21.253377 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 11 19:20:21.939: INFO: Created: latency-svc-6b8jb
May 11 19:20:22.854: INFO: Got endpoints: latency-svc-6b8jb [1.500618282s]
May 11 19:20:23.292: INFO: Created: latency-svc-6pfv9
May 11 19:20:23.550: INFO: Got endpoints: latency-svc-6pfv9 [695.975048ms]
May 11 19:20:23.552: INFO: Created: latency-svc-8bjtt
May 11 19:20:24.054: INFO: Got endpoints: latency-svc-8bjtt [1.19988957s]
May 11 19:20:24.856: INFO: Created: latency-svc-c9hqx
May 11 19:20:25.424: INFO: Got endpoints: latency-svc-c9hqx [2.570103168s]
May 11 19:20:25.928: INFO: Created: latency-svc-66jsk
May 11 19:20:26.272: INFO: Got endpoints: latency-svc-66jsk [3.417904595s]
May 11 19:20:26.892: INFO: Created: latency-svc-q9ltg
May 11 19:20:27.550: INFO: Got endpoints: latency-svc-q9ltg [4.695899678s]
May 11 19:20:27.882: INFO: Created: latency-svc-5cwjl
May 11 19:20:28.490: INFO: Got endpoints: latency-svc-5cwjl [5.635361636s]
May 11 19:20:29.296: INFO: Created: latency-svc-xbfq2
May 11 19:20:29.712: INFO: Got endpoints: latency-svc-xbfq2 [6.857089879s]
May 11 19:20:29.714: INFO: Created: latency-svc-8zdrb
May 11 19:20:30.216: INFO: Got endpoints: latency-svc-8zdrb [7.361308833s]
May 11 19:20:30.503: INFO: Created: latency-svc-lggnb
May 11 19:20:31.149: INFO: Got endpoints: latency-svc-lggnb [8.294624025s]
May 11 19:20:32.487: INFO: Created: latency-svc-l684h
May 11 19:20:33.378: INFO: Got endpoints: latency-svc-l684h [10.52305567s]
May 11 19:20:33.399: INFO: Created: latency-svc-jrsxp
May 11 19:20:33.910: INFO: Got endpoints: latency-svc-jrsxp [11.055703424s]
May 11 19:20:34.351: INFO: Created: latency-svc-8cfkr
May 11 19:20:34.628: INFO: Got endpoints: latency-svc-8cfkr [11.77337615s]
May 11 19:20:34.914: INFO: Created: latency-svc-5qgdw
May 11 19:20:35.641: INFO: Got endpoints: latency-svc-5qgdw [12.786020765s]
May 11 19:20:35.702: INFO: Created: latency-svc-926z2
May 11 19:20:35.893: INFO: Got endpoints: latency-svc-926z2 [13.037654176s]
May 11 19:20:36.271: INFO: Created: latency-svc-slzvh
May 11 19:20:36.641: INFO: Got endpoints: latency-svc-slzvh [13.786107228s]
May 11 19:20:37.525: INFO: Created: latency-svc-qqdxk
May 11 19:20:37.766: INFO: Got endpoints: latency-svc-qqdxk [14.216402535s]
May 11 19:20:38.006: INFO: Created: latency-svc-r7c2h
May 11 19:20:38.009: INFO: Got endpoints: latency-svc-r7c2h [13.954660492s]
May 11 19:20:38.408: INFO: Created: latency-svc-2dcwn
May 11 19:20:38.411: INFO: Got endpoints: latency-svc-2dcwn [12.986803608s]
May 11 19:20:38.886: INFO: Created: latency-svc-vjrcj
May 11 19:20:39.047: INFO: Got endpoints: latency-svc-vjrcj [12.774871929s]
May 11 19:20:39.132: INFO: Created: latency-svc-xjt5t
May 11 19:20:39.257: INFO: Got endpoints: latency-svc-xjt5t [11.706523498s]
May 11 19:20:39.592: INFO: Created: latency-svc-x78ml
May 11 19:20:39.802: INFO: Got endpoints: latency-svc-x78ml [11.311663772s]
May 11 19:20:39.859: INFO: Created: latency-svc-bpz6l
May 11 19:20:40.029: INFO: Got endpoints: latency-svc-bpz6l [10.317510064s]
May 11 19:20:40.683: INFO: Created: latency-svc-hwc6v
May 11 19:20:40.688: INFO: Got endpoints: latency-svc-hwc6v [10.472372919s]
May 11 19:20:41.108: INFO: Created: latency-svc-4tdd8
May 11 19:20:41.113: INFO: Got endpoints: latency-svc-4tdd8 [9.964041984s]
May 11 19:20:41.775: INFO: Created: latency-svc-8kgc2
May 11 19:20:42.167: INFO: Got endpoints: latency-svc-8kgc2 [8.789397381s]
May 11 19:20:42.472: INFO: Created: latency-svc-6jpvc
May 11 19:20:42.472: INFO: Created: latency-svc-fkd7n
May 11 19:20:42.706: INFO: Got endpoints: latency-svc-fkd7n [8.795535418s]
May 11 19:20:42.706: INFO: Got endpoints: latency-svc-6jpvc [8.07799685s]
May 11 19:20:43.036: INFO: Created: latency-svc-79ltt
May 11 19:20:43.508: INFO: Got endpoints: latency-svc-79ltt [7.867091311s]
May 11 19:20:43.582: INFO: Created: latency-svc-q7srr
May 11 19:20:44.102: INFO: Got endpoints: latency-svc-q7srr [8.209066072s]
May 11 19:20:44.197: INFO: Created: latency-svc-l8htp
May 11 19:20:44.294: INFO: Got endpoints: latency-svc-l8htp [7.652757301s]
May 11 19:20:44.327: INFO: Created: latency-svc-cvlk8
May 11 19:20:44.348: INFO: Got endpoints: latency-svc-cvlk8 [6.581471205s]
May 11 19:20:44.467: INFO: Created: latency-svc-cct8d
May 11 19:20:44.470: INFO: Got endpoints: latency-svc-cct8d [6.461216799s]
May 11 19:20:44.539: INFO: Created: latency-svc-lcvkn
May 11 19:20:44.558: INFO: Got endpoints: latency-svc-lcvkn [6.147311785s]
May 11 19:20:44.623: INFO: Created: latency-svc-kmhcb
May 11 19:20:44.643: INFO: Got endpoints: latency-svc-kmhcb [5.595712518s]
May 11 19:20:44.709: INFO: Created: latency-svc-l4vzd
May 11 19:20:44.721: INFO: Got endpoints: latency-svc-l4vzd [5.464227161s]
May 11 19:20:44.805: INFO: Created: latency-svc-2mcr2
May 11 19:20:44.824: INFO: Got endpoints: latency-svc-2mcr2 [5.022077077s]
May 11 19:20:44.947: INFO: Created: latency-svc-rwttx
May 11 19:20:44.956: INFO: Got endpoints: latency-svc-rwttx [4.926744286s]
May 11 19:20:45.096: INFO: Created: latency-svc-5bpzm
May 11 19:20:45.110: INFO: Got endpoints: latency-svc-5bpzm [4.421555665s]
May 11 19:20:45.171: INFO: Created: latency-svc-9nx52
May 11 19:20:45.249: INFO: Got endpoints: latency-svc-9nx52 [4.135974021s]
May 11 19:20:45.302: INFO: Created: latency-svc-9fb4z
May 11 19:20:45.316: INFO: Got endpoints: latency-svc-9fb4z [3.148749841s]
May 11 19:20:45.425: INFO: Created: latency-svc-fwrts
May 11 19:20:45.437: INFO: Got endpoints: latency-svc-fwrts [2.730650914s]
May 11 19:20:45.524: INFO: Created: latency-svc-vbtq8
May 11 19:20:45.592: INFO: Got endpoints: latency-svc-vbtq8 [2.885707448s]
May 11 19:20:45.658: INFO: Created: latency-svc-mfvv7
May 11 19:20:45.689: INFO: Got endpoints: latency-svc-mfvv7 [2.181238431s]
May 11 19:20:45.791: INFO: Created: latency-svc-6jkv7
May 11 19:20:45.822: INFO: Got endpoints: latency-svc-6jkv7 [1.719839698s]
May 11 19:20:45.958: INFO: Created: latency-svc-5rs85
May 11 19:20:45.972: INFO: Got endpoints: latency-svc-5rs85 [1.677613604s]
May 11 19:20:46.155: INFO: Created: latency-svc-znhr7
May 11 19:20:46.170: INFO: Got endpoints: latency-svc-znhr7 [1.821501393s]
May 11 19:20:46.365: INFO: Created: latency-svc-vdj24
May 11 19:20:46.422: INFO: Got endpoints: latency-svc-vdj24 [1.95203422s]
May 11 19:20:46.577: INFO: Created: latency-svc-7qspg
May 11 19:20:46.602: INFO: Got endpoints: latency-svc-7qspg [2.043921852s]
May 11 19:20:46.778: INFO: Created: latency-svc-ns2p7
May 11 19:20:46.806: INFO: Got endpoints: latency-svc-ns2p7 [2.163033906s]
May 11 19:20:46.939: INFO: Created: latency-svc-gd29q
May 11 19:20:46.951: INFO: Got endpoints: latency-svc-gd29q [2.229246365s]
May 11 19:20:46.995: INFO: Created: latency-svc-xfb4x
May 11 19:20:47.029: INFO: Got endpoints: latency-svc-xfb4x [2.204604515s]
May 11 19:20:47.149: INFO: Created: latency-svc-lmvzt
May 11 19:20:47.167: INFO: Got endpoints: latency-svc-lmvzt [216.424394ms]
May 11 19:20:47.245: INFO: Created: latency-svc-xttlw
May 11 19:20:47.365: INFO: Created: latency-svc-qzx9s
May 11 19:20:47.365: INFO: Got endpoints: latency-svc-xttlw [2.409090759s]
May 11 19:20:47.370: INFO: Got endpoints: latency-svc-qzx9s [2.260195036s]
May 11 19:20:47.398: INFO: Created: latency-svc-jz8lz
May 11 19:20:47.418: INFO: Got endpoints: latency-svc-jz8lz [2.168839476s]
May 11 19:20:47.479: INFO: Created: latency-svc-4fc9t
May 11 19:20:47.482: INFO: Got endpoints: latency-svc-4fc9t [2.165975926s]
May 11 19:20:47.538: INFO: Created: latency-svc-qt4h8
May 11 19:20:47.540: INFO: Got endpoints: latency-svc-qt4h8 [2.10324316s]
May 11 19:20:47.629: INFO: Created: latency-svc-5wptb
May 11 19:20:47.631: INFO: Got endpoints: latency-svc-5wptb [2.038645025s]
May 11 19:20:47.684: INFO: Created: latency-svc-qvbkv
May 11 19:20:47.703: INFO: Got endpoints: latency-svc-qvbkv [2.013790578s]
May 11 19:20:47.772: INFO: Created: latency-svc-ljkss
May 11 19:20:47.806: INFO: Got endpoints: latency-svc-ljkss [1.984809069s]
May 11 19:20:47.807: INFO: Created: latency-svc-trf4v
May 11 19:20:47.836: INFO: Got endpoints: latency-svc-trf4v [1.8641447s]
May 11 19:20:47.939: INFO: Created: latency-svc-mk886
May 11 19:20:47.942: INFO: Got endpoints: latency-svc-mk886 [1.772131364s]
May 11 19:20:48.028: INFO: Created: latency-svc-nthfm
May 11 19:20:48.131: INFO: Got endpoints: latency-svc-nthfm [1.708394903s]
May 11 19:20:48.132: INFO: Created: latency-svc-bwmhb
May 11 19:20:48.322: INFO: Got endpoints: latency-svc-bwmhb [1.719905182s]
May 11 19:20:48.353: INFO: Created: latency-svc-wkrwj
May 11 19:20:48.394: INFO: Got endpoints: latency-svc-wkrwj [1.588029983s]
May 11 19:20:48.466: INFO: Created: latency-svc-dkfz9
May 11 19:20:48.478: INFO: Got endpoints: latency-svc-dkfz9 [1.449506364s]
May 11 19:20:48.514: INFO: Created: latency-svc-dmm2k
May 11 19:20:48.527: INFO: Got endpoints: latency-svc-dmm2k [1.35968203s]
May 11 19:20:48.551: INFO: Created: latency-svc-zgbrq
May 11 19:20:48.604: INFO: Got endpoints: latency-svc-zgbrq [1.238528538s]
May 11 19:20:48.636: INFO: Created: latency-svc-gb68s
May 11 19:20:48.665: INFO: Got endpoints: latency-svc-gb68s [1.295040597s]
May 11 19:20:48.790: INFO: Created: latency-svc-7fpsb
May 11 19:20:48.822: INFO: Got endpoints: latency-svc-7fpsb [1.403180586s]
May 11 19:20:49.354: INFO: Created: latency-svc-8j7p5
May 11 19:20:49.391: INFO: Got endpoints: latency-svc-8j7p5 [1.908941826s]
May 11 19:20:49.676: INFO: Created: latency-svc-s4gv9
May 11 19:20:49.680: INFO: Got endpoints: latency-svc-s4gv9 [2.139403146s]
May 11 19:20:50.013: INFO: Created: latency-svc-lbbvc
May 11 19:20:50.341: INFO: Got endpoints: latency-svc-lbbvc [2.710532769s]
May 11 19:20:50.343: INFO: Created: latency-svc-qmz5b
May 11 19:20:50.683: INFO: Got endpoints: latency-svc-qmz5b [2.980252665s]
May 11 19:20:50.684: INFO: Created: latency-svc-tx779
May 11 19:20:50.687: INFO: Got endpoints: latency-svc-tx779 [2.880247139s]
May 11 19:20:51.649: INFO: Created: latency-svc-pnbqc
May 11 19:20:52.563: INFO: Got endpoints: latency-svc-pnbqc [4.727502317s]
May 11 19:20:52.568: INFO: Created: latency-svc-57v85
May 11 19:20:52.640: INFO: Got endpoints: latency-svc-57v85 [4.697933981s]
May 11 19:20:52.923: INFO: Created: latency-svc-phkt9
May 11 19:20:53.281: INFO: Got endpoints: latency-svc-phkt9 [5.15020071s]
May 11 19:20:53.695: INFO: Created: latency-svc-h2bbq
May 11 19:20:53.707: INFO: Got endpoints: latency-svc-h2bbq [5.384534123s]
May 11 19:20:54.985: INFO: Created: latency-svc-prslj
May 11 19:20:55.751: INFO: Got endpoints: latency-svc-prslj [7.356166114s]
May 11 19:20:56.273: INFO: Created: latency-svc-m2mh4
May 11 19:20:56.761: INFO: Got endpoints: latency-svc-m2mh4 [8.28262433s]
May 11 19:20:57.105: INFO: Created: latency-svc-k8dfp
May 11 19:20:57.454: INFO: Got endpoints: latency-svc-k8dfp [8.927587051s]
May 11 19:20:57.862: INFO: Created: latency-svc-2v8qt
May 11 19:20:57.865: INFO: Got endpoints: latency-svc-2v8qt [9.260763325s]
May 11 19:20:58.152: INFO: Created: latency-svc-z4vfx
May 11 19:20:58.156: INFO: Got endpoints: latency-svc-z4vfx [9.4907843s]
May 11 19:20:58.295: INFO: Created: latency-svc-fd82v
May 11 19:20:58.298: INFO: Got endpoints: latency-svc-fd82v [9.476701611s]
May 11 19:20:58.483: INFO: Created: latency-svc-86dwx
May 11 19:20:58.553: INFO: Got endpoints: latency-svc-86dwx [9.161967263s]
May 11 19:20:58.555: INFO: Created: latency-svc-25fnf
May 11 19:20:58.713: INFO: Got endpoints: latency-svc-25fnf [9.033127975s]
May 11 19:20:58.715: INFO: Created: latency-svc-h47sk
May 11 19:20:58.765: INFO: Got endpoints: latency-svc-h47sk [8.42324082s]
May 11 19:20:58.919: INFO: Created: latency-svc-b85p4
May 11 19:20:58.946: INFO: Got endpoints: latency-svc-b85p4 [8.261994452s]
May 11 19:20:59.066: INFO: Created: latency-svc-5n22f
May 11 19:20:59.084: INFO: Got endpoints: latency-svc-5n22f [8.396781757s]
May 11 19:20:59.173: INFO: Created: latency-svc-rw2ps
May 11 19:20:59.192: INFO: Got endpoints: latency-svc-rw2ps [6.628577512s]
May 11 19:20:59.223: INFO: Created: latency-svc-gtxqz
May 11 19:20:59.240: INFO: Got endpoints: latency-svc-gtxqz [6.600302261s]
May 11 19:20:59.323: INFO: Created: latency-svc-hdj48
May 11 19:20:59.325: INFO: Got endpoints: latency-svc-hdj48 [6.044105254s]
May 11 19:20:59.862: INFO: Created: latency-svc-54l2k
May 11 19:21:00.174: INFO: Got endpoints: latency-svc-54l2k [6.466529769s]
May 11 19:21:00.237: INFO: Created: latency-svc-n77wx
May 11 19:21:00.414: INFO: Got endpoints: latency-svc-n77wx [4.663150615s]
May 11 19:21:00.427: INFO: Created: latency-svc-6zf69
May 11 19:21:00.683: INFO: Got endpoints: latency-svc-6zf69 [3.922292871s]
May 11 19:21:00.689: INFO: Created: latency-svc-fvn88
May 11 19:21:00.741: INFO: Got endpoints: latency-svc-fvn88 [3.286198208s]
May 11 19:21:01.240: INFO: Created: latency-svc-8cw5x
May 11 19:21:01.298: INFO: Got endpoints: latency-svc-8cw5x [3.433758933s]
May 11 19:21:01.504: INFO: Created: latency-svc-tsqgt
May 11 19:21:01.647: INFO: Got endpoints: latency-svc-tsqgt [3.490519826s]
May 11 19:21:01.671: INFO: Created: latency-svc-t87df
May 11 19:21:01.879: INFO: Got endpoints: latency-svc-t87df [3.58096593s]
May 11 19:21:01.927: INFO: Created: latency-svc-b5j2k
May 11 19:21:01.978: INFO: Got endpoints: latency-svc-b5j2k [3.42422939s]
May 11 19:21:02.113: INFO: Created: latency-svc-v6zw6
May 11 19:21:02.126: INFO: Got endpoints: latency-svc-v6zw6 [3.412904987s]
May 11 19:21:02.317: INFO: Created: latency-svc-hlqz9
May 11 19:21:02.320: INFO: Got endpoints: latency-svc-hlqz9 [3.555463982s]
May 11 19:21:02.532: INFO: Created: latency-svc-t5hwk
May 11 19:21:02.564: INFO: Got endpoints: latency-svc-t5hwk [3.618800489s]
May 11 19:21:02.724: INFO: Created: latency-svc-mk264
May 11 19:21:02.727: INFO: Got endpoints: latency-svc-mk264 [3.64340138s]
May 11 19:21:02.812: INFO: Created: latency-svc-ph5ml
May 11 19:21:02.823: INFO: Got endpoints: latency-svc-ph5ml [3.630637718s]
May 11 19:21:02.891: INFO: Created: latency-svc-s2xd9
May 11 19:21:02.935: INFO: Got endpoints: latency-svc-s2xd9 [3.69482095s]
May 11 19:21:03.001: INFO: Created: latency-svc-l27tl
May 11 19:21:03.016: INFO: Got endpoints: latency-svc-l27tl [3.690727982s]
May 11 19:21:03.059: INFO: Created: latency-svc-4csbm
May 11 19:21:03.082: INFO: Got endpoints: latency-svc-4csbm [2.908216004s]
May 11 19:21:03.167: INFO: Created: latency-svc-s95pm
May 11 19:21:03.171: INFO: Got endpoints: latency-svc-s95pm [2.7572898s]
May 11 19:21:03.311: INFO: Created: latency-svc-kh27j
May 11 19:21:03.328: INFO: Got endpoints: latency-svc-kh27j [2.644587938s]
May 11 19:21:03.385: INFO: Created: latency-svc-59qtx
May 11 19:21:03.490: INFO: Got endpoints: latency-svc-59qtx [2.749643118s]
May 11 19:21:03.506: INFO: Created: latency-svc-nxj55
May 11 19:21:03.975: INFO: Got endpoints: latency-svc-nxj55 [2.676956963s]
May 11 19:21:03.978: INFO: Created: latency-svc-7j4cv
May 11 19:21:04.167: INFO: Created: latency-svc-nmskk
May 11 19:21:04.168: INFO: Got endpoints: latency-svc-7j4cv [2.521348583s]
May 11 19:21:04.198: INFO: Got endpoints: latency-svc-nmskk [2.318983288s]
May 11 19:21:04.265: INFO: Created: latency-svc-rq2w5
May 11 19:21:04.448: INFO: Got endpoints: latency-svc-rq2w5 [2.470817706s]
May 11 19:21:04.547: INFO: Created: latency-svc-w46wj
May 11 19:21:04.730: INFO: Got endpoints: latency-svc-w46wj [2.604123772s]
May 11 19:21:04.812: INFO: Created: latency-svc-spdlf
May 11 19:21:04.909: INFO: Got endpoints: latency-svc-spdlf [2.589245479s]
May 11 19:21:04.980: INFO: Created: latency-svc-dxjw5
May 11 19:21:05.008: INFO: Got endpoints: latency-svc-dxjw5 [2.443257759s]
May 11 19:21:05.131: INFO: Created: latency-svc-njdcc
May 11 19:21:05.134: INFO: Got endpoints: latency-svc-njdcc [2.40695123s]
May 11 19:21:05.288: INFO: Created: latency-svc-s7gdn
May 11 19:21:05.290: INFO: Got endpoints: latency-svc-s7gdn [2.467631469s]
May 11 19:21:05.514: INFO: Created: latency-svc-5fxz4
May 11 19:21:05.554: INFO: Got endpoints: latency-svc-5fxz4 [2.618537564s]
May 11 19:21:05.554: INFO: Created: latency-svc-n8wl5
May 11 19:21:05.597: INFO: Got endpoints: latency-svc-n8wl5 [2.581110483s]
May 11 19:21:05.740: INFO: Created: latency-svc-v8jxr
May 11 19:21:05.759: INFO: Got endpoints: latency-svc-v8jxr [2.677100876s]
May 11 19:21:05.808: INFO: Created: latency-svc-qz4q4
May 11 19:21:05.915: INFO: Got endpoints: latency-svc-qz4q4 [2.744106819s]
May 11 19:21:05.917: INFO: Created: latency-svc-77zjh
May 11 19:21:05.986: INFO: Got endpoints: latency-svc-77zjh [2.658320017s]
May 11 19:21:06.464: INFO: Created: latency-svc-8pdk2
May 11 19:21:06.898: INFO: Got endpoints: latency-svc-8pdk2 [3.407550026s]
May 11 19:21:07.168: INFO: Created: latency-svc-n2dg4
May 11 19:21:07.211: INFO: Got endpoints: latency-svc-n2dg4 [3.235609627s]
May 11 19:21:07.421: INFO: Created: latency-svc-cxnsx
May 11 19:21:07.456: INFO: Got endpoints: latency-svc-cxnsx [3.28815475s]
May 11 19:21:07.665: INFO: Created: latency-svc-5tlx4
May 11 19:21:07.703: INFO: Got endpoints: latency-svc-5tlx4 [3.504431361s]
May 11 19:21:07.947: INFO: Created: latency-svc-tlrwc
May 11 19:21:07.959: INFO: Got endpoints: latency-svc-tlrwc [3.510616429s]
May 11 19:21:08.133: INFO: Created: latency-svc-fqvlj
May 11 19:21:08.158: INFO: Got endpoints: latency-svc-fqvlj [3.428183285s]
May 11 19:21:08.360: INFO: Created: latency-svc-qclzv
May 11 19:21:08.392: INFO: Got endpoints: latency-svc-qclzv [3.482748841s]
May 11 19:21:08.431: INFO: Created: latency-svc-7pdcx
May 11 19:21:08.551: INFO: Got endpoints: latency-svc-7pdcx [3.54275587s]
May 11 19:21:08.605: INFO: Created: latency-svc-vsk7t
May 11 19:21:08.633: INFO: Got endpoints: latency-svc-vsk7t [3.498966644s]
May 11 19:21:08.731: INFO: Created: latency-svc-xw6hn
May 11 19:21:08.733: INFO: Got endpoints: latency-svc-xw6hn [3.443040945s]
May 11 19:21:08.809: INFO: Created: latency-svc-7tf9p
May 11 19:21:08.825: INFO: Got endpoints: latency-svc-7tf9p [3.271519188s]
May 11 19:21:08.928: INFO: Created: latency-svc-zb9gg
May 11 19:21:08.945: INFO: Got endpoints: latency-svc-zb9gg [3.348123882s]
May 11 19:21:08.979: INFO: Created: latency-svc-tn7fn
May 11 19:21:09.000: INFO: Got endpoints: latency-svc-tn7fn [3.240681317s]
May 11 19:21:09.082: INFO: Created: latency-svc-hpmvv
May 11 19:21:09.082: INFO: Got endpoints: latency-svc-hpmvv [3.166834592s]
May 11 19:21:09.118: INFO: Created: latency-svc-fs9hd
May 11 19:21:09.132: INFO: Got endpoints: latency-svc-fs9hd [3.146003087s]
May 11 19:21:09.232: INFO: Created: latency-svc-fhzms
May 11 19:21:09.247: INFO: Got endpoints: latency-svc-fhzms [2.348413398s]
May 11 19:21:09.277: INFO: Created: latency-svc-m4q72
May 11 19:21:09.313: INFO: Got endpoints: latency-svc-m4q72 [2.101818285s]
May 11 19:21:09.465: INFO: Created: latency-svc-bjckt
May 11 19:21:09.481: INFO: Got endpoints: latency-svc-bjckt [2.024850914s]
May 11 19:21:09.524: INFO: Created: latency-svc-9f8rb
May 11 19:21:09.554: INFO: Got endpoints: latency-svc-9f8rb [1.850726863s]
May 11 19:21:09.732: INFO: Created: latency-svc-9cg9h
May 11 19:21:09.739: INFO: Got endpoints: latency-svc-9cg9h [1.780165851s]
May 11 19:21:09.790: INFO: Created: latency-svc-svkd6
May 11 19:21:09.957: INFO: Got endpoints: latency-svc-svkd6 [1.799162081s]
May 11 19:21:09.985: INFO: Created: latency-svc-m5b4v
May 11 19:21:10.047: INFO: Got endpoints: latency-svc-m5b4v [1.654300222s]
May 11 19:21:10.167: INFO: Created: latency-svc-s4wq7
May 11 19:21:10.170: INFO: Got endpoints: latency-svc-s4wq7 [1.619141236s]
May 11 19:21:10.341: INFO: Created: latency-svc-c2znm
May 11 19:21:10.343: INFO: Got endpoints: latency-svc-c2znm [1.710088861s]
May 11 19:21:10.593: INFO: Created: latency-svc-kxffj
May 11 19:21:10.596: INFO: Got endpoints: latency-svc-kxffj [1.862277421s]
May 11 19:21:10.690: INFO: Created: latency-svc-c5qf2
May 11 19:21:10.754: INFO: Got endpoints: latency-svc-c5qf2 [1.928586421s]
May 11 19:21:10.790: INFO: Created: latency-svc-9pdkp
May 11 19:21:10.833: INFO: Got endpoints: latency-svc-9pdkp [1.887764394s]
May 11 19:21:11.114: INFO: Created: latency-svc-c46nb
May 11 19:21:11.119: INFO: Got endpoints: latency-svc-c46nb [2.119132482s]
May 11 19:21:11.306: INFO: Created: latency-svc-r2swg
May 11 19:21:11.331: INFO: Got endpoints: latency-svc-r2swg [2.248901391s]
May 11 19:21:11.368: INFO: Created: latency-svc-9zjsk
May 11 19:21:11.514: INFO: Got endpoints: latency-svc-9zjsk [2.3819671s]
May 11 19:21:11.536: INFO: Created: latency-svc-7rnv4
May 11 19:21:11.571: INFO: Got endpoints: latency-svc-7rnv4 [2.324368272s]
May 11 19:21:11.613: INFO: Created: latency-svc-876db
May 11 19:21:11.688: INFO: Got endpoints: latency-svc-876db [2.375032128s]
May 11 19:21:11.783: INFO: Created: latency-svc-tzj25
May 11 19:21:11.868: INFO: Got endpoints: latency-svc-tzj25 [2.38636816s]
May 11 19:21:11.927: INFO: Created: latency-svc-7wxmn
May 11 19:21:11.944: INFO: Got endpoints: latency-svc-7wxmn [2.390502534s]
May 11 19:21:12.099: INFO: Created: latency-svc-g5nmj
May 11 19:21:12.124: INFO: Got endpoints: latency-svc-g5nmj [2.385006545s]
May 11 19:21:12.279: INFO: Created: latency-svc-qpjbd
May 11 19:21:12.317: INFO: Got endpoints: latency-svc-qpjbd [2.359483426s]
May 11 19:21:12.527: INFO: Created: latency-svc-h5xqr
May 11 19:21:12.584: INFO: Got endpoints: latency-svc-h5xqr [2.537289404s]
May 11 19:21:12.586: INFO: Created: latency-svc-w4h78
May 11 19:21:12.599: INFO: Got endpoints: latency-svc-w4h78 [2.428845021s]
May 11 19:21:12.627: INFO: Created: latency-svc-cst4b
May 11 19:21:12.707: INFO: Got endpoints: latency-svc-cst4b [2.364087652s]
May 11 19:21:12.708: INFO: Created: latency-svc-6rblt
May 11 19:21:12.738: INFO: Got endpoints: latency-svc-6rblt [2.141825766s]
May 11 19:21:12.850: INFO: Created: latency-svc-gdbrm
May 11 19:21:12.875: INFO: Got endpoints: latency-svc-gdbrm [2.12136056s]
May 11 19:21:12.911: INFO: Created: latency-svc-md265
May 11 19:21:13.018: INFO: Got endpoints: latency-svc-md265 [2.184449274s]
May 11 19:21:13.048: INFO: Created: latency-svc-s8ppg
May 11 19:21:13.062: INFO: Got endpoints: latency-svc-s8ppg [1.942912036s]
May 11 19:21:13.089: INFO: Created: latency-svc-shb4l
May 11 19:21:13.099: INFO: Got endpoints: latency-svc-shb4l [1.76732877s]
May 11 19:21:13.155: INFO: Created: latency-svc-hs7w5
May 11 19:21:13.187: INFO: Got endpoints: latency-svc-hs7w5 [1.672036443s]
May 11 19:21:13.224: INFO: Created: latency-svc-2knqb
May 11 19:21:13.237: INFO: Got endpoints: latency-svc-2knqb [1.666216873s]
May 11 19:21:13.305: INFO: Created: latency-svc-jgqp4
May 11 19:21:13.308: INFO: Got endpoints: latency-svc-jgqp4 [1.619889038s]
May 11 19:21:13.358: INFO: Created: latency-svc-xxjfw
May 11 19:21:13.382: INFO: Got endpoints: latency-svc-xxjfw [1.514709144s]
May 11 19:21:13.455: INFO: Created: latency-svc-p524q
May 11 19:21:13.466: INFO: Got endpoints: latency-svc-p524q [1.522163928s]
May 11 19:21:13.493: INFO: Created: latency-svc-6t29x
May 11 19:21:13.509: INFO: Got endpoints: latency-svc-6t29x [1.384275089s]
May 11 19:21:13.532: INFO: Created: latency-svc-tr47t
May 11 19:21:13.549: INFO: Got endpoints: latency-svc-tr47t [1.231466938s]
May 11 19:21:13.599: INFO: Created: latency-svc-tjwmz
May 11 19:21:13.610: INFO: Got endpoints: latency-svc-tjwmz [1.025753289s]
May 11 19:21:13.661: INFO: Created: latency-svc-v2rt5
May 11 19:21:13.694: INFO: Got endpoints: latency-svc-v2rt5 [1.095461012s]
May 11 19:21:13.784: INFO: Created: latency-svc-g8hkc
May 11 19:21:13.787: INFO: Got endpoints: latency-svc-g8hkc [1.079731654s]
May 11 19:21:13.821: INFO: Created: latency-svc-k8cpm
May 11 19:21:13.846: INFO: Got endpoints: latency-svc-k8cpm [1.107973136s]
May 11 19:21:13.875: INFO: Created: latency-svc-7jnv5
May 11 19:21:13.957: INFO: Got endpoints: latency-svc-7jnv5 [1.081990928s]
May 11 19:21:13.960: INFO: Created: latency-svc-m6vr8
May 11 19:21:13.972: INFO: Got endpoints: latency-svc-m6vr8 [954.457591ms]
May 11 19:21:14.003: INFO: Created: latency-svc-qsgd6
May 11 19:21:14.019: INFO: Got endpoints: latency-svc-qsgd6 [957.496383ms]
May 11 19:21:14.058: INFO: Created: latency-svc-j7fpt
May 11 19:21:14.138: INFO: Got endpoints: latency-svc-j7fpt [1.039162048s]
May 11 19:21:14.151: INFO: Created: latency-svc-9sttd
May 11 19:21:14.164: INFO: Got endpoints: latency-svc-9sttd [977.560903ms]
May 11 19:21:14.201: INFO: Created: latency-svc-xkrpg
May 11 19:21:14.287: INFO: Created: latency-svc-dcvnh
May 11 19:21:14.318: INFO: Created: latency-svc-g7ck2
May 11 19:21:14.318: INFO: Got endpoints: latency-svc-xkrpg [1.081155091s]
May 11 19:21:14.334: INFO: Got endpoints: latency-svc-g7ck2 [951.183724ms]
May 11 19:21:14.360: INFO: Got endpoints: latency-svc-dcvnh [1.052395683s]
May 11 19:21:14.363: INFO: Created: latency-svc-fqs7n
May 11 19:21:14.376: INFO: Got endpoints: latency-svc-fqs7n [909.499897ms]
May 11 19:21:14.425: INFO: Created: latency-svc-nbvk9
May 11 19:21:14.427: INFO: Got endpoints: latency-svc-nbvk9 [918.259149ms]
May 11 19:21:14.505: INFO: Created: latency-svc-p7lwf
May 11 19:21:14.646: INFO: Got endpoints: latency-svc-p7lwf [1.097489976s]
May 11 19:21:14.649: INFO: Created: latency-svc-rmtgv
May 11 19:21:14.695: INFO: Got endpoints: latency-svc-rmtgv [1.085307173s]
May 11 19:21:14.740: INFO: Created: latency-svc-zn8bc
May 11 19:21:14.862: INFO: Got endpoints: latency-svc-zn8bc [1.167503911s]
May 11 19:21:14.864: INFO: Created: latency-svc-j6br9
May 11 19:21:14.877: INFO: Got endpoints: latency-svc-j6br9 [1.090197674s]
May 11 19:21:14.922: INFO: Created: latency-svc-x468d
May 11 19:21:14.961: INFO: Got endpoints: latency-svc-x468d [1.115616535s]
May 11 19:21:15.043: INFO: Created: latency-svc-fv5wx
May 11 19:21:15.088: INFO: Got endpoints: latency-svc-fv5wx [1.130101054s]
May 11 19:21:15.227: INFO: Created: latency-svc-cn92x
May 11 19:21:15.244: INFO: Got endpoints: latency-svc-cn92x [1.271964162s]
May 11 19:21:15.325: INFO: Created: latency-svc-86pr7
May 11 19:21:15.377: INFO: Got endpoints: latency-svc-86pr7 [1.357017265s]
May 11 19:21:15.390: INFO: Created: latency-svc-zlkvv
May 11 19:21:15.419: INFO: Got endpoints: latency-svc-zlkvv [1.280917957s]
May 11 19:21:15.419: INFO: Latencies: [216.424394ms 695.975048ms 909.499897ms 918.259149ms 951.183724ms 954.457591ms 957.496383ms 977.560903ms 1.025753289s 1.039162048s 1.052395683s 1.079731654s 1.081155091s 1.081990928s 1.085307173s 1.090197674s 1.095461012s 1.097489976s 1.107973136s 1.115616535s 1.130101054s 1.167503911s 1.19988957s 1.231466938s 1.238528538s 1.271964162s 1.280917957s 
1.295040597s 1.357017265s 1.35968203s 1.384275089s 1.403180586s 1.449506364s 1.514709144s 1.522163928s 1.588029983s 1.619141236s 1.619889038s 1.654300222s 1.666216873s 1.672036443s 1.677613604s 1.708394903s 1.710088861s 1.719839698s 1.719905182s 1.76732877s 1.772131364s 1.780165851s 1.799162081s 1.821501393s 1.850726863s 1.862277421s 1.8641447s 1.887764394s 1.908941826s 1.928586421s 1.942912036s 1.95203422s 1.984809069s 2.013790578s 2.024850914s 2.038645025s 2.043921852s 2.101818285s 2.10324316s 2.119132482s 2.12136056s 2.139403146s 2.141825766s 2.163033906s 2.165975926s 2.168839476s 2.181238431s 2.184449274s 2.204604515s 2.229246365s 2.248901391s 2.260195036s 2.318983288s 2.324368272s 2.348413398s 2.359483426s 2.364087652s 2.375032128s 2.3819671s 2.385006545s 2.38636816s 2.390502534s 2.40695123s 2.409090759s 2.428845021s 2.443257759s 2.467631469s 2.470817706s 2.521348583s 2.537289404s 2.570103168s 2.581110483s 2.589245479s 2.604123772s 2.618537564s 2.644587938s 2.658320017s 2.676956963s 2.677100876s 2.710532769s 2.730650914s 2.744106819s 2.749643118s 2.7572898s 2.880247139s 2.885707448s 2.908216004s 2.980252665s 3.146003087s 3.148749841s 3.166834592s 3.235609627s 3.240681317s 3.271519188s 3.286198208s 3.28815475s 3.348123882s 3.407550026s 3.412904987s 3.417904595s 3.42422939s 3.428183285s 3.433758933s 3.443040945s 3.482748841s 3.490519826s 3.498966644s 3.504431361s 3.510616429s 3.54275587s 3.555463982s 3.58096593s 3.618800489s 3.630637718s 3.64340138s 3.690727982s 3.69482095s 3.922292871s 4.135974021s 4.421555665s 4.663150615s 4.695899678s 4.697933981s 4.727502317s 4.926744286s 5.022077077s 5.15020071s 5.384534123s 5.464227161s 5.595712518s 5.635361636s 6.044105254s 6.147311785s 6.461216799s 6.466529769s 6.581471205s 6.600302261s 6.628577512s 6.857089879s 7.356166114s 7.361308833s 7.652757301s 7.867091311s 8.07799685s 8.209066072s 8.261994452s 8.28262433s 8.294624025s 8.396781757s 8.42324082s 8.789397381s 8.795535418s 8.927587051s 9.033127975s 9.161967263s 
9.260763325s 9.476701611s 9.4907843s 9.964041984s 10.317510064s 10.472372919s 10.52305567s 11.055703424s 11.311663772s 11.706523498s 11.77337615s 12.774871929s 12.786020765s 12.986803608s 13.037654176s 13.786107228s 13.954660492s 14.216402535s] May 11 19:21:15.419: INFO: 50 %ile: 2.604123772s May 11 19:21:15.419: INFO: 90 %ile: 9.033127975s May 11 19:21:15.419: INFO: 99 %ile: 13.954660492s May 11 19:21:15.419: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:21:15.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-lprd4" for this suite. May 11 19:22:49.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:22:49.463: INFO: namespace: e2e-tests-svc-latency-lprd4, resource: bindings, ignored listing per whitelist May 11 19:22:49.505: INFO: namespace e2e-tests-svc-latency-lprd4 deletion completed in 1m34.080139472s • [SLOW TEST:157.592 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:22:49.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in 
namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-w4dcw/configmap-test-cd4e9758-93bc-11ea-b832-0242ac110018
STEP: Creating a pod to test consume configMaps
May 11 19:22:49.616: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd52daec-93bc-11ea-b832-0242ac110018" in namespace "e2e-tests-configmap-w4dcw" to be "success or failure"
May 11 19:22:49.639: INFO: Pod "pod-configmaps-cd52daec-93bc-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.364275ms
May 11 19:22:51.864: INFO: Pod "pod-configmaps-cd52daec-93bc-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247678556s
May 11 19:22:53.866: INFO: Pod "pod-configmaps-cd52daec-93bc-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250237194s
May 11 19:22:55.870: INFO: Pod "pod-configmaps-cd52daec-93bc-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.254026502s
May 11 19:22:57.873: INFO: Pod "pod-configmaps-cd52daec-93bc-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.256984628s
STEP: Saw pod success
May 11 19:22:57.873: INFO: Pod "pod-configmaps-cd52daec-93bc-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 19:22:57.875: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-cd52daec-93bc-11ea-b832-0242ac110018 container env-test:
STEP: delete the pod
May 11 19:22:57.950: INFO: Waiting for pod pod-configmaps-cd52daec-93bc-11ea-b832-0242ac110018 to disappear
May 11 19:22:57.963: INFO: Pod pod-configmaps-cd52daec-93bc-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:22:57.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-w4dcw" for this suite.
May 11 19:23:04.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:23:04.049: INFO: namespace: e2e-tests-configmap-w4dcw, resource: bindings, ignored listing per whitelist
May 11 19:23:04.076: INFO: namespace e2e-tests-configmap-w4dcw deletion completed in 6.109362642s
• [SLOW TEST:14.570 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:23:04.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-82lp
STEP: Creating a pod to test atomic-volume-subpath
May 11 19:23:04.676: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-82lp" in namespace "e2e-tests-subpath-zddd8" to be "success or failure"
May 11 19:23:04.786: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Pending", Reason="", readiness=false. Elapsed: 110.005224ms
May 11 19:23:07.068: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.391791029s
May 11 19:23:09.071: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3952345s
May 11 19:23:11.145: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.468869467s
May 11 19:23:13.194: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.517687819s
May 11 19:23:15.198: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Running", Reason="", readiness=false. Elapsed: 10.521647255s
May 11 19:23:17.201: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Running", Reason="", readiness=false. Elapsed: 12.525488542s
May 11 19:23:19.205: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Running", Reason="", readiness=false. Elapsed: 14.529135178s
May 11 19:23:21.209: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Running", Reason="", readiness=false. Elapsed: 16.532604337s
May 11 19:23:23.218: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Running", Reason="", readiness=false. Elapsed: 18.541641672s
May 11 19:23:25.221: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Running", Reason="", readiness=false. Elapsed: 20.545209117s
May 11 19:23:27.225: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Running", Reason="", readiness=false. Elapsed: 22.549045374s
May 11 19:23:29.227: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Running", Reason="", readiness=false. Elapsed: 24.551412349s
May 11 19:23:31.231: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Running", Reason="", readiness=false. Elapsed: 26.555015958s
May 11 19:23:33.469: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Running", Reason="", readiness=false. Elapsed: 28.793454052s
May 11 19:23:35.474: INFO: Pod "pod-subpath-test-downwardapi-82lp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.797957597s
STEP: Saw pod success
May 11 19:23:35.474: INFO: Pod "pod-subpath-test-downwardapi-82lp" satisfied condition "success or failure"
May 11 19:23:35.477: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-82lp container test-container-subpath-downwardapi-82lp:
STEP: delete the pod
May 11 19:23:35.501: INFO: Waiting for pod pod-subpath-test-downwardapi-82lp to disappear
May 11 19:23:35.505: INFO: Pod pod-subpath-test-downwardapi-82lp no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-82lp
May 11 19:23:35.505: INFO: Deleting pod "pod-subpath-test-downwardapi-82lp" in namespace "e2e-tests-subpath-zddd8"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:23:35.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-zddd8" for this suite.
May 11 19:23:43.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:23:43.671: INFO: namespace: e2e-tests-subpath-zddd8, resource: bindings, ignored listing per whitelist
May 11 19:23:43.685: INFO: namespace e2e-tests-subpath-zddd8 deletion completed in 8.128270126s
• [SLOW TEST:39.609 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:23:43.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
May 11 19:23:43.790: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 11 19:23:43.870: INFO: Waiting for terminating namespaces to be deleted...
May 11 19:23:43.873: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
May 11 19:23:43.877: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
May 11 19:23:43.877: INFO: Container kube-proxy ready: true, restart count 0
May 11 19:23:43.877: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 11 19:23:43.877: INFO: Container kindnet-cni ready: true, restart count 0
May 11 19:23:43.877: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 11 19:23:43.877: INFO: Container coredns ready: true, restart count 0
May 11 19:23:43.877: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
May 11 19:23:43.882: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 11 19:23:43.882: INFO: Container kindnet-cni ready: true, restart count 0
May 11 19:23:43.882: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 11 19:23:43.882: INFO: Container coredns ready: true, restart count 0
May 11 19:23:43.882: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 11 19:23:43.882: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160e0fbd3956cd39], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
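[Editor's note] The FailedScheduling event above ("3 node(s) didn't match node selector") reflects the node-selector predicate: a pod's `nodeSelector` must be a subset of a node's labels for the node to be a candidate. A minimal stdlib-only Go sketch of that subset check (a hypothetical helper, not the scheduler's actual code):

```go
package main

import "fmt"

// matchesNodeSelector reports whether every key/value pair in the pod's
// nodeSelector is present, with the same value, in the node's labels.
func matchesNodeSelector(nodeLabels, selector map[string]string) bool {
	for k, v := range selector {
		if nodeLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	nodeLabels := map[string]string{
		"kubernetes.io/hostname": "hunter-worker",
		"beta.kubernetes.io/os":  "linux",
	}
	// A nonempty selector that no node carries, as in the test above:
	// every node fails the check, so the pod stays Pending.
	selector := map[string]string{"nonexistent-label": "value"}
	fmt.Println(matchesNodeSelector(nodeLabels, selector)) // false
}
```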
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:23:44.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-9vfmw" for this suite.
May 11 19:23:50.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:23:50.945: INFO: namespace: e2e-tests-sched-pred-9vfmw, resource: bindings, ignored listing per whitelist
May 11 19:23:50.978: INFO: namespace e2e-tests-sched-pred-9vfmw deletion completed in 6.079116039s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:7.293 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:23:50.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-rxxtd
May 11 19:23:55.071: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-rxxtd
STEP: checking the pod's current state and verifying that restartCount is present
May 11 19:23:55.073: INFO: Initial restart count of pod liveness-exec is 0
May 11 19:24:43.355: INFO: Restart count of pod e2e-tests-container-probe-rxxtd/liveness-exec is now 1 (48.281149976s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:24:43.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rxxtd" for this suite.
May 11 19:24:49.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:24:49.463: INFO: namespace: e2e-tests-container-probe-rxxtd, resource: bindings, ignored listing per whitelist
May 11 19:24:49.506: INFO: namespace e2e-tests-container-probe-rxxtd deletion completed in 6.124665909s
• [SLOW TEST:58.528 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:24:49.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
May 11 19:24:58.026: INFO: Pod pod-hostip-14e5928d-93bd-11ea-b832-0242ac110018 has hostIP: 172.17.0.3
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:24:58.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jf6wd" for this suite.
May 11 19:25:22.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:25:22.635: INFO: namespace: e2e-tests-pods-jf6wd, resource: bindings, ignored listing per whitelist
May 11 19:25:22.641: INFO: namespace e2e-tests-pods-jf6wd deletion completed in 24.610707669s
• [SLOW TEST:33.135 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:25:22.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 11 19:25:51.192: INFO: Container started at 2020-05-11 19:25:26 +0000 UTC, pod became ready at 2020-05-11 19:25:49 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:25:51.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-l2cbq" for this suite.
May 11 19:26:15.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:26:15.364: INFO: namespace: e2e-tests-container-probe-l2cbq, resource: bindings, ignored listing per whitelist
May 11 19:26:15.370: INFO: namespace e2e-tests-container-probe-l2cbq deletion completed in 24.171361143s
• [SLOW TEST:52.728 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:26:15.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 11 19:26:15.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
May 11 19:26:15.605: INFO: stderr: ""
May 11 19:26:15.605: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
May 11 19:26:15.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ppd7x'
May 11 19:26:21.932: INFO: stderr: ""
May 11 19:26:21.932: INFO: stdout: "replicationcontroller/redis-master created\n"
May 11 19:26:21.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ppd7x'
May 11 19:26:22.742: INFO: stderr: ""
May 11 19:26:22.742: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
May 11 19:26:23.745: INFO: Selector matched 1 pods for map[app:redis] May 11 19:26:23.745: INFO: Found 0 / 1 May 11 19:26:24.746: INFO: Selector matched 1 pods for map[app:redis] May 11 19:26:24.746: INFO: Found 0 / 1 May 11 19:26:25.962: INFO: Selector matched 1 pods for map[app:redis] May 11 19:26:25.962: INFO: Found 0 / 1 May 11 19:26:26.746: INFO: Selector matched 1 pods for map[app:redis] May 11 19:26:26.746: INFO: Found 0 / 1 May 11 19:26:27.801: INFO: Selector matched 1 pods for map[app:redis] May 11 19:26:27.801: INFO: Found 0 / 1 May 11 19:26:28.788: INFO: Selector matched 1 pods for map[app:redis] May 11 19:26:28.788: INFO: Found 1 / 1 May 11 19:26:28.788: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 11 19:26:28.791: INFO: Selector matched 1 pods for map[app:redis] May 11 19:26:28.791: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 11 19:26:28.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-n2znc --namespace=e2e-tests-kubectl-ppd7x' May 11 19:26:28.918: INFO: stderr: "" May 11 19:26:28.918: INFO: stdout: "Name: redis-master-n2znc\nNamespace: e2e-tests-kubectl-ppd7x\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.17.0.3\nStart Time: Mon, 11 May 2020 19:26:22 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.177\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://20a300eb04ff1b56aaeae7d21e964754ca8f5af472e46943c78a8558a9aa2de5\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 11 May 2020 19:26:27 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-tjg5m (ro)\nConditions:\n Type 
Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-tjg5m:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-tjg5m\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned e2e-tests-kubectl-ppd7x/redis-master-n2znc to hunter-worker\n Normal Pulled 4s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker Created container\n Normal Started 1s kubelet, hunter-worker Started container\n" May 11 19:26:28.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-ppd7x' May 11 19:26:29.035: INFO: stderr: "" May 11 19:26:29.035: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-ppd7x\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: redis-master-n2znc\n" May 11 19:26:29.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-ppd7x' May 11 19:26:29.131: INFO: stderr: "" May 11 19:26:29.131: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-ppd7x\nLabels: app=redis\n role=master\nAnnotations: \nSelector: 
app=redis,role=master\nType: ClusterIP\nIP: 10.96.122.202\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.177:6379\nSession Affinity: None\nEvents: \n" May 11 19:26:29.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' May 11 19:26:29.241: INFO: stderr: "" May 11 19:26:29.241: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 11 May 2020 19:26:27 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 11 May 2020 19:26:27 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 11 May 2020 19:26:27 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 11 May 2020 19:26:27 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 
683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 57d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 11 19:26:29.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-ppd7x' May 11 19:26:29.340: INFO: stderr: "" May 11 19:26:29.340: INFO: stdout: "Name: e2e-tests-kubectl-ppd7x\nLabels: e2e-framework=kubectl\n e2e-run=772453d2-93ab-11ea-b832-0242ac110018\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:26:29.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ppd7x" for this suite. 
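The `kubectl describe` checks above operate on a ReplicationController, its pod, and its service. As an editorial aid, here is a hedged sketch of what the `redis-master` ReplicationController likely looked like; every field value below is inferred from the `describe` output captured in the log (selector `app=redis,role=master`, image `gcr.io/kubernetes-e2e-test-images/redis:1.0`, port `6379/TCP`), not taken from the test's actual manifest:

```yaml
# Sketch reconstructed from the "kubectl describe rc redis-master" output above.
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379   # matches "Port: 6379/TCP" in the describe output
```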
May 11 19:26:53.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:26:53.410: INFO: namespace: e2e-tests-kubectl-ppd7x, resource: bindings, ignored listing per whitelist May 11 19:26:53.430: INFO: namespace e2e-tests-kubectl-ppd7x deletion completed in 24.087268927s • [SLOW TEST:38.060 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:26:53.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 11 19:26:53.800: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-5edc9495-93bd-11ea-b832-0242ac110018" in namespace "e2e-tests-projected-725gv" to be "success or failure" May 11 19:26:53.816: INFO: Pod "downwardapi-volume-5edc9495-93bd-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.154194ms May 11 19:26:57.283: INFO: Pod "downwardapi-volume-5edc9495-93bd-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.483529201s May 11 19:26:59.286: INFO: Pod "downwardapi-volume-5edc9495-93bd-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.486225153s May 11 19:27:01.290: INFO: Pod "downwardapi-volume-5edc9495-93bd-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 7.489790967s May 11 19:27:03.293: INFO: Pod "downwardapi-volume-5edc9495-93bd-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.493435201s STEP: Saw pod success May 11 19:27:03.293: INFO: Pod "downwardapi-volume-5edc9495-93bd-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 19:27:03.295: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5edc9495-93bd-11ea-b832-0242ac110018 container client-container: STEP: delete the pod May 11 19:27:03.346: INFO: Waiting for pod downwardapi-volume-5edc9495-93bd-11ea-b832-0242ac110018 to disappear May 11 19:27:03.400: INFO: Pod downwardapi-volume-5edc9495-93bd-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:27:03.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-725gv" for this suite. 
May 11 19:27:11.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:27:11.557: INFO: namespace: e2e-tests-projected-725gv, resource: bindings, ignored listing per whitelist May 11 19:27:11.571: INFO: namespace e2e-tests-projected-725gv deletion completed in 8.167335947s • [SLOW TEST:18.141 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:27:11.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 11 19:27:11.772: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6994a16e-93bd-11ea-b832-0242ac110018" in namespace "e2e-tests-downward-api-wzzcz" to be "success or failure" May 11 19:27:11.838: INFO: Pod "downwardapi-volume-6994a16e-93bd-11ea-b832-0242ac110018": 
Phase="Pending", Reason="", readiness=false. Elapsed: 65.688953ms May 11 19:27:13.987: INFO: Pod "downwardapi-volume-6994a16e-93bd-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214767213s May 11 19:27:16.066: INFO: Pod "downwardapi-volume-6994a16e-93bd-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293368871s May 11 19:27:18.307: INFO: Pod "downwardapi-volume-6994a16e-93bd-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.534235626s May 11 19:27:20.310: INFO: Pod "downwardapi-volume-6994a16e-93bd-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.537929477s STEP: Saw pod success May 11 19:27:20.310: INFO: Pod "downwardapi-volume-6994a16e-93bd-11ea-b832-0242ac110018" satisfied condition "success or failure" May 11 19:27:20.312: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6994a16e-93bd-11ea-b832-0242ac110018 container client-container: STEP: delete the pod May 11 19:27:20.840: INFO: Waiting for pod downwardapi-volume-6994a16e-93bd-11ea-b832-0242ac110018 to disappear May 11 19:27:20.868: INFO: Pod downwardapi-volume-6994a16e-93bd-11ea-b832-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:27:20.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wzzcz" for this suite. 
May 11 19:27:26.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:27:26.966: INFO: namespace: e2e-tests-downward-api-wzzcz, resource: bindings, ignored listing per whitelist May 11 19:27:26.990: INFO: namespace e2e-tests-downward-api-wzzcz deletion completed in 6.118228542s • [SLOW TEST:15.419 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:27:26.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-72cde269-93bd-11ea-b832-0242ac110018 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-72cde269-93bd-11ea-b832-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:28:40.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
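The projected configMap test above ("waiting to observe update in volume") relies on the kubelet periodically refreshing configMap-backed volumes, so an update to the ConfigMap object eventually appears in the mounted files without restarting the pod. A hedged sketch of this setup, with illustrative names and an assumed image:

```yaml
# Illustrative sketch -- names, image, and command are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-example
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap-example
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox   # assumed image
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 5; done"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-example
```

Editing `data-1` in the ConfigMap is eventually reflected in `/etc/config/data-1`; note that volumes mounted via `subPath` do not receive such updates.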
STEP: Destroying namespace "e2e-tests-projected-2dn8c" for this suite. May 11 19:29:04.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:29:04.532: INFO: namespace: e2e-tests-projected-2dn8c, resource: bindings, ignored listing per whitelist May 11 19:29:04.572: INFO: namespace e2e-tests-projected-2dn8c deletion completed in 24.17992351s • [SLOW TEST:97.582 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:29:04.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-qv96 STEP: Creating a pod to test atomic-volume-subpath May 11 19:29:04.708: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-qv96" in namespace "e2e-tests-subpath-b24pl" to be "success or failure" May 11 19:29:04.720: INFO: Pod 
"pod-subpath-test-projected-qv96": Phase="Pending", Reason="", readiness=false. Elapsed: 12.522003ms May 11 19:29:06.723: INFO: Pod "pod-subpath-test-projected-qv96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015592369s May 11 19:29:08.761: INFO: Pod "pod-subpath-test-projected-qv96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053616108s May 11 19:29:10.809: INFO: Pod "pod-subpath-test-projected-qv96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101620303s May 11 19:29:12.813: INFO: Pod "pod-subpath-test-projected-qv96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105420267s May 11 19:29:14.816: INFO: Pod "pod-subpath-test-projected-qv96": Phase="Running", Reason="", readiness=true. Elapsed: 10.108665334s May 11 19:29:16.822: INFO: Pod "pod-subpath-test-projected-qv96": Phase="Running", Reason="", readiness=false. Elapsed: 12.113792879s May 11 19:29:18.826: INFO: Pod "pod-subpath-test-projected-qv96": Phase="Running", Reason="", readiness=false. Elapsed: 14.117825898s May 11 19:29:20.830: INFO: Pod "pod-subpath-test-projected-qv96": Phase="Running", Reason="", readiness=false. Elapsed: 16.122241422s May 11 19:29:22.834: INFO: Pod "pod-subpath-test-projected-qv96": Phase="Running", Reason="", readiness=false. Elapsed: 18.126038484s May 11 19:29:24.837: INFO: Pod "pod-subpath-test-projected-qv96": Phase="Running", Reason="", readiness=false. Elapsed: 20.129411239s May 11 19:29:26.841: INFO: Pod "pod-subpath-test-projected-qv96": Phase="Running", Reason="", readiness=false. Elapsed: 22.133611791s May 11 19:29:28.845: INFO: Pod "pod-subpath-test-projected-qv96": Phase="Running", Reason="", readiness=false. Elapsed: 24.137364871s May 11 19:29:30.848: INFO: Pod "pod-subpath-test-projected-qv96": Phase="Running", Reason="", readiness=false. Elapsed: 26.140045246s May 11 19:29:32.852: INFO: Pod "pod-subpath-test-projected-qv96": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.14400634s STEP: Saw pod success May 11 19:29:32.852: INFO: Pod "pod-subpath-test-projected-qv96" satisfied condition "success or failure" May 11 19:29:32.855: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-qv96 container test-container-subpath-projected-qv96: STEP: delete the pod May 11 19:29:33.538: INFO: Waiting for pod pod-subpath-test-projected-qv96 to disappear May 11 19:29:33.834: INFO: Pod pod-subpath-test-projected-qv96 no longer exists STEP: Deleting pod pod-subpath-test-projected-qv96 May 11 19:29:33.834: INFO: Deleting pod "pod-subpath-test-projected-qv96" in namespace "e2e-tests-subpath-b24pl" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 11 19:29:33.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-b24pl" for this suite. May 11 19:29:39.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 19:29:39.915: INFO: namespace: e2e-tests-subpath-b24pl, resource: bindings, ignored listing per whitelist May 11 19:29:40.084: INFO: namespace e2e-tests-subpath-b24pl deletion completed in 6.215942408s • [SLOW TEST:35.512 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
[BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 11 19:29:40.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 11 19:29:40.340: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 11 19:29:40.352: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:40.354: INFO: Number of nodes with available pods: 0 May 11 19:29:40.354: INFO: Node hunter-worker is running more than one daemon pod May 11 19:29:41.357: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:41.360: INFO: Number of nodes with available pods: 0 May 11 19:29:41.360: INFO: Node hunter-worker is running more than one daemon pod May 11 19:29:42.358: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:42.361: INFO: Number of nodes with available pods: 0 May 11 19:29:42.361: INFO: Node hunter-worker is running more than one daemon pod May 11 19:29:45.009: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:45.012: INFO: Number of nodes with available pods: 0 May 11 19:29:45.012: INFO: Node hunter-worker is running more than one daemon pod May 11 19:29:45.685: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:45.688: INFO: Number of nodes with available pods: 0 May 11 19:29:45.688: INFO: Node hunter-worker is running more than one daemon pod May 11 19:29:46.359: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:46.362: INFO: Number of nodes with available pods: 0 May 11 19:29:46.362: INFO: Node hunter-worker is running more than one daemon pod May 11 19:29:48.900: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:49.140: INFO: Number of nodes with available pods: 1 May 11 19:29:49.140: INFO: Node hunter-worker is running more than one daemon pod May 11 19:29:49.849: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:50.138: INFO: Number of nodes with available pods: 2 May 11 19:29:50.138: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 11 19:29:51.109: INFO: Wrong image for pod: daemon-set-7h52m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:29:51.109: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 11 19:29:51.111: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:52.272: INFO: Wrong image for pod: daemon-set-7h52m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:29:52.272: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:29:52.276: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:53.464: INFO: Wrong image for pod: daemon-set-7h52m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:29:53.464: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:29:53.948: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:54.145: INFO: Wrong image for pod: daemon-set-7h52m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:29:54.145: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:29:54.148: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:55.335: INFO: Wrong image for pod: daemon-set-7h52m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 11 19:29:55.335: INFO: Pod daemon-set-7h52m is not available May 11 19:29:55.335: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:29:55.339: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:56.464: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:29:56.464: INFO: Pod daemon-set-pvnmb is not available May 11 19:29:56.531: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:57.116: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:29:57.116: INFO: Pod daemon-set-pvnmb is not available May 11 19:29:57.120: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:58.517: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:29:58.517: INFO: Pod daemon-set-pvnmb is not available May 11 19:29:58.520: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:29:59.175: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 11 19:29:59.175: INFO: Pod daemon-set-pvnmb is not available May 11 19:29:59.178: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:30:00.115: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:30:00.115: INFO: Pod daemon-set-pvnmb is not available May 11 19:30:00.118: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:30:01.164: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:30:01.164: INFO: Pod daemon-set-pvnmb is not available May 11 19:30:01.236: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:30:02.146: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:30:02.146: INFO: Pod daemon-set-pvnmb is not available May 11 19:30:02.150: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:30:03.954: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:30:03.958: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:30:04.249: INFO: Wrong image for pod: daemon-set-l696p. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:30:04.333: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:30:05.136: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:30:05.139: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:30:06.117: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:30:06.154: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:30:07.200: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 19:30:07.200: INFO: Pod daemon-set-l696p is not available May 11 19:30:07.205: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 19:30:08.121: INFO: Wrong image for pod: daemon-set-l696p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 11 19:30:08.121: INFO: Pod daemon-set-l696p is not available
May 11 19:30:08.344: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 19:30:09.117: INFO: Pod daemon-set-x8xr4 is not available
May 11 19:30:09.121: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 11 19:30:09.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 19:30:09.128: INFO: Number of nodes with available pods: 1
May 11 19:30:09.128: INFO: Node hunter-worker2 is running more than one daemon pod
May 11 19:30:10.133: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 19:30:10.136: INFO: Number of nodes with available pods: 1
May 11 19:30:10.136: INFO: Node hunter-worker2 is running more than one daemon pod
May 11 19:30:11.373: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 19:30:11.684: INFO: Number of nodes with available pods: 1
May 11 19:30:11.684: INFO: Node hunter-worker2 is running more than one daemon pod
May 11 19:30:12.132: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 19:30:12.135: INFO: Number of nodes with available pods: 1
May 11 19:30:12.135: INFO: Node hunter-worker2 is running more than one daemon pod
May 11 19:30:13.140: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 19:30:13.492: INFO: Number of nodes with available pods: 1
May 11 19:30:13.492: INFO: Node hunter-worker2 is running more than one daemon pod
May 11 19:30:14.548: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 19:30:14.551: INFO: Number of nodes with available pods: 2
May 11 19:30:14.551: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-8zfkw, will wait for the garbage collector to delete the pods
May 11 19:30:16.682: INFO: Deleting DaemonSet.extensions daemon-set took: 195.304248ms
May 11 19:30:17.282: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.200883ms
May 11 19:30:31.846: INFO: Number of nodes with available pods: 0
May 11 19:30:31.846: INFO: Number of running nodes: 0, number of available pods: 0
May 11 19:30:31.848: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-8zfkw/daemonsets","resourceVersion":"10009140"},"items":null}
May 11 19:30:31.851: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-8zfkw/pods","resourceVersion":"10009140"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:30:31.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-8zfkw" for this suite.
May 11 19:30:37.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:30:37.934: INFO: namespace: e2e-tests-daemonsets-8zfkw, resource: bindings, ignored listing per whitelist
May 11 19:30:37.971: INFO: namespace e2e-tests-daemonsets-8zfkw deletion completed in 6.109153907s

• [SLOW TEST:57.886 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:30:37.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 11 19:30:38.070: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:30:51.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-r7jgz" for this suite.
May 11 19:31:17.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:31:17.167: INFO: namespace: e2e-tests-init-container-r7jgz, resource: bindings, ignored listing per whitelist
May 11 19:31:17.221: INFO: namespace e2e-tests-init-container-r7jgz deletion completed in 26.158522189s

• [SLOW TEST:39.250 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:31:17.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
May 11 19:31:18.425: INFO: Waiting up to 5m0s for pod "downward-api-fc3aa8a6-93bd-11ea-b832-0242ac110018" in namespace "e2e-tests-downward-api-sj4k7" to be "success or failure"
May 11 19:31:18.748: INFO: Pod "downward-api-fc3aa8a6-93bd-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 322.634155ms
May 11 19:31:20.751: INFO: Pod "downward-api-fc3aa8a6-93bd-11ea-b832-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32551451s
May 11 19:31:22.973: INFO: Pod "downward-api-fc3aa8a6-93bd-11ea-b832-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.548005475s
May 11 19:31:24.977: INFO: Pod "downward-api-fc3aa8a6-93bd-11ea-b832-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.551760935s
STEP: Saw pod success
May 11 19:31:24.977: INFO: Pod "downward-api-fc3aa8a6-93bd-11ea-b832-0242ac110018" satisfied condition "success or failure"
May 11 19:31:24.979: INFO: Trying to get logs from node hunter-worker pod downward-api-fc3aa8a6-93bd-11ea-b832-0242ac110018 container dapi-container:
STEP: delete the pod
May 11 19:31:25.038: INFO: Waiting for pod downward-api-fc3aa8a6-93bd-11ea-b832-0242ac110018 to disappear
May 11 19:31:25.043: INFO: Pod downward-api-fc3aa8a6-93bd-11ea-b832-0242ac110018 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:31:25.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sj4k7" for this suite.
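The Downward API test above creates a pod that exposes its own container's resource limits and requests as environment variables, then asserts the pod runs to Succeeded and the values appear in the dapi-container logs. A minimal sketch of that pod pattern — the name, image, and resource values here are illustrative, not the exact ones the e2e framework generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]   # print env vars, then exit (pod Succeeds)
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:          # resolved by the kubelet at pod start
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
```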
May 11 19:31:33.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:31:33.661: INFO: namespace: e2e-tests-downward-api-sj4k7, resource: bindings, ignored listing per whitelist
May 11 19:31:33.666: INFO: namespace e2e-tests-downward-api-sj4k7 deletion completed in 8.619282552s

• [SLOW TEST:16.445 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:31:33.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 11 19:31:35.312: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-q7qlv,SelfLink:/api/v1/namespaces/e2e-tests-watch-q7qlv/configmaps/e2e-watch-test-resource-version,UID:064c8064-93be-11ea-99e8-0242ac110002,ResourceVersion:10009355,Generation:0,CreationTimestamp:2020-05-11 19:31:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 11 19:31:35.312: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-q7qlv,SelfLink:/api/v1/namespaces/e2e-tests-watch-q7qlv/configmaps/e2e-watch-test-resource-version,UID:064c8064-93be-11ea-99e8-0242ac110002,ResourceVersion:10009357,Generation:0,CreationTimestamp:2020-05-11 19:31:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:31:35.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-q7qlv" for this suite.
May 11 19:31:41.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:31:41.860: INFO: namespace: e2e-tests-watch-q7qlv, resource: bindings, ignored listing per whitelist
May 11 19:31:41.862: INFO: namespace e2e-tests-watch-q7qlv deletion completed in 6.475202216s

• [SLOW TEST:8.196 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:31:41.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-0ab743e6-93be-11ea-b832-0242ac110018
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:31:50.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hjwmc" for this suite.
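The ConfigMap binary-data test above checks that both `data` (UTF-8 text) and `binaryData` (arbitrary bytes) keys are projected into a mounted volume. A hedged sketch of such a ConfigMap — the name and payload are illustrative; the test generates a unique suffixed name:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo     # illustrative name
data:
  text-key: "plain text value"    # stored as a UTF-8 string
binaryData:
  dump.bin: aGVsbG8gd29ybGQK     # arbitrary bytes, base64-encoded in the manifest
```

A pod that mounts this ConfigMap as a volume sees `dump.bin` as the decoded raw bytes on disk; that round-trip is what "binary data should be reflected in volume" verifies.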
May 11 19:32:16.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:32:16.434: INFO: namespace: e2e-tests-configmap-hjwmc, resource: bindings, ignored listing per whitelist
May 11 19:32:16.624: INFO: namespace e2e-tests-configmap-hjwmc deletion completed in 26.396695862s

• [SLOW TEST:34.762 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:32:16.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:32:25.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-kxqvv" for this suite.
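The "should not conflict" test above (note its cleanup of a secret, a configmap, and a pod) mounts multiple projected volumes in one pod and checks they coexist without clobbering each other. An illustrative pod spec under that assumption — the volume and object names here are hypothetical, not the ones the framework generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-volumes-demo              # illustrative name
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume && sleep 3600"]
    volumeMounts:
    - name: secret-volume                 # two independent mounts must not conflict
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret          # hypothetical secret name
  - name: configmap-volume
    configMap:
      name: wrapper-configmap             # hypothetical configmap name
```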
May 11 19:32:34.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:32:34.338: INFO: namespace: e2e-tests-emptydir-wrapper-kxqvv, resource: bindings, ignored listing per whitelist
May 11 19:32:34.354: INFO: namespace e2e-tests-emptydir-wrapper-kxqvv deletion completed in 8.403677347s

• [SLOW TEST:17.730 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:32:34.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
May 11 19:32:48.060: INFO: 5 pods remaining
May 11 19:32:48.060: INFO: 5 pods has nil DeletionTimestamp
May 11 19:32:48.060: INFO:
STEP: Gathering metrics
W0511 19:32:52.477244       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 19:32:52.477: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:32:52.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-8fz6v" for this suite.
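In the garbage-collector test above, half the pods carry two ownerReferences, so deleting simpletest-rc-to-be-deleted must leave them alive while their other owner (simpletest-rc-to-stay) remains. The resulting pod metadata looks roughly like this sketch — the pod name and UIDs are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod                            # illustrative name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 00000000-0000-0000-0000-000000000001     # hypothetical UID
    blockOwnerDeletion: true                      # holds up foreground deletion of this owner
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 00000000-0000-0000-0000-000000000002     # hypothetical UID
```

The garbage collector only deletes a dependent once all of its owners are gone, which is exactly what the test asserts.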
May 11 19:33:22.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:33:22.521: INFO: namespace: e2e-tests-gc-8fz6v, resource: bindings, ignored listing per whitelist
May 11 19:33:22.557: INFO: namespace e2e-tests-gc-8fz6v deletion completed in 30.075986917s

• [SLOW TEST:48.203 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 11 19:33:22.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-vqdsk
May 11 19:33:37.789: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-vqdsk
STEP: checking the pod's current state and verifying that restartCount is present
May 11 19:33:37.792: INFO: Initial restart count of pod liveness-http is 0
May 11 19:33:59.098: INFO: Restart count of pod e2e-tests-container-probe-vqdsk/liveness-http is now 1 (21.305965617s elapsed)
May 11 19:34:21.314: INFO: Restart count of pod e2e-tests-container-probe-vqdsk/liveness-http is now 2 (43.521941732s elapsed)
May 11 19:34:35.338: INFO: Restart count of pod e2e-tests-container-probe-vqdsk/liveness-http is now 3 (57.546454678s elapsed)
May 11 19:34:57.407: INFO: Restart count of pod e2e-tests-container-probe-vqdsk/liveness-http is now 4 (1m19.615407098s elapsed)
May 11 19:35:59.941: INFO: Restart count of pod e2e-tests-container-probe-vqdsk/liveness-http is now 5 (2m22.149195269s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 11 19:35:59.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-vqdsk" for this suite.
May 11 19:36:10.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 19:36:10.712: INFO: namespace: e2e-tests-container-probe-vqdsk, resource: bindings, ignored listing per whitelist
May 11 19:36:10.919: INFO: namespace e2e-tests-container-probe-vqdsk deletion completed in 10.768657774s

• [SLOW TEST:168.361 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
May 11 19:36:10.919: INFO: Running AfterSuite actions on all nodes
May 11 19:36:10.919: INFO: Running AfterSuite actions on node 1
May 11 19:36:10.919: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 8246.575 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS
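The monotonically increasing restart counts logged by the Probing container test come from a pod whose HTTP liveness probe eventually fails, forcing the kubelet to restart the container each time. A hedged sketch of such a liveness-http pod — the pod name matches the log, but the image and probe timings are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http               # name from the log; spec details assumed
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness      # test server that serves /healthz, then starts failing it
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15       # illustrative timings
      periodSeconds: 5
      failureThreshold: 3
```

Once the probe fails failureThreshold times in a row, the kubelet restarts the container and increments status.containerStatuses[].restartCount; the count never decreases, which is the property the test asserts.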