I0519 12:55:44.162924 6 e2e.go:243] Starting e2e run "69bc9db8-c6b7-4504-81b1-cd1210366734" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589892943 - Will randomize all specs
Will run 215 of 4412 specs

May 19 12:55:44.356: INFO: >>> kubeConfig: /root/.kube/config
May 19 12:55:44.360: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 19 12:55:44.378: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 19 12:55:44.409: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 19 12:55:44.409: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 19 12:55:44.409: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 19 12:55:44.428: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 19 12:55:44.428: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 19 12:55:44.428: INFO: e2e test version: v1.15.11
May 19 12:55:44.429: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 12:55:44.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
May 19 12:55:44.491: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
May 19 12:55:44.500: INFO: Waiting up to 5m0s for pod "client-containers-87bf227b-1f7c-4614-a2be-1973ffa4cc2a" in namespace "containers-9099" to be "success or failure"
May 19 12:55:44.503: INFO: Pod "client-containers-87bf227b-1f7c-4614-a2be-1973ffa4cc2a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.237341ms
May 19 12:55:46.507: INFO: Pod "client-containers-87bf227b-1f7c-4614-a2be-1973ffa4cc2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007272401s
May 19 12:55:48.511: INFO: Pod "client-containers-87bf227b-1f7c-4614-a2be-1973ffa4cc2a": Phase="Running", Reason="", readiness=true. Elapsed: 4.010967026s
May 19 12:55:50.515: INFO: Pod "client-containers-87bf227b-1f7c-4614-a2be-1973ffa4cc2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01556328s
STEP: Saw pod success
May 19 12:55:50.515: INFO: Pod "client-containers-87bf227b-1f7c-4614-a2be-1973ffa4cc2a" satisfied condition "success or failure"
May 19 12:55:50.518: INFO: Trying to get logs from node iruya-worker2 pod client-containers-87bf227b-1f7c-4614-a2be-1973ffa4cc2a container test-container:
STEP: delete the pod
May 19 12:55:50.566: INFO: Waiting for pod client-containers-87bf227b-1f7c-4614-a2be-1973ffa4cc2a to disappear
May 19 12:55:50.569: INFO: Pod client-containers-87bf227b-1f7c-4614-a2be-1973ffa4cc2a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 12:55:50.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9099" for this suite.
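The pod this spec creates sets `args` to replace the image's default CMD (the "docker cmd") while leaving the ENTRYPOINT alone. The log does not print the manifest, so the name, image, and argument values below are illustrative stand-ins; only the shape of the spec matches the test:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # the real run uses a generated UID suffix
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # stand-in; the test's actual image is not shown in this log
    args: ["echo", "override", "arguments"] # args replace the image's CMD; setting command would replace ENTRYPOINT
```

The framework then waits for the pod to reach Succeeded (its "success or failure" condition) and reads the container log, which is what the timestamped records above trace.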
May 19 12:55:56.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:55:56.694: INFO: namespace containers-9099 deletion completed in 6.121795388s
• [SLOW TEST:12.265 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 12:55:56.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
May 19 12:55:56.733: INFO: Waiting up to 5m0s for pod "pod-ff637546-27db-4c23-bdf3-9cc8862fe8d4" in namespace "emptydir-5723" to be "success or failure"
May 19 12:55:56.751: INFO: Pod "pod-ff637546-27db-4c23-bdf3-9cc8862fe8d4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.149984ms
May 19 12:55:58.893: INFO: Pod "pod-ff637546-27db-4c23-bdf3-9cc8862fe8d4": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.160147776s
May 19 12:56:00.898: INFO: Pod "pod-ff637546-27db-4c23-bdf3-9cc8862fe8d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.164983881s
STEP: Saw pod success
May 19 12:56:00.898: INFO: Pod "pod-ff637546-27db-4c23-bdf3-9cc8862fe8d4" satisfied condition "success or failure"
May 19 12:56:00.901: INFO: Trying to get logs from node iruya-worker pod pod-ff637546-27db-4c23-bdf3-9cc8862fe8d4 container test-container:
STEP: delete the pod
May 19 12:56:01.012: INFO: Waiting for pod pod-ff637546-27db-4c23-bdf3-9cc8862fe8d4 to disappear
May 19 12:56:01.022: INFO: Pod pod-ff637546-27db-4c23-bdf3-9cc8862fe8d4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 12:56:01.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5723" for this suite.
May 19 12:56:07.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:56:07.154: INFO: namespace emptydir-5723 deletion completed in 6.127046473s
• [SLOW TEST:10.459 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 12:56:07.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
May 19 12:56:07.240: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 12:56:07.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7489" for this suite.
May 19 12:56:13.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:56:13.435: INFO: namespace kubectl-7489 deletion completed in 6.089412682s
• [SLOW TEST:6.281 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 12:56:13.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
May 19 12:56:13.522: INFO: Waiting up to 5m0s for pod "pod-6cbb0051-59d5-4c9c-a8f8-704589ffa388" in namespace "emptydir-5330" to be "success or failure"
May 19 12:56:13.526: INFO: Pod "pod-6cbb0051-59d5-4c9c-a8f8-704589ffa388": Phase="Pending", Reason="", readiness=false. Elapsed: 3.59458ms
May 19 12:56:15.575: INFO: Pod "pod-6cbb0051-59d5-4c9c-a8f8-704589ffa388": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053495918s
May 19 12:56:17.579: INFO: Pod "pod-6cbb0051-59d5-4c9c-a8f8-704589ffa388": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057011688s
STEP: Saw pod success
May 19 12:56:17.579: INFO: Pod "pod-6cbb0051-59d5-4c9c-a8f8-704589ffa388" satisfied condition "success or failure"
May 19 12:56:17.581: INFO: Trying to get logs from node iruya-worker pod pod-6cbb0051-59d5-4c9c-a8f8-704589ffa388 container test-container:
STEP: delete the pod
May 19 12:56:17.622: INFO: Waiting for pod pod-6cbb0051-59d5-4c9c-a8f8-704589ffa388 to disappear
May 19 12:56:17.634: INFO: Pod pod-6cbb0051-59d5-4c9c-a8f8-704589ffa388 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 12:56:17.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5330" for this suite.
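The EmptyDir specs above each create a short-lived pod that mounts an emptyDir volume on the node's default medium, writes a file with the mode named in the spec title (0644 here, 0777 earlier), and checks the resulting permissions as a non-root user. The log never prints the manifest, so the name, image, and command below are illustrative; only the volume/securityContext shape reflects what the test exercises:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example   # the real run uses a generated name like pod-6cbb0051-...
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # stand-in for the framework's own mount-test image
    securityContext:
      runAsUser: 1000                       # the "non-root" part of the spec name
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                            # "default" medium, i.e. node disk rather than medium: Memory
```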
May 19 12:56:23.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:56:23.726: INFO: namespace emptydir-5330 deletion completed in 6.088475868s
• [SLOW TEST:10.290 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 12:56:23.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-1189
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1189 to expose endpoints map[]
May 19 12:56:23.863: INFO: Get endpoints failed (19.369541ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 19 12:56:24.866: INFO: successfully validated that service multi-endpoint-test in namespace services-1189 exposes endpoints map[] (1.022964279s elapsed)
STEP: Creating pod pod1 in namespace services-1189
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1189 to expose endpoints map[pod1:[100]]
May 19 12:56:27.950: INFO: successfully validated that service multi-endpoint-test in namespace services-1189 exposes endpoints map[pod1:[100]] (3.07628314s elapsed)
STEP: Creating pod pod2 in namespace services-1189
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1189 to expose endpoints map[pod1:[100] pod2:[101]]
May 19 12:56:32.043: INFO: successfully validated that service multi-endpoint-test in namespace services-1189 exposes endpoints map[pod1:[100] pod2:[101]] (4.089626209s elapsed)
STEP: Deleting pod pod1 in namespace services-1189
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1189 to expose endpoints map[pod2:[101]]
May 19 12:56:32.079: INFO: successfully validated that service multi-endpoint-test in namespace services-1189 exposes endpoints map[pod2:[101]] (31.275939ms elapsed)
STEP: Deleting pod pod2 in namespace services-1189
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1189 to expose endpoints map[]
May 19 12:56:33.119: INFO: successfully validated that service multi-endpoint-test in namespace services-1189 exposes endpoints map[] (1.036492551s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 12:56:33.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1189" for this suite.
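The endpoint maps logged above (pod1:[100], pod2:[101]) are produced by a Service with two ports whose targetPorts are served by different pods. A hedged reconstruction: the target ports 100 and 101 and the service name come from the log, but the selector labels, port names, and front-end port numbers are not shown and are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
  namespace: services-1189
spec:
  selector:
    name: multi-endpoint-test   # assumed label; the log does not print the selector
  ports:
  - name: portname1             # assumed port names
    port: 80
    targetPort: 100             # served only by pod1, hence the endpoint map pod1:[100]
  - name: portname2
    port: 81
    targetPort: 101             # served only by pod2, hence pod2:[101]
```

As each pod is created or deleted, the endpoints controller adds or removes only the addresses for the ports that pod actually serves, which is what the successive map[] validations trace.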
May 19 12:56:55.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:56:55.256: INFO: namespace services-1189 deletion completed in 22.092953391s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:31.530 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 12:56:55.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 19 12:56:55.333: INFO: PodSpec: initContainers in spec.initContainers
May 19 12:57:44.940: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b204de39-8f93-42b3-8bd6-ae512f5fe366", GenerateName:"", Namespace:"init-container-9779",
SelfLink:"/api/v1/namespaces/init-container-9779/pods/pod-init-b204de39-8f93-42b3-8bd6-ae512f5fe366", UID:"615a13c8-9f5f-4413-8941-354161fc88bb", ResourceVersion:"11749774", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725489815, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"333297847"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-klvrz", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00200bd80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-klvrz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-klvrz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-klvrz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00235a7d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0021cf080), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc00235a860)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00235a880)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00235a888), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00235a88c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725489815, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725489815, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725489815, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725489815, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.6", StartTime:(*v1.Time)(0xc002c1bb40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002c1bbc0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00207af50)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d329f3c8656f8addd0af82bd8e58d73d92933c71364f026c78acfde90aa76c5b"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c1bc00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c1bb80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 12:57:44.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9779" for this suite.
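The PodSpec dumped above corresponds to roughly this manifest (fields taken from the struct dump; the generated service-account volume and status plumbing are omitted). Because init1 runs /bin/false under restartPolicy: Always, the kubelet keeps restarting it (RestartCount:3 in the dump), so init2 stays Waiting and the app container run1 never starts, which is exactly what the spec asserts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-b204de39-8f93-42b3-8bd6-ae512f5fe366
  namespace: init-container-9779
  labels:
    name: foo
    time: "333297847"
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]        # always fails, blocking everything after it
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                     # equal requests and limits => QOSClass "Guaranteed", as in the dump
      limits:
        cpu: 100m
        memory: "52428800"
      requests:
        cpu: 100m
        memory: "52428800"
```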
May 19 12:58:06.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:58:07.084: INFO: namespace init-container-9779 deletion completed in 22.13809761s
• [SLOW TEST:71.827 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 12:58:07.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 19 12:58:07.211: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"af1066b0-9674-4cc2-b7cf-cdb62a95ba2e", Controller:(*bool)(0xc00289f782), BlockOwnerDeletion:(*bool)(0xc00289f783)}}
May 19 12:58:07.228: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"0aeb7f0c-ff2e-4520-993b-949a633d7662", Controller:(*bool)(0xc0023df4a2), BlockOwnerDeletion:(*bool)(0xc0023df4a3)}}
May 19 12:58:07.281: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1",
Kind:"Pod", Name:"pod2", UID:"ae27f791-cbfa-43b8-8bca-924e51c92a10", Controller:(*bool)(0xc0027116fa), BlockOwnerDeletion:(*bool)(0xc0027116fb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 12:58:12.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5023" for this suite.
May 19 12:58:18.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:58:18.423: INFO: namespace gc-5023 deletion completed in 6.102778064s
• [SLOW TEST:11.339 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Probing container
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 12:58:18.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-7e876485-d88f-4bf7-b1ca-27a299be3ff1 in namespace container-probe-7391
May 19 12:58:22.523: INFO: Started pod busybox-7e876485-d88f-4bf7-b1ca-27a299be3ff1 in namespace container-probe-7391
STEP: checking the pod's current state and verifying that restartCount is present
May 19 12:58:22.526: INFO: Initial restart count of pod busybox-7e876485-d88f-4bf7-b1ca-27a299be3ff1 is 0
May 19 12:59:14.660: INFO: Restart count of pod container-probe-7391/busybox-7e876485-d88f-4bf7-b1ca-27a299be3ff1 is now 1 (52.133561134s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 12:59:14.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7391" for this suite.
May 19 12:59:20.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 12:59:20.847: INFO: namespace container-probe-7391 deletion completed in 6.13213433s
• [SLOW TEST:62.424 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 12:59:20.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
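For reference, the restart recorded in the container-probe spec above (restart count 0 → 1 after ~52s) is what an exec liveness probe produces once its probed file disappears. The log does not show the manifest, so the name, image, command, and timings below are illustrative; only the `cat /tmp/health` probe is taken from the spec title:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-example   # the run uses a generated name like busybox-7e876485-...
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    # create the probed file, then remove it so later probes fail and the kubelet restarts the container
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # the exec probe named in the spec title
      initialDelaySeconds: 15
      periodSeconds: 5
```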
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 19 12:59:20.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-183' May 19 12:59:23.629: INFO: stderr: "" May 19 12:59:23.629: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 19 12:59:23.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-183' May 19 12:59:23.773: INFO: stderr: "" May 19 12:59:23.773: INFO: stdout: "update-demo-nautilus-9z4vd update-demo-nautilus-x4pqt " May 19 12:59:23.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9z4vd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-183' May 19 12:59:23.861: INFO: stderr: "" May 19 12:59:23.861: INFO: stdout: "" May 19 12:59:23.861: INFO: update-demo-nautilus-9z4vd is created but not running May 19 12:59:28.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-183' May 19 12:59:28.970: INFO: stderr: "" May 19 12:59:28.970: INFO: stdout: "update-demo-nautilus-9z4vd update-demo-nautilus-x4pqt " May 19 12:59:28.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9z4vd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-183' May 19 12:59:29.071: INFO: stderr: "" May 19 12:59:29.071: INFO: stdout: "true" May 19 12:59:29.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9z4vd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-183' May 19 12:59:29.166: INFO: stderr: "" May 19 12:59:29.166: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 12:59:29.166: INFO: validating pod update-demo-nautilus-9z4vd May 19 12:59:29.170: INFO: got data: { "image": "nautilus.jpg" } May 19 12:59:29.171: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 12:59:29.171: INFO: update-demo-nautilus-9z4vd is verified up and running May 19 12:59:29.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x4pqt -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-183' May 19 12:59:29.269: INFO: stderr: "" May 19 12:59:29.269: INFO: stdout: "true" May 19 12:59:29.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x4pqt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-183' May 19 12:59:29.360: INFO: stderr: "" May 19 12:59:29.360: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 12:59:29.360: INFO: validating pod update-demo-nautilus-x4pqt May 19 12:59:29.372: INFO: got data: { "image": "nautilus.jpg" } May 19 12:59:29.372: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 12:59:29.372: INFO: update-demo-nautilus-x4pqt is verified up and running STEP: scaling down the replication controller May 19 12:59:29.374: INFO: scanned /root for discovery docs: May 19 12:59:29.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-183' May 19 12:59:30.510: INFO: stderr: "" May 19 12:59:30.510: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 19 12:59:30.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-183' May 19 12:59:30.647: INFO: stderr: "" May 19 12:59:30.647: INFO: stdout: "update-demo-nautilus-9z4vd update-demo-nautilus-x4pqt " STEP: Replicas for name=update-demo: expected=1 actual=2 May 19 12:59:35.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-183' May 19 12:59:35.747: INFO: stderr: "" May 19 12:59:35.747: INFO: stdout: "update-demo-nautilus-9z4vd update-demo-nautilus-x4pqt " STEP: Replicas for name=update-demo: expected=1 actual=2 May 19 12:59:40.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-183' May 19 12:59:40.863: INFO: stderr: "" May 19 12:59:40.863: INFO: stdout: "update-demo-nautilus-9z4vd update-demo-nautilus-x4pqt " STEP: Replicas for name=update-demo: expected=1 actual=2 May 19 12:59:45.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-183' May 19 12:59:45.963: INFO: stderr: "" May 19 12:59:45.963: INFO: stdout: "update-demo-nautilus-x4pqt " May 19 12:59:45.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x4pqt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-183' May 19 12:59:46.048: INFO: stderr: "" May 19 12:59:46.048: INFO: stdout: "true" May 19 12:59:46.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x4pqt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-183' May 19 12:59:46.138: INFO: stderr: "" May 19 12:59:46.138: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 12:59:46.138: INFO: validating pod update-demo-nautilus-x4pqt May 19 12:59:46.141: INFO: got data: { "image": "nautilus.jpg" } May 19 12:59:46.141: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 12:59:46.141: INFO: update-demo-nautilus-x4pqt is verified up and running STEP: scaling up the replication controller May 19 12:59:46.143: INFO: scanned /root for discovery docs: May 19 12:59:46.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-183' May 19 12:59:47.268: INFO: stderr: "" May 19 12:59:47.268: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 19 12:59:47.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-183' May 19 12:59:47.360: INFO: stderr: "" May 19 12:59:47.360: INFO: stdout: "update-demo-nautilus-2pcmt update-demo-nautilus-x4pqt " May 19 12:59:47.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2pcmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-183' May 19 12:59:47.437: INFO: stderr: "" May 19 12:59:47.437: INFO: stdout: "" May 19 12:59:47.437: INFO: update-demo-nautilus-2pcmt is created but not running May 19 12:59:52.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-183' May 19 12:59:52.540: INFO: stderr: "" May 19 12:59:52.540: INFO: stdout: "update-demo-nautilus-2pcmt update-demo-nautilus-x4pqt " May 19 12:59:52.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2pcmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-183' May 19 12:59:52.638: INFO: stderr: "" May 19 12:59:52.638: INFO: stdout: "true" May 19 12:59:52.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2pcmt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-183' May 19 12:59:52.726: INFO: stderr: "" May 19 12:59:52.726: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 12:59:52.726: INFO: validating pod update-demo-nautilus-2pcmt May 19 12:59:52.729: INFO: got data: { "image": "nautilus.jpg" } May 19 12:59:52.729: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 12:59:52.729: INFO: update-demo-nautilus-2pcmt is verified up and running May 19 12:59:52.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x4pqt -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-183' May 19 12:59:52.820: INFO: stderr: "" May 19 12:59:52.820: INFO: stdout: "true" May 19 12:59:52.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x4pqt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-183' May 19 12:59:52.915: INFO: stderr: "" May 19 12:59:52.915: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 12:59:52.915: INFO: validating pod update-demo-nautilus-x4pqt May 19 12:59:52.919: INFO: got data: { "image": "nautilus.jpg" } May 19 12:59:52.919: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 12:59:52.919: INFO: update-demo-nautilus-x4pqt is verified up and running STEP: using delete to clean up resources May 19 12:59:52.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-183' May 19 12:59:53.026: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 19 12:59:53.026: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 19 12:59:53.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-183' May 19 12:59:53.128: INFO: stderr: "No resources found.\n" May 19 12:59:53.128: INFO: stdout: "" May 19 12:59:53.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-183 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 19 12:59:53.283: INFO: stderr: "" May 19 12:59:53.283: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 12:59:53.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-183" for this suite. 
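The kubectl-183 run above creates the replication controller from stdin (`kubectl create -f -`), so the manifest itself never appears in the log. A hypothetical reconstruction consistent with the logged label (`name=update-demo`) and image is sketched below; all other fields are assumptions:

```yaml
# Hypothetical reconstruction of the update-demo RC; only the label
# selector and image are taken from the log, the rest is assumed.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

The scale steps in the log then map directly to `kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m` followed by `--replicas=2`, with the test polling `kubectl get pods -l name=update-demo` until the observed pod count matches the requested replicas.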
May 19 12:59:59.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 12:59:59.447: INFO: namespace kubectl-183 deletion completed in 6.140143268s • [SLOW TEST:38.599 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 12:59:59.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-172de652-43c0-4f25-8575-b5928b140868 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 12:59:59.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4457" for this suite. 
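The secrets-4457 test above is a negative case: it passes because API-server validation refuses a Secret whose data map contains an empty key, so no pod is ever created. A manifest of the rejected shape might look like this (illustrative; the name is hypothetical):

```yaml
# Illustrative: the empty data key ("") fails validation,
# so the apiserver rejects this Secret at create time.
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo
data:
  "": dmFsdWU=
```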
May 19 13:00:05.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:00:05.598: INFO: namespace secrets-4457 deletion completed in 6.092195291s • [SLOW TEST:6.151 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:00:05.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-118 STEP: creating a selector STEP: Creating the service pods in kubernetes May 19 13:00:05.651: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 19 13:00:31.802: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.132 8081 | grep -v '^\s*$'] Namespace:pod-network-test-118 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 13:00:31.803: 
INFO: >>> kubeConfig: /root/.kube/config I0519 13:00:31.839487 6 log.go:172] (0xc001c58420) (0xc001633ae0) Create stream I0519 13:00:31.839531 6 log.go:172] (0xc001c58420) (0xc001633ae0) Stream added, broadcasting: 1 I0519 13:00:31.842308 6 log.go:172] (0xc001c58420) Reply frame received for 1 I0519 13:00:31.842349 6 log.go:172] (0xc001c58420) (0xc001c78820) Create stream I0519 13:00:31.842361 6 log.go:172] (0xc001c58420) (0xc001c78820) Stream added, broadcasting: 3 I0519 13:00:31.843587 6 log.go:172] (0xc001c58420) Reply frame received for 3 I0519 13:00:31.843634 6 log.go:172] (0xc001c58420) (0xc001633b80) Create stream I0519 13:00:31.843651 6 log.go:172] (0xc001c58420) (0xc001633b80) Stream added, broadcasting: 5 I0519 13:00:31.844898 6 log.go:172] (0xc001c58420) Reply frame received for 5 I0519 13:00:32.958873 6 log.go:172] (0xc001c58420) Data frame received for 5 I0519 13:00:32.958950 6 log.go:172] (0xc001633b80) (5) Data frame handling I0519 13:00:32.958997 6 log.go:172] (0xc001c58420) Data frame received for 3 I0519 13:00:32.959034 6 log.go:172] (0xc001c78820) (3) Data frame handling I0519 13:00:32.959075 6 log.go:172] (0xc001c78820) (3) Data frame sent I0519 13:00:32.959118 6 log.go:172] (0xc001c58420) Data frame received for 3 I0519 13:00:32.959151 6 log.go:172] (0xc001c78820) (3) Data frame handling I0519 13:00:32.960861 6 log.go:172] (0xc001c58420) Data frame received for 1 I0519 13:00:32.960890 6 log.go:172] (0xc001633ae0) (1) Data frame handling I0519 13:00:32.960910 6 log.go:172] (0xc001633ae0) (1) Data frame sent I0519 13:00:32.960942 6 log.go:172] (0xc001c58420) (0xc001633ae0) Stream removed, broadcasting: 1 I0519 13:00:32.961104 6 log.go:172] (0xc001c58420) Go away received I0519 13:00:32.961385 6 log.go:172] (0xc001c58420) (0xc001633ae0) Stream removed, broadcasting: 1 I0519 13:00:32.961415 6 log.go:172] (0xc001c58420) (0xc001c78820) Stream removed, broadcasting: 3 I0519 13:00:32.961425 6 log.go:172] (0xc001c58420) (0xc001633b80) Stream removed, 
broadcasting: 5 May 19 13:00:32.961: INFO: Found all expected endpoints: [netserver-0] May 19 13:00:32.964: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.11 8081 | grep -v '^\s*$'] Namespace:pod-network-test-118 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 13:00:32.964: INFO: >>> kubeConfig: /root/.kube/config I0519 13:00:32.994583 6 log.go:172] (0xc0023a6420) (0xc0010159a0) Create stream I0519 13:00:32.994607 6 log.go:172] (0xc0023a6420) (0xc0010159a0) Stream added, broadcasting: 1 I0519 13:00:32.996492 6 log.go:172] (0xc0023a6420) Reply frame received for 1 I0519 13:00:32.996538 6 log.go:172] (0xc0023a6420) (0xc001633c20) Create stream I0519 13:00:32.996557 6 log.go:172] (0xc0023a6420) (0xc001633c20) Stream added, broadcasting: 3 I0519 13:00:32.997587 6 log.go:172] (0xc0023a6420) Reply frame received for 3 I0519 13:00:32.997636 6 log.go:172] (0xc0023a6420) (0xc001633cc0) Create stream I0519 13:00:32.997659 6 log.go:172] (0xc0023a6420) (0xc001633cc0) Stream added, broadcasting: 5 I0519 13:00:32.998690 6 log.go:172] (0xc0023a6420) Reply frame received for 5 I0519 13:00:34.073479 6 log.go:172] (0xc0023a6420) Data frame received for 3 I0519 13:00:34.073525 6 log.go:172] (0xc001633c20) (3) Data frame handling I0519 13:00:34.073545 6 log.go:172] (0xc0023a6420) Data frame received for 5 I0519 13:00:34.073582 6 log.go:172] (0xc001633cc0) (5) Data frame handling I0519 13:00:34.073622 6 log.go:172] (0xc001633c20) (3) Data frame sent I0519 13:00:34.073634 6 log.go:172] (0xc0023a6420) Data frame received for 3 I0519 13:00:34.073661 6 log.go:172] (0xc001633c20) (3) Data frame handling I0519 13:00:34.075606 6 log.go:172] (0xc0023a6420) Data frame received for 1 I0519 13:00:34.075632 6 log.go:172] (0xc0010159a0) (1) Data frame handling I0519 13:00:34.075645 6 log.go:172] (0xc0010159a0) (1) Data frame sent I0519 13:00:34.075659 6 log.go:172] (0xc0023a6420) 
(0xc0010159a0) Stream removed, broadcasting: 1 I0519 13:00:34.075671 6 log.go:172] (0xc0023a6420) Go away received I0519 13:00:34.075882 6 log.go:172] (0xc0023a6420) (0xc0010159a0) Stream removed, broadcasting: 1 I0519 13:00:34.075894 6 log.go:172] (0xc0023a6420) (0xc001633c20) Stream removed, broadcasting: 3 I0519 13:00:34.075903 6 log.go:172] (0xc0023a6420) (0xc001633cc0) Stream removed, broadcasting: 5 May 19 13:00:34.075: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:00:34.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-118" for this suite. May 19 13:00:56.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:00:56.184: INFO: namespace pod-network-test-118 deletion completed in 22.104474507s • [SLOW TEST:50.585 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:00:56.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: 
Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-3356/configmap-test-09fcb382-8dcf-4955-8f2d-2de9d999f183 STEP: Creating a pod to test consume configMaps May 19 13:00:56.298: INFO: Waiting up to 5m0s for pod "pod-configmaps-b80db151-5897-42e5-bde1-a8de27d154c5" in namespace "configmap-3356" to be "success or failure" May 19 13:00:56.325: INFO: Pod "pod-configmaps-b80db151-5897-42e5-bde1-a8de27d154c5": Phase="Pending", Reason="", readiness=false. Elapsed: 27.27664ms May 19 13:00:58.329: INFO: Pod "pod-configmaps-b80db151-5897-42e5-bde1-a8de27d154c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031259113s May 19 13:01:00.333: INFO: Pod "pod-configmaps-b80db151-5897-42e5-bde1-a8de27d154c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034908508s STEP: Saw pod success May 19 13:01:00.333: INFO: Pod "pod-configmaps-b80db151-5897-42e5-bde1-a8de27d154c5" satisfied condition "success or failure" May 19 13:01:00.335: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-b80db151-5897-42e5-bde1-a8de27d154c5 container env-test: STEP: delete the pod May 19 13:01:00.355: INFO: Waiting for pod pod-configmaps-b80db151-5897-42e5-bde1-a8de27d154c5 to disappear May 19 13:01:00.374: INFO: Pod pod-configmaps-b80db151-5897-42e5-bde1-a8de27d154c5 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:01:00.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3356" for this suite. 
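The configmap-3356 test above consumes a ConfigMap through a container environment variable and asserts on the pod's output. A minimal sketch of that pattern, with hypothetical names, is:

```yaml
# Illustrative: expose a ConfigMap entry to a container as an env var.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-env-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1
```

The pod runs to completion and its logs (`kubectl logs pod-configmap-env`) include the injected variable, which is the "success or failure" condition the framework waits on.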
May 19 13:01:06.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:01:06.468: INFO: namespace configmap-3356 deletion completed in 6.090557899s • [SLOW TEST:10.283 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:01:06.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 19 13:01:10.551: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 
13:01:10.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4182" for this suite. May 19 13:01:16.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:01:16.695: INFO: namespace container-runtime-4182 deletion completed in 6.121573386s • [SLOW TEST:10.227 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:01:16.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args May 19 13:01:16.758: INFO: Waiting up to 5m0s for pod 
"var-expansion-5564061e-82e0-4bc2-9fbc-9718963efa9c" in namespace "var-expansion-7540" to be "success or failure" May 19 13:01:16.762: INFO: Pod "var-expansion-5564061e-82e0-4bc2-9fbc-9718963efa9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.766365ms May 19 13:01:18.766: INFO: Pod "var-expansion-5564061e-82e0-4bc2-9fbc-9718963efa9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00828942s May 19 13:01:20.771: INFO: Pod "var-expansion-5564061e-82e0-4bc2-9fbc-9718963efa9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012805214s STEP: Saw pod success May 19 13:01:20.771: INFO: Pod "var-expansion-5564061e-82e0-4bc2-9fbc-9718963efa9c" satisfied condition "success or failure" May 19 13:01:20.775: INFO: Trying to get logs from node iruya-worker pod var-expansion-5564061e-82e0-4bc2-9fbc-9718963efa9c container dapi-container: STEP: delete the pod May 19 13:01:20.802: INFO: Waiting for pod var-expansion-5564061e-82e0-4bc2-9fbc-9718963efa9c to disappear May 19 13:01:20.806: INFO: Pod var-expansion-5564061e-82e0-4bc2-9fbc-9718963efa9c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:01:20.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7540" for this suite. 
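The var-expansion-7540 test above checks that `$(VAR)` references in a container's args are expanded from the pod's environment by the kubelet. A minimal sketch under assumed names:

```yaml
# Illustrative: $(MESSAGE) in command/args is substituted with the
# value of the MESSAGE env var before the container starts.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $(MESSAGE)"]
    env:
    - name: MESSAGE
      value: "hello from var expansion"
```

Note the `$(...)` syntax: it is resolved by Kubernetes itself, not by the shell, which is why it also works for images without a shell entrypoint.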
May 19 13:01:26.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:01:26.893: INFO: namespace var-expansion-7540 deletion completed in 6.084450006s • [SLOW TEST:10.197 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:01:26.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-7f076086-3a4a-4258-b849-b99d2079ce9b STEP: Creating a pod to test consume secrets May 19 13:01:26.970: INFO: Waiting up to 5m0s for pod "pod-secrets-b6ce64b1-130c-478c-a2a7-3e932e8d7307" in namespace "secrets-6192" to be "success or failure" May 19 13:01:26.989: INFO: Pod "pod-secrets-b6ce64b1-130c-478c-a2a7-3e932e8d7307": Phase="Pending", Reason="", readiness=false. Elapsed: 18.808628ms May 19 13:01:28.993: INFO: Pod "pod-secrets-b6ce64b1-130c-478c-a2a7-3e932e8d7307": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022283938s May 19 13:01:30.997: INFO: Pod "pod-secrets-b6ce64b1-130c-478c-a2a7-3e932e8d7307": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026877063s STEP: Saw pod success May 19 13:01:30.997: INFO: Pod "pod-secrets-b6ce64b1-130c-478c-a2a7-3e932e8d7307" satisfied condition "success or failure" May 19 13:01:31.001: INFO: Trying to get logs from node iruya-worker pod pod-secrets-b6ce64b1-130c-478c-a2a7-3e932e8d7307 container secret-volume-test: STEP: delete the pod May 19 13:01:31.039: INFO: Waiting for pod pod-secrets-b6ce64b1-130c-478c-a2a7-3e932e8d7307 to disappear May 19 13:01:31.046: INFO: Pod pod-secrets-b6ce64b1-130c-478c-a2a7-3e932e8d7307 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:01:31.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6192" for this suite. May 19 13:01:37.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:01:37.137: INFO: namespace secrets-6192 deletion completed in 6.087842981s • [SLOW TEST:10.244 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 
13:01:37.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-9952a5fb-f40e-4587-b3b8-aeb5ad729a8d STEP: Creating a pod to test consume configMaps May 19 13:01:37.273: INFO: Waiting up to 5m0s for pod "pod-configmaps-7e918977-c646-40b8-9ce9-dd8ceb151702" in namespace "configmap-1505" to be "success or failure" May 19 13:01:37.293: INFO: Pod "pod-configmaps-7e918977-c646-40b8-9ce9-dd8ceb151702": Phase="Pending", Reason="", readiness=false. Elapsed: 19.250404ms May 19 13:01:39.296: INFO: Pod "pod-configmaps-7e918977-c646-40b8-9ce9-dd8ceb151702": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022897257s May 19 13:01:41.301: INFO: Pod "pod-configmaps-7e918977-c646-40b8-9ce9-dd8ceb151702": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027660532s STEP: Saw pod success May 19 13:01:41.301: INFO: Pod "pod-configmaps-7e918977-c646-40b8-9ce9-dd8ceb151702" satisfied condition "success or failure" May 19 13:01:41.304: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-7e918977-c646-40b8-9ce9-dd8ceb151702 container configmap-volume-test: STEP: delete the pod May 19 13:01:41.348: INFO: Waiting for pod pod-configmaps-7e918977-c646-40b8-9ce9-dd8ceb151702 to disappear May 19 13:01:41.355: INFO: Pod pod-configmaps-7e918977-c646-40b8-9ce9-dd8ceb151702 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:01:41.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1505" for this suite. 
May 19 13:01:47.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:01:47.483: INFO: namespace configmap-1505 deletion completed in 6.124298846s • [SLOW TEST:10.345 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:01:47.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:01:51.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8722" for this suite. 
May 19 13:02:33.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:02:33.710: INFO: namespace kubelet-test-8722 deletion completed in 42.10601076s • [SLOW TEST:46.227 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:02:33.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode May 19 13:02:33.795: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9783" to be "success or failure" May 19 13:02:33.813: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.794799ms May 19 13:02:35.818: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022073701s May 19 13:02:37.821: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02532231s May 19 13:02:39.825: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029899385s STEP: Saw pod success May 19 13:02:39.825: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 19 13:02:39.829: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 19 13:02:39.862: INFO: Waiting for pod pod-host-path-test to disappear May 19 13:02:39.877: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:02:39.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9783" for this suite. May 19 13:02:45.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:02:45.972: INFO: namespace hostpath-9783 deletion completed in 6.089893953s • [SLOW TEST:12.262 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:02:45.972: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5832 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 19 13:02:46.109: INFO: Found 0 stateful pods, waiting for 3 May 19 13:02:56.115: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 13:02:56.115: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 13:02:56.115: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false May 19 13:03:06.114: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 13:03:06.114: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 13:03:06.114: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 19 13:03:06.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5832 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 13:03:06.400: INFO: stderr: "I0519 13:03:06.244863 586 log.go:172] (0xc00013ef20) (0xc000650b40) Create stream\nI0519 13:03:06.244913 586 log.go:172] (0xc00013ef20) (0xc000650b40) Stream added, broadcasting: 1\nI0519 13:03:06.247669 586 log.go:172] (0xc00013ef20) Reply frame received for 1\nI0519 
13:03:06.247736 586 log.go:172] (0xc00013ef20) (0xc000862000) Create stream\nI0519 13:03:06.247757 586 log.go:172] (0xc00013ef20) (0xc000862000) Stream added, broadcasting: 3\nI0519 13:03:06.248863 586 log.go:172] (0xc00013ef20) Reply frame received for 3\nI0519 13:03:06.248905 586 log.go:172] (0xc00013ef20) (0xc0009c6000) Create stream\nI0519 13:03:06.248921 586 log.go:172] (0xc00013ef20) (0xc0009c6000) Stream added, broadcasting: 5\nI0519 13:03:06.250012 586 log.go:172] (0xc00013ef20) Reply frame received for 5\nI0519 13:03:06.334617 586 log.go:172] (0xc00013ef20) Data frame received for 5\nI0519 13:03:06.334647 586 log.go:172] (0xc0009c6000) (5) Data frame handling\nI0519 13:03:06.334668 586 log.go:172] (0xc0009c6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0519 13:03:06.392608 586 log.go:172] (0xc00013ef20) Data frame received for 3\nI0519 13:03:06.392766 586 log.go:172] (0xc000862000) (3) Data frame handling\nI0519 13:03:06.392814 586 log.go:172] (0xc000862000) (3) Data frame sent\nI0519 13:03:06.392840 586 log.go:172] (0xc00013ef20) Data frame received for 3\nI0519 13:03:06.392865 586 log.go:172] (0xc000862000) (3) Data frame handling\nI0519 13:03:06.392884 586 log.go:172] (0xc00013ef20) Data frame received for 5\nI0519 13:03:06.392902 586 log.go:172] (0xc0009c6000) (5) Data frame handling\nI0519 13:03:06.394897 586 log.go:172] (0xc00013ef20) Data frame received for 1\nI0519 13:03:06.394938 586 log.go:172] (0xc000650b40) (1) Data frame handling\nI0519 13:03:06.394961 586 log.go:172] (0xc000650b40) (1) Data frame sent\nI0519 13:03:06.394978 586 log.go:172] (0xc00013ef20) (0xc000650b40) Stream removed, broadcasting: 1\nI0519 13:03:06.395380 586 log.go:172] (0xc00013ef20) (0xc000650b40) Stream removed, broadcasting: 1\nI0519 13:03:06.395400 586 log.go:172] (0xc00013ef20) (0xc000862000) Stream removed, broadcasting: 3\nI0519 13:03:06.395412 586 log.go:172] (0xc00013ef20) (0xc0009c6000) Stream removed, broadcasting: 5\n" May 19 
13:03:06.400: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 13:03:06.400: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 19 13:03:16.434: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 19 13:03:26.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5832 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 13:03:26.675: INFO: stderr: "I0519 13:03:26.579490 608 log.go:172] (0xc0001166e0) (0xc000208e60) Create stream\nI0519 13:03:26.579566 608 log.go:172] (0xc0001166e0) (0xc000208e60) Stream added, broadcasting: 1\nI0519 13:03:26.582259 608 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0519 13:03:26.582294 608 log.go:172] (0xc0001166e0) (0xc0009fa000) Create stream\nI0519 13:03:26.582304 608 log.go:172] (0xc0001166e0) (0xc0009fa000) Stream added, broadcasting: 3\nI0519 13:03:26.583200 608 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0519 13:03:26.583244 608 log.go:172] (0xc0001166e0) (0xc0009fa0a0) Create stream\nI0519 13:03:26.583265 608 log.go:172] (0xc0001166e0) (0xc0009fa0a0) Stream added, broadcasting: 5\nI0519 13:03:26.584500 608 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0519 13:03:26.667763 608 log.go:172] (0xc0001166e0) Data frame received for 3\nI0519 13:03:26.667826 608 log.go:172] (0xc0009fa000) (3) Data frame handling\nI0519 13:03:26.667843 608 log.go:172] (0xc0009fa000) (3) Data frame sent\nI0519 13:03:26.667854 608 log.go:172] (0xc0001166e0) Data frame received for 3\nI0519 13:03:26.667862 608 log.go:172] (0xc0009fa000) (3) Data frame handling\nI0519 13:03:26.667897 608 log.go:172] (0xc0001166e0) Data frame received for 
5\nI0519 13:03:26.667908 608 log.go:172] (0xc0009fa0a0) (5) Data frame handling\nI0519 13:03:26.667925 608 log.go:172] (0xc0009fa0a0) (5) Data frame sent\nI0519 13:03:26.667936 608 log.go:172] (0xc0001166e0) Data frame received for 5\nI0519 13:03:26.667944 608 log.go:172] (0xc0009fa0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0519 13:03:26.669814 608 log.go:172] (0xc0001166e0) Data frame received for 1\nI0519 13:03:26.669844 608 log.go:172] (0xc000208e60) (1) Data frame handling\nI0519 13:03:26.669875 608 log.go:172] (0xc000208e60) (1) Data frame sent\nI0519 13:03:26.669898 608 log.go:172] (0xc0001166e0) (0xc000208e60) Stream removed, broadcasting: 1\nI0519 13:03:26.669924 608 log.go:172] (0xc0001166e0) Go away received\nI0519 13:03:26.670470 608 log.go:172] (0xc0001166e0) (0xc000208e60) Stream removed, broadcasting: 1\nI0519 13:03:26.670489 608 log.go:172] (0xc0001166e0) (0xc0009fa000) Stream removed, broadcasting: 3\nI0519 13:03:26.670499 608 log.go:172] (0xc0001166e0) (0xc0009fa0a0) Stream removed, broadcasting: 5\n" May 19 13:03:26.675: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 13:03:26.675: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 13:03:46.695: INFO: Waiting for StatefulSet statefulset-5832/ss2 to complete update May 19 13:03:46.695: INFO: Waiting for Pod statefulset-5832/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision May 19 13:03:56.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5832 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 13:03:57.095: INFO: stderr: "I0519 13:03:56.841797 629 log.go:172] (0xc000984420) (0xc0004ce820) Create stream\nI0519 13:03:56.841865 629 log.go:172] (0xc000984420) (0xc0004ce820) Stream added, broadcasting: 
1\nI0519 13:03:56.847146 629 log.go:172] (0xc000984420) Reply frame received for 1\nI0519 13:03:56.847193 629 log.go:172] (0xc000984420) (0xc0004ce000) Create stream\nI0519 13:03:56.847210 629 log.go:172] (0xc000984420) (0xc0004ce000) Stream added, broadcasting: 3\nI0519 13:03:56.848305 629 log.go:172] (0xc000984420) Reply frame received for 3\nI0519 13:03:56.848333 629 log.go:172] (0xc000984420) (0xc0005b6320) Create stream\nI0519 13:03:56.848345 629 log.go:172] (0xc000984420) (0xc0005b6320) Stream added, broadcasting: 5\nI0519 13:03:56.849521 629 log.go:172] (0xc000984420) Reply frame received for 5\nI0519 13:03:56.960499 629 log.go:172] (0xc000984420) Data frame received for 5\nI0519 13:03:56.960531 629 log.go:172] (0xc0005b6320) (5) Data frame handling\nI0519 13:03:56.960548 629 log.go:172] (0xc0005b6320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0519 13:03:57.087336 629 log.go:172] (0xc000984420) Data frame received for 3\nI0519 13:03:57.087367 629 log.go:172] (0xc0004ce000) (3) Data frame handling\nI0519 13:03:57.087380 629 log.go:172] (0xc0004ce000) (3) Data frame sent\nI0519 13:03:57.087537 629 log.go:172] (0xc000984420) Data frame received for 5\nI0519 13:03:57.087558 629 log.go:172] (0xc0005b6320) (5) Data frame handling\nI0519 13:03:57.087586 629 log.go:172] (0xc000984420) Data frame received for 3\nI0519 13:03:57.087635 629 log.go:172] (0xc0004ce000) (3) Data frame handling\nI0519 13:03:57.089822 629 log.go:172] (0xc000984420) Data frame received for 1\nI0519 13:03:57.089849 629 log.go:172] (0xc0004ce820) (1) Data frame handling\nI0519 13:03:57.089883 629 log.go:172] (0xc0004ce820) (1) Data frame sent\nI0519 13:03:57.089937 629 log.go:172] (0xc000984420) (0xc0004ce820) Stream removed, broadcasting: 1\nI0519 13:03:57.089993 629 log.go:172] (0xc000984420) Go away received\nI0519 13:03:57.090378 629 log.go:172] (0xc000984420) (0xc0004ce820) Stream removed, broadcasting: 1\nI0519 13:03:57.090407 629 log.go:172] (0xc000984420) 
(0xc0004ce000) Stream removed, broadcasting: 3\nI0519 13:03:57.090422 629 log.go:172] (0xc000984420) (0xc0005b6320) Stream removed, broadcasting: 5\n" May 19 13:03:57.095: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 13:03:57.095: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 13:04:07.128: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 19 13:04:17.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5832 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 13:04:17.460: INFO: stderr: "I0519 13:04:17.330489 648 log.go:172] (0xc00013af20) (0xc00069eaa0) Create stream\nI0519 13:04:17.330554 648 log.go:172] (0xc00013af20) (0xc00069eaa0) Stream added, broadcasting: 1\nI0519 13:04:17.333657 648 log.go:172] (0xc00013af20) Reply frame received for 1\nI0519 13:04:17.333720 648 log.go:172] (0xc00013af20) (0xc000a5e000) Create stream\nI0519 13:04:17.333734 648 log.go:172] (0xc00013af20) (0xc000a5e000) Stream added, broadcasting: 3\nI0519 13:04:17.334595 648 log.go:172] (0xc00013af20) Reply frame received for 3\nI0519 13:04:17.334635 648 log.go:172] (0xc00013af20) (0xc00076c000) Create stream\nI0519 13:04:17.334653 648 log.go:172] (0xc00013af20) (0xc00076c000) Stream added, broadcasting: 5\nI0519 13:04:17.335565 648 log.go:172] (0xc00013af20) Reply frame received for 5\nI0519 13:04:17.453904 648 log.go:172] (0xc00013af20) Data frame received for 5\nI0519 13:04:17.453955 648 log.go:172] (0xc00076c000) (5) Data frame handling\nI0519 13:04:17.453974 648 log.go:172] (0xc00076c000) (5) Data frame sent\nI0519 13:04:17.453995 648 log.go:172] (0xc00013af20) Data frame received for 5\nI0519 13:04:17.454009 648 log.go:172] (0xc00076c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0519 13:04:17.454038 648 
log.go:172] (0xc00013af20) Data frame received for 3\nI0519 13:04:17.454063 648 log.go:172] (0xc000a5e000) (3) Data frame handling\nI0519 13:04:17.454080 648 log.go:172] (0xc000a5e000) (3) Data frame sent\nI0519 13:04:17.454092 648 log.go:172] (0xc00013af20) Data frame received for 3\nI0519 13:04:17.454100 648 log.go:172] (0xc000a5e000) (3) Data frame handling\nI0519 13:04:17.455150 648 log.go:172] (0xc00013af20) Data frame received for 1\nI0519 13:04:17.455161 648 log.go:172] (0xc00069eaa0) (1) Data frame handling\nI0519 13:04:17.455168 648 log.go:172] (0xc00069eaa0) (1) Data frame sent\nI0519 13:04:17.455176 648 log.go:172] (0xc00013af20) (0xc00069eaa0) Stream removed, broadcasting: 1\nI0519 13:04:17.455184 648 log.go:172] (0xc00013af20) Go away received\nI0519 13:04:17.455652 648 log.go:172] (0xc00013af20) (0xc00069eaa0) Stream removed, broadcasting: 1\nI0519 13:04:17.455676 648 log.go:172] (0xc00013af20) (0xc000a5e000) Stream removed, broadcasting: 3\nI0519 13:04:17.455697 648 log.go:172] (0xc00013af20) (0xc00076c000) Stream removed, broadcasting: 5\n" May 19 13:04:17.460: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 13:04:17.460: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 13:04:37.476: INFO: Waiting for StatefulSet statefulset-5832/ss2 to complete update May 19 13:04:37.476: INFO: Waiting for Pod statefulset-5832/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 19 13:04:47.483: INFO: Deleting all statefulset in ns statefulset-5832 May 19 13:04:47.486: INFO: Scaling statefulset ss2 to 0 May 19 13:05:07.516: INFO: Waiting for statefulset status.replicas updated to 0 May 19 13:05:07.519: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] 
StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:05:07.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5832" for this suite. May 19 13:05:13.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:05:13.636: INFO: namespace statefulset-5832 deletion completed in 6.089766094s • [SLOW TEST:147.664 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:05:13.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 19 13:05:13.696: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 5.384322ms)
May 19 13:05:13.699: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.751104ms)
May 19 13:05:13.728: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 28.775711ms)
May 19 13:05:13.731: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.336908ms)
May 19 13:05:13.735: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.827732ms)
May 19 13:05:13.739: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.907383ms)
May 19 13:05:13.742: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.573367ms)
May 19 13:05:13.746: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.075499ms)
May 19 13:05:13.748: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.779427ms)
May 19 13:05:13.751: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.900783ms)
May 19 13:05:13.754: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.808661ms)
May 19 13:05:13.757: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.0845ms)
May 19 13:05:13.760: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.945507ms)
May 19 13:05:13.764: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.594554ms)
May 19 13:05:13.767: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.000073ms)
May 19 13:05:13.770: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.062433ms)
May 19 13:05:13.773: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.17899ms)
May 19 13:05:13.776: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.086246ms)
May 19 13:05:13.786: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 9.77259ms)
May 19 13:05:13.789: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.095859ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:05:13.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3383" for this suite. May 19 13:05:19.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:05:19.906: INFO: namespace proxy-3383 deletion completed in 6.112196078s • [SLOW TEST:6.269 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:05:19.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 19 13:05:20.006: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-83ec7bec-cc1e-413a-a436-a1519eb49b91" in namespace "downward-api-2677" to be "success or failure" May 19 13:05:20.019: INFO: Pod "downwardapi-volume-83ec7bec-cc1e-413a-a436-a1519eb49b91": Phase="Pending", Reason="", readiness=false. Elapsed: 13.147664ms May 19 13:05:22.026: INFO: Pod "downwardapi-volume-83ec7bec-cc1e-413a-a436-a1519eb49b91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020346111s May 19 13:05:24.031: INFO: Pod "downwardapi-volume-83ec7bec-cc1e-413a-a436-a1519eb49b91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025385338s STEP: Saw pod success May 19 13:05:24.031: INFO: Pod "downwardapi-volume-83ec7bec-cc1e-413a-a436-a1519eb49b91" satisfied condition "success or failure" May 19 13:05:24.035: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-83ec7bec-cc1e-413a-a436-a1519eb49b91 container client-container: STEP: delete the pod May 19 13:05:24.069: INFO: Waiting for pod downwardapi-volume-83ec7bec-cc1e-413a-a436-a1519eb49b91 to disappear May 19 13:05:24.104: INFO: Pod downwardapi-volume-83ec7bec-cc1e-413a-a436-a1519eb49b91 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:05:24.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2677" for this suite. 
May 19 13:05:30.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:05:30.193: INFO: namespace downward-api-2677 deletion completed in 6.085524069s • [SLOW TEST:10.287 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:05:30.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 19 13:05:30.271: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:05:34.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7500" for this suite. 
May 19 13:06:14.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:06:14.544: INFO: namespace pods-7500 deletion completed in 40.09378391s
• [SLOW TEST:44.350 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:06:14.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-85765dcc-6ce4-498a-881a-383724ab33f5
STEP: Creating configMap with name cm-test-opt-upd-cf3b199b-7d84-47c3-9c3c-7307209a595a
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-85765dcc-6ce4-498a-881a-383724ab33f5
STEP: Updating configmap cm-test-opt-upd-cf3b199b-7d84-47c3-9c3c-7307209a595a
STEP: Creating configMap with name cm-test-opt-create-8fdd6bfe-fbea-4fd6-a73f-c37318d783f2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:06:22.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8017" for this suite.
May 19 13:06:44.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:06:44.806: INFO: namespace projected-8017 deletion completed in 22.090552334s
• [SLOW TEST:30.261 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:06:44.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 19 13:06:44.898: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 19 13:06:44.931: INFO: Number of nodes with available pods: 0
May 19 13:06:44.931: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
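The label gate this DaemonSet test keeps polling can be sketched as a plain label match: a DaemonSet with a nodeSelector runs a pod on a node only when every selector key/value is present in the node's labels. A minimal sketch (the `color` label key is an assumption standing in for whatever label the test toggles between blue and green; this is not the scheduler's actual code):

```python
def should_schedule(node_labels, node_selector):
    """True when every key/value pair in the DaemonSet's nodeSelector
    is present with the same value in the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

selector = {"color": "blue"}
print(should_schedule({"color": "blue"}, selector))   # True  -> daemon pod launches
print(should_schedule({"color": "green"}, selector))  # False -> pod is unscheduled
```

Relabeling the node from blue to green flips the match to False, which is why the available-pod count above drops back to 0 until the DaemonSet's own selector is updated to green.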
May 19 13:06:45.025: INFO: Number of nodes with available pods: 0
May 19 13:06:45.025: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:06:46.029: INFO: Number of nodes with available pods: 0
May 19 13:06:46.029: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:06:47.105: INFO: Number of nodes with available pods: 0
May 19 13:06:47.105: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:06:48.028: INFO: Number of nodes with available pods: 1
May 19 13:06:48.028: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 19 13:06:48.064: INFO: Number of nodes with available pods: 1
May 19 13:06:48.064: INFO: Number of running nodes: 0, number of available pods: 1
May 19 13:06:49.069: INFO: Number of nodes with available pods: 0
May 19 13:06:49.069: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 19 13:06:49.130: INFO: Number of nodes with available pods: 0
May 19 13:06:49.131: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:06:50.135: INFO: Number of nodes with available pods: 0
May 19 13:06:50.135: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:06:51.135: INFO: Number of nodes with available pods: 0
May 19 13:06:51.135: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:06:52.135: INFO: Number of nodes with available pods: 0
May 19 13:06:52.135: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:06:53.134: INFO: Number of nodes with available pods: 0
May 19 13:06:53.134: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:06:54.135: INFO: Number of nodes with available pods: 0
May 19 13:06:54.135: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:06:55.135: INFO: Number of nodes with available pods: 0
May 19 13:06:55.135: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:06:56.135: INFO: Number of nodes with available pods: 0
May 19 13:06:56.135: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:06:57.135: INFO: Number of nodes with available pods: 0
May 19 13:06:57.135: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:06:58.135: INFO: Number of nodes with available pods: 0
May 19 13:06:58.135: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:06:59.135: INFO: Number of nodes with available pods: 0
May 19 13:06:59.135: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:07:00.135: INFO: Number of nodes with available pods: 0
May 19 13:07:00.135: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:07:01.135: INFO: Number of nodes with available pods: 0
May 19 13:07:01.135: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:07:02.134: INFO: Number of nodes with available pods: 0
May 19 13:07:02.134: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:07:03.135: INFO: Number of nodes with available pods: 0
May 19 13:07:03.135: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:07:04.135: INFO: Number of nodes with available pods: 0
May 19 13:07:04.135: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:07:05.135: INFO: Number of nodes with available pods: 1
May 19 13:07:05.135: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4736, will wait for the garbage collector to delete the pods
May 19 13:07:05.203: INFO: Deleting DaemonSet.extensions daemon-set took: 10.106011ms
May 19 13:07:05.503: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.305572ms
May 19 13:07:12.207: INFO: Number of nodes with available pods: 0
May 19 13:07:12.207: INFO: Number of running nodes: 0, number of available pods: 0
May 19 13:07:12.214: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4736/daemonsets","resourceVersion":"11751805"},"items":null}
May 19 13:07:12.216: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4736/pods","resourceVersion":"11751805"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:07:12.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4736" for this suite.
May 19 13:07:18.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:07:18.379: INFO: namespace daemonsets-4736 deletion completed in 6.115139174s
• [SLOW TEST:33.573 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:07:18.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
May 19 13:07:18.456: INFO: Waiting up to 5m0s for pod "client-containers-53ae55a9-4f3a-4cdf-b536-446dc4f3b748" in namespace "containers-8769" to be "success or failure"
May 19 13:07:18.477: INFO: Pod "client-containers-53ae55a9-4f3a-4cdf-b536-446dc4f3b748": Phase="Pending", Reason="", readiness=false. Elapsed: 21.743541ms
May 19 13:07:20.628: INFO: Pod "client-containers-53ae55a9-4f3a-4cdf-b536-446dc4f3b748": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172297237s
May 19 13:07:22.633: INFO: Pod "client-containers-53ae55a9-4f3a-4cdf-b536-446dc4f3b748": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.177011402s
STEP: Saw pod success
May 19 13:07:22.633: INFO: Pod "client-containers-53ae55a9-4f3a-4cdf-b536-446dc4f3b748" satisfied condition "success or failure"
May 19 13:07:22.636: INFO: Trying to get logs from node iruya-worker2 pod client-containers-53ae55a9-4f3a-4cdf-b536-446dc4f3b748 container test-container:
STEP: delete the pod
May 19 13:07:22.673: INFO: Waiting for pod client-containers-53ae55a9-4f3a-4cdf-b536-446dc4f3b748 to disappear
May 19 13:07:22.676: INFO: Pod client-containers-53ae55a9-4f3a-4cdf-b536-446dc4f3b748 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:07:22.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8769" for this suite.
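The "override command" test just finished exercises the documented interaction between a pod's `command`/`args` and the image's `ENTRYPOINT`/`CMD`: `command` replaces the entrypoint, `args` replaces the CMD, and supplying only `command` discards the image's default CMD. A small sketch of that resolution rule (illustrative, not the kubelet's implementation):

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Resolve the argv a container runs with, per the Kubernetes rules:
    - neither set:  image ENTRYPOINT + image CMD
    - args only:    image ENTRYPOINT + args
    - command only: command (image CMD is ignored)
    - both set:     command + args
    """
    entrypoint = command if command is not None else image_entrypoint
    cmd = args if args is not None else (image_cmd if command is None else [])
    return entrypoint + cmd

# The entrypoint-override test above corresponds to setting `command`:
print(effective_invocation(["/docker-entrypoint"], ["default-arg"],
                           command=["/bin/sh", "-c", "echo overridden"]))
```

The earlier "override arguments" test is the `args`-only case, where the image entrypoint is kept but its default arguments are replaced.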
May 19 13:07:28.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:07:28.781: INFO: namespace containers-8769 deletion completed in 6.099607767s
• [SLOW TEST:10.402 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:07:28.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:08:02.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5785" for this suite.
May 19 13:08:08.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:08:08.483: INFO: namespace container-runtime-5785 deletion completed in 6.198500465s
• [SLOW TEST:39.702 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:08:08.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 19 13:08:08.564: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5df0e5ed-828a-469a-a321-17c99453afae" in namespace "downward-api-7545" to be "success or failure"
May 19 13:08:08.568: INFO: Pod "downwardapi-volume-5df0e5ed-828a-469a-a321-17c99453afae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27977ms
May 19 13:08:10.577: INFO: Pod "downwardapi-volume-5df0e5ed-828a-469a-a321-17c99453afae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013439454s
May 19 13:08:12.582: INFO: Pod "downwardapi-volume-5df0e5ed-828a-469a-a321-17c99453afae": Phase="Running", Reason="", readiness=true. Elapsed: 4.018043306s
May 19 13:08:14.587: INFO: Pod "downwardapi-volume-5df0e5ed-828a-469a-a321-17c99453afae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022867411s
STEP: Saw pod success
May 19 13:08:14.587: INFO: Pod "downwardapi-volume-5df0e5ed-828a-469a-a321-17c99453afae" satisfied condition "success or failure"
May 19 13:08:14.590: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-5df0e5ed-828a-469a-a321-17c99453afae container client-container:
STEP: delete the pod
May 19 13:08:14.605: INFO: Waiting for pod downwardapi-volume-5df0e5ed-828a-469a-a321-17c99453afae to disappear
May 19 13:08:14.609: INFO: Pod downwardapi-volume-5df0e5ed-828a-469a-a321-17c99453afae no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:08:14.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7545" for this suite.
May 19 13:08:20.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:08:20.705: INFO: namespace downward-api-7545 deletion completed in 6.092782346s
• [SLOW TEST:12.221 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:08:20.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 19 13:08:20.805: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:08:20.813: INFO: Number of nodes with available pods: 0
May 19 13:08:20.813: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:08:21.818: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:08:21.821: INFO: Number of nodes with available pods: 0
May 19 13:08:21.821: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:08:22.819: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:08:22.823: INFO: Number of nodes with available pods: 0
May 19 13:08:22.823: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:08:23.817: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:08:23.820: INFO: Number of nodes with available pods: 0
May 19 13:08:23.820: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:08:24.818: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:08:24.822: INFO: Number of nodes with available pods: 1
May 19 13:08:24.822: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:08:25.818: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:08:25.821: INFO: Number of nodes with available pods: 2
May 19 13:08:25.821: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 19 13:08:25.837: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:08:25.867: INFO: Number of nodes with available pods: 2
May 19 13:08:25.867: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8232, will wait for the garbage collector to delete the pods
May 19 13:08:26.990: INFO: Deleting DaemonSet.extensions daemon-set took: 6.76929ms
May 19 13:08:27.290: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.260703ms
May 19 13:08:30.838: INFO: Number of nodes with available pods: 0
May 19 13:08:30.838: INFO: Number of running nodes: 0, number of available pods: 0
May 19 13:08:30.840: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8232/daemonsets","resourceVersion":"11752150"},"items":null}
May 19 13:08:30.843: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8232/pods","resourceVersion":"11752150"},"items":null}
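The repeated "can't tolerate node iruya-control-plane" lines reflect taint-based scheduling: a node carrying a `NoSchedule` taint only receives the daemon pod if the pod declares a matching toleration. A simplified sketch of that match (it ignores effect matching and other refinements, so it is an illustration rather than the scheduler's full rule set):

```python
def tolerates(taint, tolerations):
    """True when some toleration matches the taint's key, either via
    operator=Exists (any value, or any key if no key is given) or via
    an exact key/value match. Effect matching is omitted for brevity."""
    for t in tolerations:
        if t.get("operator") == "Exists":
            if t.get("key") in (None, taint["key"]):
                return True
        elif t.get("key") == taint["key"] and t.get("value") == taint.get("value"):
            return True
    return False

master_taint = {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
print(tolerates(master_taint, []))  # False -> control-plane node is skipped, as logged
print(tolerates(master_taint, [{"key": "node-role.kubernetes.io/master",
                                "operator": "Exists"}]))  # True
```

This is why the test only ever counts two nodes: the control-plane node stays excluded for the untolerated master taint.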
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:08:30.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8232" for this suite.
May 19 13:08:36.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:08:36.971: INFO: namespace daemonsets-8232 deletion completed in 6.117318805s
• [SLOW TEST:16.266 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:08:36.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-3129fb7e-aa08-4b2a-86ba-e57ba33297a9 in namespace container-probe-8151
May 19 13:08:41.042: INFO: Started pod liveness-3129fb7e-aa08-4b2a-86ba-e57ba33297a9 in namespace container-probe-8151
STEP: checking the pod's current state and verifying that restartCount is present
May 19 13:08:41.045: INFO: Initial restart count of pod liveness-3129fb7e-aa08-4b2a-86ba-e57ba33297a9 is 0
May 19 13:09:01.208: INFO: Restart count of pod container-probe-8151/liveness-3129fb7e-aa08-4b2a-86ba-e57ba33297a9 is now 1 (20.163366542s elapsed)
May 19 13:09:21.250: INFO: Restart count of pod container-probe-8151/liveness-3129fb7e-aa08-4b2a-86ba-e57ba33297a9 is now 2 (40.205336586s elapsed)
May 19 13:09:41.395: INFO: Restart count of pod container-probe-8151/liveness-3129fb7e-aa08-4b2a-86ba-e57ba33297a9 is now 3 (1m0.349856723s elapsed)
May 19 13:10:01.495: INFO: Restart count of pod container-probe-8151/liveness-3129fb7e-aa08-4b2a-86ba-e57ba33297a9 is now 4 (1m20.450158133s elapsed)
May 19 13:11:11.642: INFO: Restart count of pod container-probe-8151/liveness-3129fb7e-aa08-4b2a-86ba-e57ba33297a9 is now 5 (2m30.597481875s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:11:11.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8151" for this suite.
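The restart counts above go 0 through 5, with the gap before the fifth restart stretching from ~20s to ~70s as the kubelet applies crash-loop backoff between restarts. The property the test asserts, that the observed count never decreases, can be sketched as a simple check over the sampled values (illustrative only; the real test polls the pod status):

```python
def assert_monotonic(restart_counts):
    """Raise if any later sample of restartCount is lower than an earlier one."""
    for prev, cur in zip(restart_counts, restart_counts[1:]):
        if cur < prev:
            raise AssertionError(f"restart count went backwards: {prev} -> {cur}")
    return True

# Counts sampled in the log over 2m30s of elapsed time:
print(assert_monotonic([0, 1, 2, 3, 4, 5]))  # True
```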
May 19 13:11:17.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:11:17.821: INFO: namespace container-probe-8151 deletion completed in 6.086574902s
• [SLOW TEST:160.849 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:11:17.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 19 13:11:17.895: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cd36dd4-d9d3-4101-aad4-b5ee77b3fabe" in namespace "projected-3586" to be "success or failure"
May 19 13:11:17.907: INFO: Pod "downwardapi-volume-7cd36dd4-d9d3-4101-aad4-b5ee77b3fabe": Phase="Pending", Reason="", readiness=false. Elapsed: 11.394874ms
May 19 13:11:20.049: INFO: Pod "downwardapi-volume-7cd36dd4-d9d3-4101-aad4-b5ee77b3fabe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153913957s
May 19 13:11:22.054: INFO: Pod "downwardapi-volume-7cd36dd4-d9d3-4101-aad4-b5ee77b3fabe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158443508s
STEP: Saw pod success
May 19 13:11:22.054: INFO: Pod "downwardapi-volume-7cd36dd4-d9d3-4101-aad4-b5ee77b3fabe" satisfied condition "success or failure"
May 19 13:11:22.057: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7cd36dd4-d9d3-4101-aad4-b5ee77b3fabe container client-container:
STEP: delete the pod
May 19 13:11:22.093: INFO: Waiting for pod downwardapi-volume-7cd36dd4-d9d3-4101-aad4-b5ee77b3fabe to disappear
May 19 13:11:22.110: INFO: Pod downwardapi-volume-7cd36dd4-d9d3-4101-aad4-b5ee77b3fabe no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:11:22.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3586" for this suite.
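The `DefaultMode` the projected downwardAPI test just verified is a plain integer in the API: manifests usually write it in octal (e.g. `0644`), and the server reports its decimal form (420). A small sketch of reading those permission bits (the helper below is illustrative, not part of any Kubernetes library):

```python
def to_permission_string(mode):
    """Render the low nine mode bits as rwx triples (owner/group/other)."""
    bits = "rwxrwxrwx"
    return "".join(c if mode & (1 << (8 - i)) else "-" for i, c in enumerate(bits))

default_mode = 0o644            # a common defaultMode for volume files
print(default_mode)             # 420 -- the decimal form seen in API responses
print(to_permission_string(default_mode))  # rw-r--r--
```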
May 19 13:11:28.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:11:28.208: INFO: namespace projected-3586 deletion completed in 6.093879338s
• [SLOW TEST:10.386 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:11:28.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 19 13:11:28.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:11:32.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1424" for this suite.
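The websocket-log test above streams the pod `log` subresource, the same endpoint `kubectl logs` hits, upgraded to a websocket connection. A sketch of building that request path (the pod name and query parameters here are illustrative assumptions, not values from this run):

```python
def pod_log_path(namespace, pod, container=None, follow=False):
    """Build the apiserver path for a pod's log subresource."""
    path = f"/api/v1/namespaces/{namespace}/pods/{pod}/log"
    params = []
    if container:
        params.append(f"container={container}")
    if follow:
        params.append("follow=true")
    return path + ("?" + "&".join(params) if params else "")

# A hypothetical pod in the namespace this test used:
print(pod_log_path("pods-1424", "example-pod", follow=True))
```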
May 19 13:12:14.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:12:14.392: INFO: namespace pods-1424 deletion completed in 42.088856258s
• [SLOW TEST:46.184 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:12:14.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-rdjk
STEP: Creating a pod to test atomic-volume-subpath
May 19 13:12:14.507: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-rdjk" in namespace "subpath-4683" to be "success or failure"
May 19 13:12:14.519: INFO: Pod "pod-subpath-test-secret-rdjk": Phase="Pending", Reason="", readiness=false. Elapsed: 11.986305ms
May 19 13:12:16.523: INFO: Pod "pod-subpath-test-secret-rdjk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01561425s
May 19 13:12:18.527: INFO: Pod "pod-subpath-test-secret-rdjk": Phase="Running", Reason="", readiness=true. Elapsed: 4.019857515s
May 19 13:12:20.531: INFO: Pod "pod-subpath-test-secret-rdjk": Phase="Running", Reason="", readiness=true. Elapsed: 6.02391201s
May 19 13:12:22.536: INFO: Pod "pod-subpath-test-secret-rdjk": Phase="Running", Reason="", readiness=true. Elapsed: 8.029213529s
May 19 13:12:24.540: INFO: Pod "pod-subpath-test-secret-rdjk": Phase="Running", Reason="", readiness=true. Elapsed: 10.032888567s
May 19 13:12:26.544: INFO: Pod "pod-subpath-test-secret-rdjk": Phase="Running", Reason="", readiness=true. Elapsed: 12.037556976s
May 19 13:12:28.549: INFO: Pod "pod-subpath-test-secret-rdjk": Phase="Running", Reason="", readiness=true. Elapsed: 14.041903765s
May 19 13:12:30.553: INFO: Pod "pod-subpath-test-secret-rdjk": Phase="Running", Reason="", readiness=true. Elapsed: 16.045958158s
May 19 13:12:32.557: INFO: Pod "pod-subpath-test-secret-rdjk": Phase="Running", Reason="", readiness=true. Elapsed: 18.050217814s
May 19 13:12:34.571: INFO: Pod "pod-subpath-test-secret-rdjk": Phase="Running", Reason="", readiness=true. Elapsed: 20.064107966s
May 19 13:12:36.575: INFO: Pod "pod-subpath-test-secret-rdjk": Phase="Running", Reason="", readiness=true. Elapsed: 22.068278673s
May 19 13:12:38.580: INFO: Pod "pod-subpath-test-secret-rdjk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.072696047s
STEP: Saw pod success
May 19 13:12:38.580: INFO: Pod "pod-subpath-test-secret-rdjk" satisfied condition "success or failure"
May 19 13:12:38.583: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-rdjk container test-container-subpath-secret-rdjk:
STEP: delete the pod
May 19 13:12:38.602: INFO: Waiting for pod pod-subpath-test-secret-rdjk to disappear
May 19 13:12:38.619: INFO: Pod pod-subpath-test-secret-rdjk no longer exists
STEP: Deleting pod pod-subpath-test-secret-rdjk
May 19 13:12:38.619: INFO: Deleting pod "pod-subpath-test-secret-rdjk" in namespace "subpath-4683"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:12:38.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4683" for this suite.
May 19 13:12:44.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:12:44.744: INFO: namespace subpath-4683 deletion completed in 6.119419782s
• [SLOW TEST:30.351 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:12:44.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:12:48.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1990" for this suite.
May 19 13:13:26.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:13:27.029: INFO: namespace kubelet-test-1990 deletion completed in 38.139578196s
• [SLOW TEST:42.285 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:13:27.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace
api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-bbc5f89b-1a37-441a-8602-49bbb4816b11 STEP: Creating configMap with name cm-test-opt-upd-c8f0f41d-3fcc-4c31-ba88-7a1ed9daa4b8 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-bbc5f89b-1a37-441a-8602-49bbb4816b11 STEP: Updating configmap cm-test-opt-upd-c8f0f41d-3fcc-4c31-ba88-7a1ed9daa4b8 STEP: Creating configMap with name cm-test-opt-create-98d4f48c-32af-4f9b-94c2-e40f2c2a4e65 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:13:35.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-884" for this suite. 
May 19 13:13:59.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:13:59.328: INFO: namespace configmap-884 deletion completed in 24.091199567s • [SLOW TEST:32.298 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:13:59.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod May 19 13:13:59.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7870' May 19 13:14:02.340: INFO: stderr: "" May 19 13:14:02.340: INFO: stdout: "pod/pause created\n" May 19 13:14:02.340: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 19 13:14:02.340: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7870" to be "running and ready" May 19 13:14:02.347: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.187993ms May 19 13:14:04.352: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011991486s May 19 13:14:06.355: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.015283991s May 19 13:14:06.355: INFO: Pod "pause" satisfied condition "running and ready" May 19 13:14:06.355: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod May 19 13:14:06.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7870' May 19 13:14:06.452: INFO: stderr: "" May 19 13:14:06.452: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 19 13:14:06.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7870' May 19 13:14:06.561: INFO: stderr: "" May 19 13:14:06.561: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 19 13:14:06.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7870' May 19 13:14:06.672: INFO: stderr: "" May 19 13:14:06.672: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 19 13:14:06.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7870' May 19 13:14:06.784: INFO: stderr: "" May 19 13:14:06.784: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources May 19 13:14:06.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7870' May 19 13:14:06.929: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 13:14:06.929: INFO: stdout: "pod \"pause\" force deleted\n" May 19 13:14:06.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7870' May 19 13:14:07.045: INFO: stderr: "No resources found.\n" May 19 13:14:07.045: INFO: stdout: "" May 19 13:14:07.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7870 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 19 13:14:07.290: INFO: stderr: "" May 19 13:14:07.290: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:14:07.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7870" for this suite. 
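The label add/verify/remove sequence the test drives above can be replayed by hand. A minimal sketch, assuming a live cluster, a running pod named `pause`, and `kubectl` on PATH (the namespace is the one from this run and will differ on another cluster):

```shell
# Add a label (same command the framework invokes)
kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-7870

# Show the label value as an extra column via -L
kubectl get pod pause -L testing-label --namespace=kubectl-7870

# Remove the label: a trailing dash after the key deletes it
kubectl label pods pause testing-label- --namespace=kubectl-7870
```

These commands require a reachable cluster, so they are shown as a CLI fragment rather than a runnable script.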
May 19 13:14:13.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:14:13.460: INFO: namespace kubectl-7870 deletion completed in 6.165056114s
• [SLOW TEST:14.132 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:14:13.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 19 13:14:13.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-837'
May 19 13:14:13.631: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 19 13:14:13.631: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
May 19 13:14:13.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-837'
May 19 13:14:13.772: INFO: stderr: ""
May 19 13:14:13.772: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:14:13.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-837" for this suite.
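The deprecation warning captured in the stderr above names two replacements for generator-based `kubectl run`. A sketch of both, assuming the same image and a live cluster (the pod name `e2e-test-nginx-pod` is illustrative, not from this run):

```shell
# Deprecated form (produces the warning seen in the log):
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine

# Replacement 1: create a bare Pod
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine

# Replacement 2: create a Deployment explicitly
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
```

In releases after the v1.15 line used here, `kubectl run` creates only Pods and `kubectl create deployment` is the supported way to get a Deployment.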
May 19 13:14:19.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:14:19.871: INFO: namespace kubectl-837 deletion completed in 6.096121037s
• [SLOW TEST:6.411 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:14:19.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
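The spec that follows creates a pod carrying a preStop exec hook and then deletes it, expecting the hook to fire before the container is killed. A minimal manifest of that shape, as a sketch only (the image, command, and hook body are illustrative, not the test's actual fixture, and applying it requires a live cluster):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container before SIGTERM is delivered;
          # termination waits for it (up to the grace period).
          command: ["sh", "-c", "echo prestop"]
EOF
```

Deleting the pod (`kubectl delete pod pod-with-prestop-exec-hook`) triggers the hook, which is why the log below shows the pod lingering through several "still exists" polls before it disappears.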
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 19 13:14:27.993: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 13:14:28.004: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 13:14:30.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 13:14:30.009: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 13:14:32.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 13:14:32.009: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 13:14:34.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 13:14:34.009: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 13:14:36.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 13:14:36.009: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 13:14:38.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 13:14:38.009: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 13:14:40.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 13:14:40.009: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 13:14:42.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 13:14:42.010: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 13:14:44.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 13:14:44.009: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 13:14:46.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 13:14:46.009: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 13:14:48.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 13:14:48.009: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 13:14:50.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 13:14:50.009: INFO: Pod pod-with-prestop-exec-hook still exists
May 19 13:14:52.005: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 19 13:14:52.010: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:14:52.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4028" for this suite.
May 19 13:15:16.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:15:16.108: INFO: namespace container-lifecycle-hook-4028 deletion completed in 24.090393985s
• [SLOW TEST:56.236 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:15:16.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-37416ddb-c604-4f32-ba73-59b1733e3fe6
STEP: Creating secret with name s-test-opt-upd-73e8337d-5659-4a25-a62a-81017545ee42
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-37416ddb-c604-4f32-ba73-59b1733e3fe6
STEP: Updating secret s-test-opt-upd-73e8337d-5659-4a25-a62a-81017545ee42
STEP: Creating secret with name s-test-opt-create-bf3703bb-8322-407b-89a9-ffadb259a53d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:15:26.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9002" for this suite.
May 19 13:15:50.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:15:50.411: INFO: namespace projected-9002 deletion completed in 24.105426357s
• [SLOW TEST:34.301 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:15:50.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-906/secret-test-292fdba7-9a28-4748-bb90-c98549712249
STEP: Creating a pod to test consume secrets
May 19 13:15:50.543: INFO: Waiting up to 5m0s for pod "pod-configmaps-fd9ef25b-1a6a-4fd0-8590-16e04bda0173" in namespace "secrets-906" to be "success or failure"
May 19 13:15:50.558: INFO: Pod "pod-configmaps-fd9ef25b-1a6a-4fd0-8590-16e04bda0173": Phase="Pending", Reason="", readiness=false. Elapsed: 14.463549ms
May 19 13:15:52.562: INFO: Pod "pod-configmaps-fd9ef25b-1a6a-4fd0-8590-16e04bda0173": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018509884s
May 19 13:15:54.592: INFO: Pod "pod-configmaps-fd9ef25b-1a6a-4fd0-8590-16e04bda0173": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048733067s
STEP: Saw pod success
May 19 13:15:54.592: INFO: Pod "pod-configmaps-fd9ef25b-1a6a-4fd0-8590-16e04bda0173" satisfied condition "success or failure"
May 19 13:15:54.594: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-fd9ef25b-1a6a-4fd0-8590-16e04bda0173 container env-test:
STEP: delete the pod
May 19 13:15:54.662: INFO: Waiting for pod pod-configmaps-fd9ef25b-1a6a-4fd0-8590-16e04bda0173 to disappear
May 19 13:15:54.667: INFO: Pod pod-configmaps-fd9ef25b-1a6a-4fd0-8590-16e04bda0173 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:15:54.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-906" for this suite.
May 19 13:16:00.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:16:00.850: INFO: namespace secrets-906 deletion completed in 6.180324534s
• [SLOW TEST:10.439 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:16:00.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 19 13:16:05.482: INFO: Successfully updated pod "annotationupdate50394819-bf1d-4202-8ffc-98e4f66bef12"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:16:07.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3833" for this suite.
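The repeated `Waiting up to 5m0s for pod … to be "success or failure"` / `Phase="Pending" … Elapsed: …` lines throughout this run come from a simple poll loop: query the pod phase on an interval until it reaches a terminal phase or the deadline passes. A self-contained sketch of that pattern; the `get_phase` stub stands in for a real `kubectl get pod <name> -o jsonpath='{.status.phase}'` call so the loop runs without a cluster:

```shell
#!/bin/sh
# Stub for a phase lookup: reports Pending twice, then Succeeded,
# mimicking the Pending -> Succeeded progression seen in the log.
attempts=0
get_phase() {
  attempts=$((attempts + 1))
  if [ "$attempts" -lt 3 ]; then
    phase=Pending
  else
    phase=Succeeded
  fi
}

# Poll until a terminal phase (Succeeded/Failed) or max_polls is hit.
wait_for_completion() {
  max_polls=$1
  i=0
  while [ "$i" -lt "$max_polls" ]; do
    get_phase
    echo "poll $i: Phase=$phase"
    case "$phase" in
      Succeeded|Failed) return 0 ;;
    esac
    # A real loop would `sleep 2` here between polls.
    i=$((i + 1))
  done
  return 1
}

wait_for_completion 10 && echo "condition met after $attempts polls"
```

The e2e framework's version additionally records the elapsed time per poll and distinguishes "success or failure" from "running and ready", but the control flow is the same.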
May 19 13:16:29.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:16:29.646: INFO: namespace downward-api-3833 deletion completed in 22.108939419s
• [SLOW TEST:28.796 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:16:29.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-d507e049-f7a0-4793-93c6-578a7fcd7fb5 in namespace container-probe-5016
May 19 13:16:33.763: INFO: Started pod busybox-d507e049-f7a0-4793-93c6-578a7fcd7fb5 in namespace container-probe-5016
STEP: checking the pod's current state and verifying that restartCount is present
May 19 13:16:33.766: INFO: Initial restart count of pod busybox-d507e049-f7a0-4793-93c6-578a7fcd7fb5 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:20:34.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5016" for this suite.
May 19 13:20:40.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:20:40.589: INFO: namespace container-probe-5016 deletion completed in 6.119539977s
• [SLOW TEST:250.942 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:20:40.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 19 13:20:40.655: INFO: Pod name rollover-pod: Found 0 pods out of 1
May 19 13:20:45.660: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 19 13:20:45.660: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May 19 13:20:47.664: INFO: Creating deployment "test-rollover-deployment"
May 19 13:20:47.696: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
May 19 13:20:49.704: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
May 19 13:20:49.709: INFO: Ensure that both replica sets have 1 created replica
May 19 13:20:49.713: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
May 19 13:20:49.718: INFO: Updating deployment test-rollover-deployment
May 19 13:20:49.718: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
May 19 13:20:51.757: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
May 19 13:20:51.764: INFO: Make sure deployment "test-rollover-deployment" is complete
May 19 13:20:51.769: INFO: all replica sets need to contain the pod-template-hash label
May 19 13:20:51.769: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491249, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 19 13:20:53.778: INFO: all replica sets need to contain the pod-template-hash label
May 19 13:20:53.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491253, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 19 13:20:55.778: INFO: all replica sets need to contain the pod-template-hash label
May 19 13:20:55.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491253, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 19 13:20:57.778: INFO: all replica sets need to contain the pod-template-hash label
May 19 13:20:57.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491253, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 19 13:20:59.777: INFO: all replica sets need to contain the pod-template-hash label
May 19 13:20:59.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491253, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 19 13:21:01.778: INFO: all replica sets need to contain the pod-template-hash label
May 19 13:21:01.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491253, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725491247, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 19 13:21:03.776: INFO:
May 19 13:21:03.776: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 19 13:21:03.784: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-2251,SelfLink:/apis/apps/v1/namespaces/deployment-2251/deployments/test-rollover-deployment,UID:d8e9035c-44f2-42cb-ad1e-64461e8ab4a2,ResourceVersion:11754134,Generation:2,CreationTimestamp:2020-05-19 13:20:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name:
rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-19 13:20:47 +0000 UTC 2020-05-19 13:20:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-19 13:21:03 +0000 UTC 2020-05-19 13:20:47 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 19 13:21:03.786: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-2251,SelfLink:/apis/apps/v1/namespaces/deployment-2251/replicasets/test-rollover-deployment-854595fc44,UID:dee4e7fa-626d-4d01-9858-66e8c0ab5d5b,ResourceVersion:11754123,Generation:2,CreationTimestamp:2020-05-19 13:20:49 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d8e9035c-44f2-42cb-ad1e-64461e8ab4a2 0xc002c130b7 0xc002c130b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 19 13:21:03.786: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 19 13:21:03.787: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-2251,SelfLink:/apis/apps/v1/namespaces/deployment-2251/replicasets/test-rollover-controller,UID:63f3b556-ac16-4c6e-ab8f-36b149266f36,ResourceVersion:11754132,Generation:2,CreationTimestamp:2020-05-19 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d8e9035c-44f2-42cb-ad1e-64461e8ab4a2 0xc002c12f37 0xc002c12f38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 19 13:21:03.787: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-2251,SelfLink:/apis/apps/v1/namespaces/deployment-2251/replicasets/test-rollover-deployment-9b8b997cf,UID:a64526bb-d55c-400b-a428-536d002311fc,ResourceVersion:11754088,Generation:2,CreationTimestamp:2020-05-19 13:20:47 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d8e9035c-44f2-42cb-ad1e-64461e8ab4a2 0xc002c131a0 0xc002c131a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 19 13:21:03.789: INFO: Pod "test-rollover-deployment-854595fc44-7dkq5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-7dkq5,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-2251,SelfLink:/api/v1/namespaces/deployment-2251/pods/test-rollover-deployment-854595fc44-7dkq5,UID:f8f82d90-5b49-4d38-9f41-f27c2937f3e0,ResourceVersion:11754101,Generation:0,CreationTimestamp:2020-05-19 13:20:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 dee4e7fa-626d-4d01-9858-66e8c0ab5d5b 0xc00289e397 0xc00289e398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w8bfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8bfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-w8bfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00289e410} {node.kubernetes.io/unreachable Exists NoExecute 0xc00289e430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:20:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:20:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:20:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:20:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.34,StartTime:2020-05-19 13:20:49 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-19 13:20:53 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://ed446fc083889cde20e1eed377933451ac699d43625fc908be6e9a70dfab890a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:21:03.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2251" for this suite. May 19 13:21:09.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:21:09.999: INFO: namespace deployment-2251 deletion completed in 6.206798843s • [SLOW TEST:29.409 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:21:09.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap 
STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 19 13:21:10.102: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-9829,SelfLink:/api/v1/namespaces/watch-9829/configmaps/e2e-watch-test-resource-version,UID:d4081370-99d2-4186-b4a7-956722dbd083,ResourceVersion:11754186,Generation:0,CreationTimestamp:2020-05-19 13:21:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 19 13:21:10.102: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-9829,SelfLink:/api/v1/namespaces/watch-9829/configmaps/e2e-watch-test-resource-version,UID:d4081370-99d2-4186-b4a7-956722dbd083,ResourceVersion:11754187,Generation:0,CreationTimestamp:2020-05-19 13:21:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:21:10.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9829" for this suite. 
May 19 13:21:16.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:21:16.208: INFO: namespace watch-9829 deletion completed in 6.102122377s • [SLOW TEST:6.209 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:21:16.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 19 13:21:16.259: INFO: namespace kubectl-3181 May 19 13:21:16.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3181' May 19 13:21:16.505: INFO: stderr: "" May 19 13:21:16.505: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
May 19 13:21:17.510: INFO: Selector matched 1 pods for map[app:redis] May 19 13:21:17.510: INFO: Found 0 / 1 May 19 13:21:18.535: INFO: Selector matched 1 pods for map[app:redis] May 19 13:21:18.535: INFO: Found 0 / 1 May 19 13:21:19.519: INFO: Selector matched 1 pods for map[app:redis] May 19 13:21:19.519: INFO: Found 0 / 1 May 19 13:21:20.510: INFO: Selector matched 1 pods for map[app:redis] May 19 13:21:20.510: INFO: Found 1 / 1 May 19 13:21:20.510: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 19 13:21:20.513: INFO: Selector matched 1 pods for map[app:redis] May 19 13:21:20.513: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 19 13:21:20.513: INFO: wait on redis-master startup in kubectl-3181 May 19 13:21:20.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-xqqkk redis-master --namespace=kubectl-3181' May 19 13:21:20.624: INFO: stderr: "" May 19 13:21:20.624: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 19 May 13:21:19.410 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 May 13:21:19.416 # Server started, Redis version 3.2.12\n1:M 19 May 13:21:19.417 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 May 13:21:19.417 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 19 13:21:20.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3181' May 19 13:21:20.767: INFO: stderr: "" May 19 13:21:20.768: INFO: stdout: "service/rm2 exposed\n" May 19 13:21:20.807: INFO: Service rm2 in namespace kubectl-3181 found. STEP: exposing service May 19 13:21:22.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3181' May 19 13:21:22.963: INFO: stderr: "" May 19 13:21:22.963: INFO: stdout: "service/rm3 exposed\n" May 19 13:21:22.968: INFO: Service rm3 in namespace kubectl-3181 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:21:24.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3181" for this suite. 
May 19 13:21:46.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:21:47.065: INFO: namespace kubectl-3181 deletion completed in 22.087637848s • [SLOW TEST:30.857 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:21:47.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info May 19 13:21:47.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 19 13:21:47.222: INFO: stderr: "" May 19 13:21:47.222: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:21:47.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6386" for this suite. May 19 13:21:53.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:21:53.322: INFO: namespace kubectl-6386 deletion completed in 6.095741139s • [SLOW TEST:6.256 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:21:53.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 19 
13:21:53.373: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 19 13:21:55.415: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:21:56.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8323" for this suite. May 19 13:22:02.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:22:02.844: INFO: namespace replication-controller-8323 deletion completed in 6.387329398s • [SLOW TEST:9.521 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:22:02.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 19 13:22:02.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-573' May 19 13:22:03.045: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 19 13:22:03.045: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 19 13:22:03.077: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-7mkz9] May 19 13:22:03.077: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-7mkz9" in namespace "kubectl-573" to be "running and ready" May 19 13:22:03.079: INFO: Pod "e2e-test-nginx-rc-7mkz9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024659ms May 19 13:22:05.083: INFO: Pod "e2e-test-nginx-rc-7mkz9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006430584s May 19 13:22:07.088: INFO: Pod "e2e-test-nginx-rc-7mkz9": Phase="Running", Reason="", readiness=true. Elapsed: 4.010645478s May 19 13:22:07.088: INFO: Pod "e2e-test-nginx-rc-7mkz9" satisfied condition "running and ready" May 19 13:22:07.088: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-7mkz9] May 19 13:22:07.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-573' May 19 13:22:07.216: INFO: stderr: "" May 19 13:22:07.216: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 May 19 13:22:07.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-573' May 19 13:22:07.334: INFO: stderr: "" May 19 13:22:07.334: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:22:07.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-573" for this suite. May 19 13:22:29.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:22:29.439: INFO: namespace kubectl-573 deletion completed in 22.101429997s • [SLOW TEST:26.595 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:22:29.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-79291f78-7093-4253-a34b-e70ad8ea3d1e STEP: Creating a pod to test consume secrets May 19 13:22:29.619: INFO: Waiting up to 5m0s for pod "pod-secrets-d062d1b2-f93e-45fd-a7c6-42767a621e8f" in namespace "secrets-139" to be "success or failure" May 19 13:22:29.670: INFO: Pod "pod-secrets-d062d1b2-f93e-45fd-a7c6-42767a621e8f": Phase="Pending", Reason="", readiness=false. Elapsed: 51.052235ms May 19 13:22:31.674: INFO: Pod "pod-secrets-d062d1b2-f93e-45fd-a7c6-42767a621e8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05547806s May 19 13:22:33.678: INFO: Pod "pod-secrets-d062d1b2-f93e-45fd-a7c6-42767a621e8f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.059277683s STEP: Saw pod success May 19 13:22:33.678: INFO: Pod "pod-secrets-d062d1b2-f93e-45fd-a7c6-42767a621e8f" satisfied condition "success or failure" May 19 13:22:33.681: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-d062d1b2-f93e-45fd-a7c6-42767a621e8f container secret-volume-test: STEP: delete the pod May 19 13:22:33.704: INFO: Waiting for pod pod-secrets-d062d1b2-f93e-45fd-a7c6-42767a621e8f to disappear May 19 13:22:33.707: INFO: Pod pod-secrets-d062d1b2-f93e-45fd-a7c6-42767a621e8f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:22:33.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-139" for this suite. May 19 13:22:39.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:22:39.828: INFO: namespace secrets-139 deletion completed in 6.118163657s STEP: Destroying namespace "secret-namespace-2746" for this suite. 
May 19 13:22:45.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:22:45.914: INFO: namespace secret-namespace-2746 deletion completed in 6.085928869s • [SLOW TEST:16.475 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:22:45.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-0c47d429-3367-4847-8084-38c2f644ae71 STEP: Creating secret with name s-test-opt-upd-fddae0de-57fb-4539-ad97-acde33478bbc STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0c47d429-3367-4847-8084-38c2f644ae71 STEP: Updating secret s-test-opt-upd-fddae0de-57fb-4539-ad97-acde33478bbc STEP: Creating secret with name s-test-opt-create-c3466d26-c45d-4973-befa-1ac57b91d20f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:22:54.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5683" for this suite. May 19 13:23:16.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:23:16.271: INFO: namespace secrets-5683 deletion completed in 22.12015569s • [SLOW TEST:30.356 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:23:16.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 19 13:23:16.345: INFO: Waiting up to 5m0s for pod "pod-244533f8-659b-4a09-96c8-e36ebdba12b3" in namespace "emptydir-4867" to be "success or failure" May 19 13:23:16.348: INFO: Pod "pod-244533f8-659b-4a09-96c8-e36ebdba12b3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.674919ms May 19 13:23:18.352: INFO: Pod "pod-244533f8-659b-4a09-96c8-e36ebdba12b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006907909s May 19 13:23:20.366: INFO: Pod "pod-244533f8-659b-4a09-96c8-e36ebdba12b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020293844s STEP: Saw pod success May 19 13:23:20.366: INFO: Pod "pod-244533f8-659b-4a09-96c8-e36ebdba12b3" satisfied condition "success or failure" May 19 13:23:20.369: INFO: Trying to get logs from node iruya-worker2 pod pod-244533f8-659b-4a09-96c8-e36ebdba12b3 container test-container: STEP: delete the pod May 19 13:23:20.400: INFO: Waiting for pod pod-244533f8-659b-4a09-96c8-e36ebdba12b3 to disappear May 19 13:23:20.458: INFO: Pod pod-244533f8-659b-4a09-96c8-e36ebdba12b3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:23:20.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4867" for this suite. 
May 19 13:23:26.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:23:26.591: INFO: namespace emptydir-4867 deletion completed in 6.128368227s • [SLOW TEST:10.319 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:23:26.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 19 13:23:31.705: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:23:32.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1665" for this suite. 
May 19 13:23:54.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:23:54.882: INFO: namespace replicaset-1665 deletion completed in 22.155995127s • [SLOW TEST:28.291 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:23:54.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-bceba222-25d5-4339-9e50-a8a4e0d1513a STEP: Creating a pod to test consume secrets May 19 13:23:54.979: INFO: Waiting up to 5m0s for pod "pod-secrets-31ce5251-e0ed-495d-8fee-0cabf4206ff9" in namespace "secrets-8490" to be "success or failure" May 19 13:23:54.998: INFO: Pod "pod-secrets-31ce5251-e0ed-495d-8fee-0cabf4206ff9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.888804ms May 19 13:23:57.002: INFO: Pod "pod-secrets-31ce5251-e0ed-495d-8fee-0cabf4206ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023155216s May 19 13:23:59.006: INFO: Pod "pod-secrets-31ce5251-e0ed-495d-8fee-0cabf4206ff9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027609749s STEP: Saw pod success May 19 13:23:59.006: INFO: Pod "pod-secrets-31ce5251-e0ed-495d-8fee-0cabf4206ff9" satisfied condition "success or failure" May 19 13:23:59.009: INFO: Trying to get logs from node iruya-worker pod pod-secrets-31ce5251-e0ed-495d-8fee-0cabf4206ff9 container secret-volume-test: STEP: delete the pod May 19 13:23:59.151: INFO: Waiting for pod pod-secrets-31ce5251-e0ed-495d-8fee-0cabf4206ff9 to disappear May 19 13:23:59.207: INFO: Pod pod-secrets-31ce5251-e0ed-495d-8fee-0cabf4206ff9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:23:59.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8490" for this suite. 
May 19 13:24:05.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:24:05.426: INFO: namespace secrets-8490 deletion completed in 6.214976098s • [SLOW TEST:10.543 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:24:05.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 19 13:24:05.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6303' May 19 13:24:08.258: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a 
future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 19 13:24:08.258: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 19 13:24:08.280: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 19 13:24:08.390: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 19 13:24:08.406: INFO: scanned /root for discovery docs: May 19 13:24:08.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6303' May 19 13:24:24.254: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 19 13:24:24.254: INFO: stdout: "Created e2e-test-nginx-rc-819279bb68256c9d765a1eafa5556fc9\nScaling up e2e-test-nginx-rc-819279bb68256c9d765a1eafa5556fc9 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-819279bb68256c9d765a1eafa5556fc9 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-819279bb68256c9d765a1eafa5556fc9 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 19 13:24:24.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6303' May 19 13:24:24.366: INFO: stderr: "" May 19 13:24:24.366: INFO: stdout: "e2e-test-nginx-rc-819279bb68256c9d765a1eafa5556fc9-kqhfr " May 19 13:24:24.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-819279bb68256c9d765a1eafa5556fc9-kqhfr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6303' May 19 13:24:24.467: INFO: stderr: "" May 19 13:24:24.467: INFO: stdout: "true" May 19 13:24:24.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-819279bb68256c9d765a1eafa5556fc9-kqhfr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6303' May 19 13:24:24.562: INFO: stderr: "" May 19 13:24:24.562: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 19 13:24:24.562: INFO: e2e-test-nginx-rc-819279bb68256c9d765a1eafa5556fc9-kqhfr is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 May 19 13:24:24.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6303' May 19 13:24:24.652: INFO: stderr: "" May 19 13:24:24.652: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:24:24.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6303" for this suite. 
May 19 13:24:46.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:24:46.754: INFO: namespace kubectl-6303 deletion completed in 22.099201883s • [SLOW TEST:41.328 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:24:46.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-9295a1fd-bf7c-4b32-8461-f21032ea6be4 in namespace container-probe-3186 May 19 13:24:50.844: INFO: Started pod test-webserver-9295a1fd-bf7c-4b32-8461-f21032ea6be4 in namespace container-probe-3186 STEP: checking the pod's current state and verifying that restartCount is present May 19 13:24:50.847: INFO: Initial restart count of 
pod test-webserver-9295a1fd-bf7c-4b32-8461-f21032ea6be4 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:28:51.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3186" for this suite. May 19 13:28:57.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:28:57.761: INFO: namespace container-probe-3186 deletion completed in 6.13069689s • [SLOW TEST:251.006 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:28:57.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4884 
[It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 19 13:28:57.881: INFO: Found 0 stateful pods, waiting for 3 May 19 13:29:07.932: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 13:29:07.932: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 13:29:07.932: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 19 13:29:17.887: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 13:29:17.887: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 13:29:17.887: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 19 13:29:17.914: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 19 13:29:27.986: INFO: Updating stateful set ss2 May 19 13:29:28.013: INFO: Waiting for Pod statefulset-4884/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 19 13:29:38.541: INFO: Found 2 stateful pods, waiting for 3 May 19 13:29:48.547: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 13:29:48.547: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 13:29:48.547: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 19 
13:29:48.572: INFO: Updating stateful set ss2 May 19 13:29:48.587: INFO: Waiting for Pod statefulset-4884/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 19 13:29:58.596: INFO: Waiting for Pod statefulset-4884/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 19 13:30:08.611: INFO: Updating stateful set ss2 May 19 13:30:08.640: INFO: Waiting for StatefulSet statefulset-4884/ss2 to complete update May 19 13:30:08.640: INFO: Waiting for Pod statefulset-4884/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 19 13:30:18.648: INFO: Waiting for StatefulSet statefulset-4884/ss2 to complete update May 19 13:30:18.648: INFO: Waiting for Pod statefulset-4884/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 19 13:30:28.650: INFO: Deleting all statefulset in ns statefulset-4884 May 19 13:30:28.653: INFO: Scaling statefulset ss2 to 0 May 19 13:30:48.674: INFO: Waiting for statefulset status.replicas updated to 0 May 19 13:30:48.677: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:30:48.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4884" for this suite. 
May 19 13:30:54.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:30:54.783: INFO: namespace statefulset-4884 deletion completed in 6.086035259s • [SLOW TEST:117.022 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:30:54.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5964 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 
STEP: Creating stateful set ss in namespace statefulset-5964 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5964 May 19 13:30:54.877: INFO: Found 0 stateful pods, waiting for 1 May 19 13:31:04.924: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 19 13:31:04.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5964 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 13:31:05.146: INFO: stderr: "I0519 13:31:05.043610 1181 log.go:172] (0xc000116e70) (0xc00067e820) Create stream\nI0519 13:31:05.043656 1181 log.go:172] (0xc000116e70) (0xc00067e820) Stream added, broadcasting: 1\nI0519 13:31:05.045703 1181 log.go:172] (0xc000116e70) Reply frame received for 1\nI0519 13:31:05.045735 1181 log.go:172] (0xc000116e70) (0xc000994000) Create stream\nI0519 13:31:05.045748 1181 log.go:172] (0xc000116e70) (0xc000994000) Stream added, broadcasting: 3\nI0519 13:31:05.046512 1181 log.go:172] (0xc000116e70) Reply frame received for 3\nI0519 13:31:05.046552 1181 log.go:172] (0xc000116e70) (0xc00067e8c0) Create stream\nI0519 13:31:05.046564 1181 log.go:172] (0xc000116e70) (0xc00067e8c0) Stream added, broadcasting: 5\nI0519 13:31:05.047314 1181 log.go:172] (0xc000116e70) Reply frame received for 5\nI0519 13:31:05.103388 1181 log.go:172] (0xc000116e70) Data frame received for 5\nI0519 13:31:05.103425 1181 log.go:172] (0xc00067e8c0) (5) Data frame handling\nI0519 13:31:05.103449 1181 log.go:172] (0xc00067e8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0519 13:31:05.138739 1181 log.go:172] (0xc000116e70) Data frame received for 3\nI0519 13:31:05.138775 1181 log.go:172] (0xc000994000) (3) Data frame handling\nI0519 13:31:05.138788 1181 log.go:172] (0xc000994000) (3) Data frame sent\nI0519 13:31:05.139021 1181 
log.go:172] (0xc000116e70) Data frame received for 3\nI0519 13:31:05.139043 1181 log.go:172] (0xc000994000) (3) Data frame handling\nI0519 13:31:05.139253 1181 log.go:172] (0xc000116e70) Data frame received for 5\nI0519 13:31:05.139269 1181 log.go:172] (0xc00067e8c0) (5) Data frame handling\nI0519 13:31:05.141706 1181 log.go:172] (0xc000116e70) Data frame received for 1\nI0519 13:31:05.141729 1181 log.go:172] (0xc00067e820) (1) Data frame handling\nI0519 13:31:05.141741 1181 log.go:172] (0xc00067e820) (1) Data frame sent\nI0519 13:31:05.141753 1181 log.go:172] (0xc000116e70) (0xc00067e820) Stream removed, broadcasting: 1\nI0519 13:31:05.141822 1181 log.go:172] (0xc000116e70) Go away received\nI0519 13:31:05.142075 1181 log.go:172] (0xc000116e70) (0xc00067e820) Stream removed, broadcasting: 1\nI0519 13:31:05.142101 1181 log.go:172] (0xc000116e70) (0xc000994000) Stream removed, broadcasting: 3\nI0519 13:31:05.142115 1181 log.go:172] (0xc000116e70) (0xc00067e8c0) Stream removed, broadcasting: 5\n" May 19 13:31:05.147: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 13:31:05.147: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 13:31:05.150: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 19 13:31:15.155: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 13:31:15.156: INFO: Waiting for statefulset status.replicas updated to 0 May 19 13:31:15.169: INFO: POD NODE PHASE GRACE CONDITIONS May 19 13:31:15.169: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:30:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:05 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:30:54 +0000 UTC }] May 19 13:31:15.169: INFO: May 19 13:31:15.169: INFO: StatefulSet ss has not reached scale 3, at 1 May 19 13:31:16.248: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997225702s May 19 13:31:17.253: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.917492325s May 19 13:31:18.256: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.913378749s May 19 13:31:19.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.909491093s May 19 13:31:20.302: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.904109859s May 19 13:31:21.307: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.863732045s May 19 13:31:22.312: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.859097367s May 19 13:31:23.317: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.854483657s May 19 13:31:24.326: INFO: Verifying statefulset ss doesn't scale past 3 for another 848.940997ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5964 May 19 13:31:25.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5964 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 13:31:25.549: INFO: stderr: "I0519 13:31:25.466727 1202 log.go:172] (0xc00013a0b0) (0xc000680820) Create stream\nI0519 13:31:25.466778 1202 log.go:172] (0xc00013a0b0) (0xc000680820) Stream added, broadcasting: 1\nI0519 13:31:25.468848 1202 log.go:172] (0xc00013a0b0) Reply frame received for 1\nI0519 13:31:25.468880 1202 log.go:172] (0xc00013a0b0) (0xc000862000) Create stream\nI0519 13:31:25.468889 1202 log.go:172] (0xc00013a0b0) (0xc000862000) Stream added, broadcasting: 3\nI0519 13:31:25.469817 1202 log.go:172] (0xc00013a0b0) Reply frame received for 
3\nI0519 13:31:25.469882 1202 log.go:172] (0xc00013a0b0) (0xc0006808c0) Create stream\nI0519 13:31:25.469903 1202 log.go:172] (0xc00013a0b0) (0xc0006808c0) Stream added, broadcasting: 5\nI0519 13:31:25.470884 1202 log.go:172] (0xc00013a0b0) Reply frame received for 5\nI0519 13:31:25.542144 1202 log.go:172] (0xc00013a0b0) Data frame received for 5\nI0519 13:31:25.542186 1202 log.go:172] (0xc00013a0b0) Data frame received for 3\nI0519 13:31:25.542230 1202 log.go:172] (0xc000862000) (3) Data frame handling\nI0519 13:31:25.542257 1202 log.go:172] (0xc000862000) (3) Data frame sent\nI0519 13:31:25.542275 1202 log.go:172] (0xc00013a0b0) Data frame received for 3\nI0519 13:31:25.542290 1202 log.go:172] (0xc000862000) (3) Data frame handling\nI0519 13:31:25.542342 1202 log.go:172] (0xc0006808c0) (5) Data frame handling\nI0519 13:31:25.542375 1202 log.go:172] (0xc0006808c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0519 13:31:25.542944 1202 log.go:172] (0xc00013a0b0) Data frame received for 5\nI0519 13:31:25.542971 1202 log.go:172] (0xc0006808c0) (5) Data frame handling\nI0519 13:31:25.544517 1202 log.go:172] (0xc00013a0b0) Data frame received for 1\nI0519 13:31:25.544536 1202 log.go:172] (0xc000680820) (1) Data frame handling\nI0519 13:31:25.544557 1202 log.go:172] (0xc000680820) (1) Data frame sent\nI0519 13:31:25.544633 1202 log.go:172] (0xc00013a0b0) (0xc000680820) Stream removed, broadcasting: 1\nI0519 13:31:25.544682 1202 log.go:172] (0xc00013a0b0) Go away received\nI0519 13:31:25.545032 1202 log.go:172] (0xc00013a0b0) (0xc000680820) Stream removed, broadcasting: 1\nI0519 13:31:25.545053 1202 log.go:172] (0xc00013a0b0) (0xc000862000) Stream removed, broadcasting: 3\nI0519 13:31:25.545062 1202 log.go:172] (0xc00013a0b0) (0xc0006808c0) Stream removed, broadcasting: 5\n" May 19 13:31:25.549: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 13:31:25.549: INFO: stdout of mv -v /tmp/index.html 
/usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 13:31:25.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5964 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 13:31:25.781: INFO: stderr: "I0519 13:31:25.675496 1221 log.go:172] (0xc00012a6e0) (0xc0004ee6e0) Create stream\nI0519 13:31:25.675553 1221 log.go:172] (0xc00012a6e0) (0xc0004ee6e0) Stream added, broadcasting: 1\nI0519 13:31:25.685605 1221 log.go:172] (0xc00012a6e0) Reply frame received for 1\nI0519 13:31:25.685655 1221 log.go:172] (0xc00012a6e0) (0xc0002a0320) Create stream\nI0519 13:31:25.685671 1221 log.go:172] (0xc00012a6e0) (0xc0002a0320) Stream added, broadcasting: 3\nI0519 13:31:25.690537 1221 log.go:172] (0xc00012a6e0) Reply frame received for 3\nI0519 13:31:25.690555 1221 log.go:172] (0xc00012a6e0) (0xc0002a0460) Create stream\nI0519 13:31:25.690561 1221 log.go:172] (0xc00012a6e0) (0xc0002a0460) Stream added, broadcasting: 5\nI0519 13:31:25.691113 1221 log.go:172] (0xc00012a6e0) Reply frame received for 5\nI0519 13:31:25.770710 1221 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0519 13:31:25.770749 1221 log.go:172] (0xc0002a0460) (5) Data frame handling\nI0519 13:31:25.770779 1221 log.go:172] (0xc0002a0460) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0519 13:31:25.773664 1221 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0519 13:31:25.773713 1221 log.go:172] (0xc0002a0320) (3) Data frame handling\nI0519 13:31:25.773734 1221 log.go:172] (0xc0002a0320) (3) Data frame sent\nI0519 13:31:25.773749 1221 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0519 13:31:25.773765 1221 log.go:172] (0xc0002a0320) (3) Data frame handling\nI0519 13:31:25.773790 1221 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0519 13:31:25.773811 1221 log.go:172] (0xc0002a0460) (5) Data frame handling\nI0519 13:31:25.773838 1221 
log.go:172] (0xc0002a0460) (5) Data frame sent\nI0519 13:31:25.773851 1221 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0519 13:31:25.773860 1221 log.go:172] (0xc0002a0460) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0519 13:31:25.773879 1221 log.go:172] (0xc0002a0460) (5) Data frame sent\nI0519 13:31:25.774113 1221 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0519 13:31:25.774152 1221 log.go:172] (0xc0002a0460) (5) Data frame handling\nI0519 13:31:25.775585 1221 log.go:172] (0xc00012a6e0) Data frame received for 1\nI0519 13:31:25.775684 1221 log.go:172] (0xc0004ee6e0) (1) Data frame handling\nI0519 13:31:25.775711 1221 log.go:172] (0xc0004ee6e0) (1) Data frame sent\nI0519 13:31:25.775865 1221 log.go:172] (0xc00012a6e0) (0xc0004ee6e0) Stream removed, broadcasting: 1\nI0519 13:31:25.775899 1221 log.go:172] (0xc00012a6e0) Go away received\nI0519 13:31:25.776258 1221 log.go:172] (0xc00012a6e0) (0xc0004ee6e0) Stream removed, broadcasting: 1\nI0519 13:31:25.776277 1221 log.go:172] (0xc00012a6e0) (0xc0002a0320) Stream removed, broadcasting: 3\nI0519 13:31:25.776287 1221 log.go:172] (0xc00012a6e0) (0xc0002a0460) Stream removed, broadcasting: 5\n" May 19 13:31:25.781: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 13:31:25.781: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 13:31:25.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5964 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 13:31:26.006: INFO: stderr: "I0519 13:31:25.918876 1238 log.go:172] (0xc00013a790) (0xc0005c68c0) Create stream\nI0519 13:31:25.918932 1238 log.go:172] (0xc00013a790) (0xc0005c68c0) Stream added, broadcasting: 1\nI0519 13:31:25.926773 1238 log.go:172] (0xc00013a790) Reply frame received for 1\nI0519 
13:31:25.926815 1238 log.go:172] (0xc00013a790) (0xc000858000) Create stream\nI0519 13:31:25.926825 1238 log.go:172] (0xc00013a790) (0xc000858000) Stream added, broadcasting: 3\nI0519 13:31:25.927762 1238 log.go:172] (0xc00013a790) Reply frame received for 3\nI0519 13:31:25.927794 1238 log.go:172] (0xc00013a790) (0xc000856000) Create stream\nI0519 13:31:25.927809 1238 log.go:172] (0xc00013a790) (0xc000856000) Stream added, broadcasting: 5\nI0519 13:31:25.928643 1238 log.go:172] (0xc00013a790) Reply frame received for 5\nI0519 13:31:25.998305 1238 log.go:172] (0xc00013a790) Data frame received for 5\nI0519 13:31:25.998366 1238 log.go:172] (0xc000856000) (5) Data frame handling\nI0519 13:31:25.998383 1238 log.go:172] (0xc000856000) (5) Data frame sent\nI0519 13:31:25.998396 1238 log.go:172] (0xc00013a790) Data frame received for 5\nI0519 13:31:25.998407 1238 log.go:172] (0xc000856000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0519 13:31:25.998437 1238 log.go:172] (0xc00013a790) Data frame received for 3\nI0519 13:31:25.998449 1238 log.go:172] (0xc000858000) (3) Data frame handling\nI0519 13:31:25.998474 1238 log.go:172] (0xc000858000) (3) Data frame sent\nI0519 13:31:25.998505 1238 log.go:172] (0xc00013a790) Data frame received for 3\nI0519 13:31:25.998518 1238 log.go:172] (0xc000858000) (3) Data frame handling\nI0519 13:31:26.000083 1238 log.go:172] (0xc00013a790) Data frame received for 1\nI0519 13:31:26.000116 1238 log.go:172] (0xc0005c68c0) (1) Data frame handling\nI0519 13:31:26.000149 1238 log.go:172] (0xc0005c68c0) (1) Data frame sent\nI0519 13:31:26.000170 1238 log.go:172] (0xc00013a790) (0xc0005c68c0) Stream removed, broadcasting: 1\nI0519 13:31:26.000201 1238 log.go:172] (0xc00013a790) Go away received\nI0519 13:31:26.000546 1238 log.go:172] (0xc00013a790) (0xc0005c68c0) Stream removed, broadcasting: 1\nI0519 13:31:26.000562 1238 log.go:172] (0xc00013a790) 
(0xc000858000) Stream removed, broadcasting: 3\nI0519 13:31:26.000571 1238 log.go:172] (0xc00013a790) (0xc000856000) Stream removed, broadcasting: 5\n" May 19 13:31:26.006: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 13:31:26.006: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 13:31:26.010: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 19 13:31:26.010: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 19 13:31:26.010: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 19 13:31:26.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5964 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 13:31:26.208: INFO: stderr: "I0519 13:31:26.137818 1255 log.go:172] (0xc000a3c0b0) (0xc000abe3c0) Create stream\nI0519 13:31:26.137892 1255 log.go:172] (0xc000a3c0b0) (0xc000abe3c0) Stream added, broadcasting: 1\nI0519 13:31:26.141034 1255 log.go:172] (0xc000a3c0b0) Reply frame received for 1\nI0519 13:31:26.141082 1255 log.go:172] (0xc000a3c0b0) (0xc000acc000) Create stream\nI0519 13:31:26.141094 1255 log.go:172] (0xc000a3c0b0) (0xc000acc000) Stream added, broadcasting: 3\nI0519 13:31:26.142261 1255 log.go:172] (0xc000a3c0b0) Reply frame received for 3\nI0519 13:31:26.142284 1255 log.go:172] (0xc000a3c0b0) (0xc000acc0a0) Create stream\nI0519 13:31:26.142292 1255 log.go:172] (0xc000a3c0b0) (0xc000acc0a0) Stream added, broadcasting: 5\nI0519 13:31:26.143580 1255 log.go:172] (0xc000a3c0b0) Reply frame received for 5\nI0519 13:31:26.200943 1255 log.go:172] (0xc000a3c0b0) Data frame received for 3\nI0519 13:31:26.200981 1255 log.go:172] (0xc000acc000) (3) Data frame handling\nI0519 
13:31:26.200992 1255 log.go:172] (0xc000acc000) (3) Data frame sent\nI0519 13:31:26.200999 1255 log.go:172] (0xc000a3c0b0) Data frame received for 3\nI0519 13:31:26.201005 1255 log.go:172] (0xc000acc000) (3) Data frame handling\nI0519 13:31:26.201036 1255 log.go:172] (0xc000a3c0b0) Data frame received for 5\nI0519 13:31:26.201047 1255 log.go:172] (0xc000acc0a0) (5) Data frame handling\nI0519 13:31:26.201063 1255 log.go:172] (0xc000acc0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0519 13:31:26.201075 1255 log.go:172] (0xc000a3c0b0) Data frame received for 5\nI0519 13:31:26.201269 1255 log.go:172] (0xc000acc0a0) (5) Data frame handling\nI0519 13:31:26.202729 1255 log.go:172] (0xc000a3c0b0) Data frame received for 1\nI0519 13:31:26.202741 1255 log.go:172] (0xc000abe3c0) (1) Data frame handling\nI0519 13:31:26.202752 1255 log.go:172] (0xc000abe3c0) (1) Data frame sent\nI0519 13:31:26.202816 1255 log.go:172] (0xc000a3c0b0) (0xc000abe3c0) Stream removed, broadcasting: 1\nI0519 13:31:26.202952 1255 log.go:172] (0xc000a3c0b0) Go away received\nI0519 13:31:26.203209 1255 log.go:172] (0xc000a3c0b0) (0xc000abe3c0) Stream removed, broadcasting: 1\nI0519 13:31:26.203227 1255 log.go:172] (0xc000a3c0b0) (0xc000acc000) Stream removed, broadcasting: 3\nI0519 13:31:26.203238 1255 log.go:172] (0xc000a3c0b0) (0xc000acc0a0) Stream removed, broadcasting: 5\n" May 19 13:31:26.208: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 13:31:26.208: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 13:31:26.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5964 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 13:31:26.440: INFO: stderr: "I0519 13:31:26.335869 1274 log.go:172] (0xc0009e8370) (0xc000a76820) Create stream\nI0519 13:31:26.335951 1274 log.go:172] 
(0xc0009e8370) (0xc000a76820) Stream added, broadcasting: 1\nI0519 13:31:26.338925 1274 log.go:172] (0xc0009e8370) Reply frame received for 1\nI0519 13:31:26.338995 1274 log.go:172] (0xc0009e8370) (0xc0003f59a0) Create stream\nI0519 13:31:26.339025 1274 log.go:172] (0xc0009e8370) (0xc0003f59a0) Stream added, broadcasting: 3\nI0519 13:31:26.339952 1274 log.go:172] (0xc0009e8370) Reply frame received for 3\nI0519 13:31:26.339991 1274 log.go:172] (0xc0009e8370) (0xc000a768c0) Create stream\nI0519 13:31:26.340004 1274 log.go:172] (0xc0009e8370) (0xc000a768c0) Stream added, broadcasting: 5\nI0519 13:31:26.340952 1274 log.go:172] (0xc0009e8370) Reply frame received for 5\nI0519 13:31:26.400641 1274 log.go:172] (0xc0009e8370) Data frame received for 5\nI0519 13:31:26.400673 1274 log.go:172] (0xc000a768c0) (5) Data frame handling\nI0519 13:31:26.400697 1274 log.go:172] (0xc000a768c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0519 13:31:26.433029 1274 log.go:172] (0xc0009e8370) Data frame received for 3\nI0519 13:31:26.433087 1274 log.go:172] (0xc0003f59a0) (3) Data frame handling\nI0519 13:31:26.433345 1274 log.go:172] (0xc0003f59a0) (3) Data frame sent\nI0519 13:31:26.433376 1274 log.go:172] (0xc0009e8370) Data frame received for 3\nI0519 13:31:26.433398 1274 log.go:172] (0xc0003f59a0) (3) Data frame handling\nI0519 13:31:26.433779 1274 log.go:172] (0xc0009e8370) Data frame received for 5\nI0519 13:31:26.433806 1274 log.go:172] (0xc000a768c0) (5) Data frame handling\nI0519 13:31:26.435107 1274 log.go:172] (0xc0009e8370) Data frame received for 1\nI0519 13:31:26.435132 1274 log.go:172] (0xc000a76820) (1) Data frame handling\nI0519 13:31:26.435147 1274 log.go:172] (0xc000a76820) (1) Data frame sent\nI0519 13:31:26.435178 1274 log.go:172] (0xc0009e8370) (0xc000a76820) Stream removed, broadcasting: 1\nI0519 13:31:26.435244 1274 log.go:172] (0xc0009e8370) Go away received\nI0519 13:31:26.435495 1274 log.go:172] (0xc0009e8370) (0xc000a76820) Stream 
removed, broadcasting: 1\nI0519 13:31:26.435509 1274 log.go:172] (0xc0009e8370) (0xc0003f59a0) Stream removed, broadcasting: 3\nI0519 13:31:26.435514 1274 log.go:172] (0xc0009e8370) (0xc000a768c0) Stream removed, broadcasting: 5\n" May 19 13:31:26.440: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 13:31:26.441: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 13:31:26.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5964 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 13:31:26.678: INFO: stderr: "I0519 13:31:26.556307 1294 log.go:172] (0xc0007d6420) (0xc0008b1220) Create stream\nI0519 13:31:26.556362 1294 log.go:172] (0xc0007d6420) (0xc0008b1220) Stream added, broadcasting: 1\nI0519 13:31:26.558867 1294 log.go:172] (0xc0007d6420) Reply frame received for 1\nI0519 13:31:26.558913 1294 log.go:172] (0xc0007d6420) (0xc0007f4000) Create stream\nI0519 13:31:26.558935 1294 log.go:172] (0xc0007d6420) (0xc0007f4000) Stream added, broadcasting: 3\nI0519 13:31:26.559772 1294 log.go:172] (0xc0007d6420) Reply frame received for 3\nI0519 13:31:26.559814 1294 log.go:172] (0xc0007d6420) (0xc0008b12c0) Create stream\nI0519 13:31:26.559831 1294 log.go:172] (0xc0007d6420) (0xc0008b12c0) Stream added, broadcasting: 5\nI0519 13:31:26.560643 1294 log.go:172] (0xc0007d6420) Reply frame received for 5\nI0519 13:31:26.629982 1294 log.go:172] (0xc0007d6420) Data frame received for 5\nI0519 13:31:26.630010 1294 log.go:172] (0xc0008b12c0) (5) Data frame handling\nI0519 13:31:26.630025 1294 log.go:172] (0xc0008b12c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0519 13:31:26.672150 1294 log.go:172] (0xc0007d6420) Data frame received for 5\nI0519 13:31:26.672192 1294 log.go:172] (0xc0008b12c0) (5) Data frame handling\nI0519 13:31:26.672221 1294 log.go:172] 
(0xc0007d6420) Data frame received for 3\nI0519 13:31:26.672244 1294 log.go:172] (0xc0007f4000) (3) Data frame handling\nI0519 13:31:26.672272 1294 log.go:172] (0xc0007f4000) (3) Data frame sent\nI0519 13:31:26.672289 1294 log.go:172] (0xc0007d6420) Data frame received for 3\nI0519 13:31:26.672310 1294 log.go:172] (0xc0007f4000) (3) Data frame handling\nI0519 13:31:26.673923 1294 log.go:172] (0xc0007d6420) Data frame received for 1\nI0519 13:31:26.673950 1294 log.go:172] (0xc0008b1220) (1) Data frame handling\nI0519 13:31:26.673968 1294 log.go:172] (0xc0008b1220) (1) Data frame sent\nI0519 13:31:26.673990 1294 log.go:172] (0xc0007d6420) (0xc0008b1220) Stream removed, broadcasting: 1\nI0519 13:31:26.674010 1294 log.go:172] (0xc0007d6420) Go away received\nI0519 13:31:26.674418 1294 log.go:172] (0xc0007d6420) (0xc0008b1220) Stream removed, broadcasting: 1\nI0519 13:31:26.674448 1294 log.go:172] (0xc0007d6420) (0xc0007f4000) Stream removed, broadcasting: 3\nI0519 13:31:26.674460 1294 log.go:172] (0xc0007d6420) (0xc0008b12c0) Stream removed, broadcasting: 5\n" May 19 13:31:26.679: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 13:31:26.679: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 13:31:26.679: INFO: Waiting for statefulset status.replicas updated to 0 May 19 13:31:26.685: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 19 13:31:36.692: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 13:31:36.692: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 19 13:31:36.692: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 19 13:31:36.720: INFO: POD NODE PHASE GRACE CONDITIONS May 19 13:31:36.720: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-05-19 13:30:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:30:54 +0000 UTC }] May 19 13:31:36.720: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC }] May 19 13:31:36.720: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC }] May 19 13:31:36.720: INFO: May 19 13:31:36.720: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 13:31:37.726: INFO: POD NODE PHASE GRACE CONDITIONS May 19 13:31:37.726: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:30:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-05-19 13:30:54 +0000 UTC }] May 19 13:31:37.726: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC }] May 19 13:31:37.726: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC }] May 19 13:31:37.726: INFO: May 19 13:31:37.726: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 13:31:38.732: INFO: POD NODE PHASE GRACE CONDITIONS May 19 13:31:38.732: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:30:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:30:54 +0000 UTC }] May 19 13:31:38.732: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC }] May 19 13:31:38.732: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC }] May 19 13:31:38.732: INFO: May 19 13:31:38.732: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 13:31:39.738: INFO: POD NODE PHASE GRACE CONDITIONS May 19 13:31:39.738: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:30:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:30:54 +0000 UTC }] May 19 13:31:39.738: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC }] May 19 13:31:39.738: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-05-19 13:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC }] May 19 13:31:39.738: INFO: May 19 13:31:39.738: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 13:31:40.742: INFO: POD NODE PHASE GRACE CONDITIONS May 19 13:31:40.742: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC }] May 19 13:31:40.742: INFO: May 19 13:31:40.742: INFO: StatefulSet ss has not reached scale 0, at 1 May 19 13:31:41.747: INFO: POD NODE PHASE GRACE CONDITIONS May 19 13:31:41.747: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:31:15 +0000 UTC }] May 19 13:31:41.747: INFO: May 19 13:31:41.747: INFO: StatefulSet ss has not reached scale 0, at 1 May 19 13:31:42.751: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.96252517s May 19 13:31:43.755: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.959420091s May 19 13:31:44.760: INFO: Verifying statefulset ss doesn't scale past 
0 for another 1.954975152s May 19 13:31:45.764: INFO: Verifying statefulset ss doesn't scale past 0 for another 950.026881ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5964 May 19 13:31:46.768: INFO: Scaling statefulset ss to 0 May 19 13:31:46.777: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 19 13:31:46.779: INFO: Deleting all statefulset in ns statefulset-5964 May 19 13:31:46.782: INFO: Scaling statefulset ss to 0 May 19 13:31:46.790: INFO: Waiting for statefulset status.replicas updated to 0 May 19 13:31:46.811: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:31:46.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5964" for this suite. 
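Editor's note: the `mv -v ... || true` commands this test keeps running via `kubectl exec` are a readiness toggle — hiding `index.html` makes nginx's HTTP readiness probe fail, and the `|| true` keeps the command safe to repeat once the file is already gone. A minimal local sketch of that idempotence (no cluster needed; the temp directory stands in for the nginx docroot):

```shell
# Stand-in for /usr/share/nginx/html inside the pod.
tmp=$(mktemp -d)
mkdir -p "$tmp/html"
echo hello > "$tmp/html/index.html"

# First run: the file is moved away (readiness probe would now fail).
mv -v "$tmp/html/index.html" "$tmp/" || true
# Second run: mv itself fails ("No such file or directory"), but the
# "|| true" swallows the error so the overall exit status stays 0 --
# exactly what the log shows for ss-1 and ss-2 above.
mv -v "$tmp/html/index.html" "$tmp/" || true
echo "exit=$?"
```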
May 19 13:31:52.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:31:52.918: INFO: namespace statefulset-5964 deletion completed in 6.092025039s
• [SLOW TEST:58.134 seconds]
[sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:31:52.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9487.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9487.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 19 13:31:59.067: INFO: DNS probes using dns-9487/dns-test-d69b6bce-b525-4dc3-a96b-fdf197b8c427 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:31:59.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9487" for this suite.
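The probe scripts above build each pod's DNS A-record name from its IP with an awk one-liner (dots become dashes, then the namespace and `pod.cluster.local` suffix are appended). A minimal Python sketch of that same transformation; the helper name is ours, not part of the e2e framework:

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Build the pod A-record name the dig probes query:
    dots in the pod IP become dashes, then the namespace and
    the pod.cluster.local suffix are appended."""
    return pod_ip.replace(".", "-") + "." + namespace + ".pod.cluster.local"

# e.g. a pod at 10.244.1.5 in namespace dns-9487 is resolvable as:
print(pod_a_record("10.244.1.5", "dns-9487"))
# 10-244-1-5.dns-9487.pod.cluster.local
```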
May 19 13:32:05.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:32:05.308: INFO: namespace dns-9487 deletion completed in 6.183060919s
• [SLOW TEST:12.389 seconds]
[sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:32:05.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 19 13:32:05.412: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:05.439: INFO: Number of nodes with available pods: 0
May 19 13:32:05.439: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:06.445: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:06.506: INFO: Number of nodes with available pods: 0
May 19 13:32:06.506: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:07.444: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:07.447: INFO: Number of nodes with available pods: 0
May 19 13:32:07.447: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:08.458: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:08.463: INFO: Number of nodes with available pods: 0
May 19 13:32:08.463: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:09.451: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:09.493: INFO: Number of nodes with available pods: 1
May 19 13:32:09.493: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:10.445: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:10.449: INFO: Number of nodes with available pods: 2
May 19 13:32:10.449: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 19 13:32:10.507: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:10.510: INFO: Number of nodes with available pods: 1
May 19 13:32:10.510: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:11.519: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:11.523: INFO: Number of nodes with available pods: 1
May 19 13:32:11.523: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:12.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:12.520: INFO: Number of nodes with available pods: 1
May 19 13:32:12.520: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:13.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:13.518: INFO: Number of nodes with available pods: 1
May 19 13:32:13.518: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:14.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:14.518: INFO: Number of nodes with available pods: 1
May 19 13:32:14.518: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:15.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:15.520: INFO: Number of nodes with available pods: 1
May 19 13:32:15.520: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:16.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:16.521: INFO: Number of nodes with available pods: 1
May 19 13:32:16.521: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:17.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:17.518: INFO: Number of nodes with available pods: 1
May 19 13:32:17.518: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:18.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:18.520: INFO: Number of nodes with available pods: 1
May 19 13:32:18.520: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:19.525: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:19.528: INFO: Number of nodes with available pods: 1
May 19 13:32:19.528: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:20.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:20.519: INFO: Number of nodes with available pods: 1
May 19 13:32:20.519: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:21.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:21.518: INFO: Number of nodes with available pods: 1
May 19 13:32:21.518: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:22.514: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:22.517: INFO: Number of nodes with available pods: 1
May 19 13:32:22.517: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:23.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:23.520: INFO: Number of nodes with available pods: 1
May 19 13:32:23.520: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:24.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:24.520: INFO: Number of nodes with available pods: 1
May 19 13:32:24.520: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:25.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:25.520: INFO: Number of nodes with available pods: 1
May 19 13:32:25.520: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:32:26.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:32:26.519: INFO: Number of nodes with available pods: 2
May 19 13:32:26.519: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7871, will wait for the garbage collector to delete the pods
May 19 13:32:26.591: INFO: Deleting DaemonSet.extensions daemon-set took: 15.555478ms
May 19 13:32:26.891: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.28539ms
May 19 13:32:31.895: INFO: Number of nodes with available pods: 0
May 19 13:32:31.895: INFO: Number of running nodes: 0, number of available pods: 0
May 19 13:32:31.896: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7871/daemonsets","resourceVersion":"11756469"},"items":null}
May 19 13:32:31.898: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7871/pods","resourceVersion":"11756469"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:32:31.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7871" for this suite.
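The repeated "DaemonSet pods can't tolerate node iruya-control-plane" messages appear because the test DaemonSet's pod template carries no toleration for the control-plane taint, so the framework skips that node when counting expected daemon pods. A hedged sketch of the pod-template fragment that would allow scheduling there (the key and effect match the taint shown in the log; everything else is elided):

```yaml
# Hypothetical pod-template fragment, not part of the e2e DaemonSet:
# tolerate the {Key:node-role.kubernetes.io/master Effect:NoSchedule}
# taint reported for iruya-control-plane above.
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
```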
May 19 13:32:37.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:32:38.030: INFO: namespace daemonsets-7871 deletion completed in 6.121702438s
• [SLOW TEST:32.722 seconds]
[sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:32:38.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7287.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7287.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7287.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7287.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 19 13:32:44.122: INFO: DNS probes using dns-test-7e9ae253-53d3-4d1b-b665-b6a2a059aee9 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7287.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7287.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7287.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7287.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 19 13:32:52.242: INFO: File wheezy_udp@dns-test-service-3.dns-7287.svc.cluster.local from pod dns-7287/dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 19 13:32:52.245: INFO: File jessie_udp@dns-test-service-3.dns-7287.svc.cluster.local from pod dns-7287/dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 19 13:32:52.245: INFO: Lookups using dns-7287/dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 failed for: [wheezy_udp@dns-test-service-3.dns-7287.svc.cluster.local jessie_udp@dns-test-service-3.dns-7287.svc.cluster.local]
May 19 13:32:57.250: INFO: File wheezy_udp@dns-test-service-3.dns-7287.svc.cluster.local from pod dns-7287/dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 19 13:32:57.253: INFO: File jessie_udp@dns-test-service-3.dns-7287.svc.cluster.local from pod dns-7287/dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 19 13:32:57.253: INFO: Lookups using dns-7287/dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 failed for: [wheezy_udp@dns-test-service-3.dns-7287.svc.cluster.local jessie_udp@dns-test-service-3.dns-7287.svc.cluster.local]
May 19 13:33:02.250: INFO: File wheezy_udp@dns-test-service-3.dns-7287.svc.cluster.local from pod dns-7287/dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 19 13:33:02.255: INFO: File jessie_udp@dns-test-service-3.dns-7287.svc.cluster.local from pod dns-7287/dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 19 13:33:02.255: INFO: Lookups using dns-7287/dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 failed for: [wheezy_udp@dns-test-service-3.dns-7287.svc.cluster.local jessie_udp@dns-test-service-3.dns-7287.svc.cluster.local]
May 19 13:33:07.252: INFO: File wheezy_udp@dns-test-service-3.dns-7287.svc.cluster.local from pod dns-7287/dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 19 13:33:07.255: INFO: File jessie_udp@dns-test-service-3.dns-7287.svc.cluster.local from pod dns-7287/dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 19 13:33:07.255: INFO: Lookups using dns-7287/dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 failed for: [wheezy_udp@dns-test-service-3.dns-7287.svc.cluster.local jessie_udp@dns-test-service-3.dns-7287.svc.cluster.local]
May 19 13:33:12.251: INFO: File wheezy_udp@dns-test-service-3.dns-7287.svc.cluster.local from pod dns-7287/dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 19 13:33:12.255: INFO: File jessie_udp@dns-test-service-3.dns-7287.svc.cluster.local from pod dns-7287/dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 19 13:33:12.255: INFO: Lookups using dns-7287/dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 failed for: [wheezy_udp@dns-test-service-3.dns-7287.svc.cluster.local jessie_udp@dns-test-service-3.dns-7287.svc.cluster.local]
May 19 13:33:17.252: INFO: DNS probes using dns-test-89016917-0fd2-4e30-af0e-6cd579dec5c7 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7287.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7287.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7287.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7287.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 19 13:33:25.473: INFO: DNS probes using dns-test-dc244b4b-b6c8-4272-ba0f-22c6265ddae2 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:33:25.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7287" for this suite.
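The test drives this behavior by mutating a Service of type ExternalName, whose CNAME target is what the dig probes observe. A hedged sketch of what the initial object plausibly looks like; the name, namespace, and externalName values are taken from the log, and the rest of the spec is our assumption:

```yaml
# Sketch of the test's ExternalName service (fields inferred from the log).
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-7287
spec:
  type: ExternalName
  externalName: foo.example.com   # the test later patches this to bar.example.com,
                                  # then converts the service to type=ClusterIP
```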
May 19 13:33:31.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:33:31.849: INFO: namespace dns-7287 deletion completed in 6.220818904s
• [SLOW TEST:53.818 seconds]
[sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:33:31.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-32270c55-f354-4237-98c2-e7f00e8d2072
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:33:38.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3655" for this suite.
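The "binary data should be reflected in volume" check exercises the ConfigMap `binaryData` field, which carries arbitrary bytes base64-encoded alongside plain-text `data` entries. A small sketch of preparing both fields; the key names and payload are ours, not the test's:

```python
import base64

# ConfigMaps hold UTF-8 text under .data and arbitrary bytes under
# .binaryData, where the API expects base64. Sketch with made-up keys:
raw = bytes([0xFF, 0xFE, 0x00, 0x0D])  # a payload that is not valid UTF-8
configmap_fields = {
    "data": {"text-key": "hello"},
    "binaryData": {"binary-key": base64.b64encode(raw).decode("ascii")},
}
print(configmap_fields["binaryData"]["binary-key"])
```

When such a ConfigMap is mounted as a volume, the kubelet writes the decoded bytes to the file, which is what the test verifies from inside the pod.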
May 19 13:34:00.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:34:00.144: INFO: namespace configmap-3655 deletion completed in 22.123648514s
• [SLOW TEST:28.294 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:34:00.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
May 19 13:34:00.217: INFO: Waiting up to 5m0s for pod "pod-d2dfe951-6556-4a11-8bc8-413e7e64e883" in namespace "emptydir-8099" to be "success or failure"
May 19 13:34:00.234: INFO: Pod "pod-d2dfe951-6556-4a11-8bc8-413e7e64e883": Phase="Pending", Reason="", readiness=false. Elapsed: 16.466966ms
May 19 13:34:02.323: INFO: Pod "pod-d2dfe951-6556-4a11-8bc8-413e7e64e883": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105272489s
May 19 13:34:04.327: INFO: Pod "pod-d2dfe951-6556-4a11-8bc8-413e7e64e883": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1095978s
STEP: Saw pod success
May 19 13:34:04.327: INFO: Pod "pod-d2dfe951-6556-4a11-8bc8-413e7e64e883" satisfied condition "success or failure"
May 19 13:34:04.330: INFO: Trying to get logs from node iruya-worker pod pod-d2dfe951-6556-4a11-8bc8-413e7e64e883 container test-container:
STEP: delete the pod
May 19 13:34:04.422: INFO: Waiting for pod pod-d2dfe951-6556-4a11-8bc8-413e7e64e883 to disappear
May 19 13:34:04.466: INFO: Pod pod-d2dfe951-6556-4a11-8bc8-413e7e64e883 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:34:04.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8099" for this suite.
May 19 13:34:10.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:34:10.612: INFO: namespace emptydir-8099 deletion completed in 6.141883825s
• [SLOW TEST:10.468 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:34:10.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 19 13:34:10.676: INFO: Creating deployment "nginx-deployment" May 19 13:34:10.690: INFO: Waiting for observed generation 1 May 19 13:34:12.712: INFO: Waiting for all required pods to come up May 19 13:34:12.717: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 19 13:34:24.726: INFO: Waiting for deployment "nginx-deployment" to complete May 19 13:34:24.732: INFO: Updating deployment "nginx-deployment" with a non-existent image May 19 13:34:24.739: INFO: Updating deployment nginx-deployment May 19 13:34:24.739: INFO: Waiting for observed generation 2 May 19 13:34:26.751: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 19 13:34:26.754: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 19 13:34:26.756: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 19 13:34:26.763: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 19 13:34:26.763: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 19 13:34:26.765: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 19 13:34:26.769: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 19 13:34:26.769: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 19 13:34:26.775: INFO: Updating deployment nginx-deployment May 19 13:34:26.775: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 19 13:34:26.895: INFO: Verifying that first rollout's 
replicaset has .spec.replicas = 20 May 19 13:34:26.915: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 19 13:34:27.084: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-9005,SelfLink:/apis/apps/v1/namespaces/deployment-9005/deployments/nginx-deployment,UID:5055682a-5415-4205-966a-8ae6230a4f65,ResourceVersion:11757100,Generation:3,CreationTimestamp:2020-05-19 13:34:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-19 13:34:25 +0000 UTC 2020-05-19 13:34:10 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-05-19 13:34:26 +0000 UTC 2020-05-19 13:34:26 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 19 13:34:27.210: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-9005,SelfLink:/apis/apps/v1/namespaces/deployment-9005/replicasets/nginx-deployment-55fb7cb77f,UID:93d27028-c09e-4146-820a-4a144f1bcf6c,ResourceVersion:11757116,Generation:3,CreationTimestamp:2020-05-19 13:34:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 5055682a-5415-4205-966a-8ae6230a4f65 0xc002f6a027 0xc002f6a028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 19 13:34:27.210: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 19 13:34:27.210: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-9005,SelfLink:/apis/apps/v1/namespaces/deployment-9005/replicasets/nginx-deployment-7b8c6f4498,UID:19c33e53-9caf-4b02-80d8-0bf7d3178afb,ResourceVersion:11757099,Generation:3,CreationTimestamp:2020-05-19 13:34:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 5055682a-5415-4205-966a-8ae6230a4f65 0xc002f6a0f7 0xc002f6a0f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 19 13:34:27.291: INFO: Pod "nginx-deployment-55fb7cb77f-2wcgs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2wcgs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-55fb7cb77f-2wcgs,UID:50eb093a-3eca-4295-88ca-51bb62d0d9b7,ResourceVersion:11757105,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 93d27028-c09e-4146-820a-4a144f1bcf6c 0xc002f6b427 0xc002f6b428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002f6b510} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f6b530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.291: INFO: Pod "nginx-deployment-55fb7cb77f-7sq96" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7sq96,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-55fb7cb77f-7sq96,UID:1ebf3292-d9fd-430f-9540-8e282b2ea472,ResourceVersion:11757052,Generation:0,CreationTimestamp:2020-05-19 13:34:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 93d27028-c09e-4146-820a-4a144f1bcf6c 0xc002f6b697 0xc002f6b698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f6b7f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f6b860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-19 13:34:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.291: INFO: Pod "nginx-deployment-55fb7cb77f-bsvs4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bsvs4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-55fb7cb77f-bsvs4,UID:918ad87b-7e02-4c69-acd2-9aedae71cb21,ResourceVersion:11757107,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 93d27028-c09e-4146-820a-4a144f1bcf6c 0xc002f6b9d7 0xc002f6b9d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002f6bb00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f6bb20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.291: INFO: Pod "nginx-deployment-55fb7cb77f-fj8rt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fj8rt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-55fb7cb77f-fj8rt,UID:4760e3fb-d462-493a-8998-8bb9091f95f9,ResourceVersion:11757038,Generation:0,CreationTimestamp:2020-05-19 13:34:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 93d27028-c09e-4146-820a-4a144f1bcf6c 0xc002f6bc87 0xc002f6bc88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f6bdb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f6bdd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:24 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-19 13:34:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.291: INFO: Pod "nginx-deployment-55fb7cb77f-k8nll" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-k8nll,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-55fb7cb77f-k8nll,UID:04605c48-21f7-48e0-b410-3454d392ceb8,ResourceVersion:11757123,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 93d27028-c09e-4146-820a-4a144f1bcf6c 0xc002010097 0xc002010098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002010110} {node.kubernetes.io/unreachable Exists NoExecute 0xc002010130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-19 13:34:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.291: INFO: Pod "nginx-deployment-55fb7cb77f-m6h4f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m6h4f,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-55fb7cb77f-m6h4f,UID:18affba2-ed12-47c4-8a0c-bb4440fd106c,ResourceVersion:11757104,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 93d27028-c09e-4146-820a-4a144f1bcf6c 0xc002010207 0xc002010208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002010280} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020102a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.292: INFO: Pod "nginx-deployment-55fb7cb77f-mprrp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mprrp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-55fb7cb77f-mprrp,UID:31eff332-f51c-413e-9a5c-25c045f67bf5,ResourceVersion:11757032,Generation:0,CreationTimestamp:2020-05-19 13:34:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 93d27028-c09e-4146-820a-4a144f1bcf6c 0xc002010327 0xc002010328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0020103a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020103c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:24 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-19 13:34:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.292: INFO: Pod "nginx-deployment-55fb7cb77f-pbfl4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pbfl4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-55fb7cb77f-pbfl4,UID:44d23788-83c2-4a5a-8606-d0d5066219b5,ResourceVersion:11757112,Generation:0,CreationTimestamp:2020-05-19 13:34:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 93d27028-c09e-4146-820a-4a144f1bcf6c 0xc002010497 0xc002010498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002010510} {node.kubernetes.io/unreachable Exists NoExecute 0xc002010530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.292: INFO: Pod "nginx-deployment-55fb7cb77f-qmkpp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qmkpp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-55fb7cb77f-qmkpp,UID:0f274739-8b0b-4d4d-afad-1b767c5098e6,ResourceVersion:11757093,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 93d27028-c09e-4146-820a-4a144f1bcf6c 0xc0020105b7 0xc0020105b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002010630} {node.kubernetes.io/unreachable Exists NoExecute 0xc002010650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:26 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.292: INFO: Pod "nginx-deployment-55fb7cb77f-sb95z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sb95z,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-55fb7cb77f-sb95z,UID:b6a25696-d2c0-4990-856e-6c54673dd3fa,ResourceVersion:11757101,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 93d27028-c09e-4146-820a-4a144f1bcf6c 0xc0020106d7 0xc0020106d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002010750} {node.kubernetes.io/unreachable Exists NoExecute 0xc002010770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.292: INFO: Pod "nginx-deployment-55fb7cb77f-tmf4w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tmf4w,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-55fb7cb77f-tmf4w,UID:63c70aee-ff82-4250-896f-c565023e2600,ResourceVersion:11757021,Generation:0,CreationTimestamp:2020-05-19 13:34:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 93d27028-c09e-4146-820a-4a144f1bcf6c 0xc0020107f7 0xc0020107f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002010870} {node.kubernetes.io/unreachable Exists NoExecute 0xc002010890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:24 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-19 13:34:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.292: INFO: Pod "nginx-deployment-55fb7cb77f-vlgvn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vlgvn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-55fb7cb77f-vlgvn,UID:038242de-cd60-498e-b233-1681d19f9b31,ResourceVersion:11757048,Generation:0,CreationTimestamp:2020-05-19 13:34:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 93d27028-c09e-4146-820a-4a144f1bcf6c 0xc002010967 0xc002010968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020109e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002010a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:24 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-19 13:34:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.292: INFO: Pod "nginx-deployment-55fb7cb77f-wxjwl" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wxjwl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-55fb7cb77f-wxjwl,UID:dacfe7f2-4730-4e04-9b4b-12e9df4e4db8,ResourceVersion:11757081,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 93d27028-c09e-4146-820a-4a144f1bcf6c 0xc002010ad7 0xc002010ad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002010b50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002010b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:26 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.292: INFO: Pod "nginx-deployment-7b8c6f4498-2brmx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2brmx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-2brmx,UID:08532ff9-c9fd-47d3-b875-c7709ef79dff,ResourceVersion:11756965,Generation:0,CreationTimestamp:2020-05-19 13:34:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc002010bf7 0xc002010bf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} 
false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002010c70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002010c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.53,StartTime:2020-05-19 13:34:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 13:34:18 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://904d22a91aab14589d92df09bc3001b76c2505b40450b4f91bf6ba73a5c0ad81}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.293: INFO: Pod "nginx-deployment-7b8c6f4498-4j25w" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4j25w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-4j25w,UID:fffb1f7f-2f79-4a42-95e5-82e3b574d87b,ResourceVersion:11756929,Generation:0,CreationTimestamp:2020-05-19 13:34:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc002010d67 0xc002010d68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002010de0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002010e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.173,StartTime:2020-05-19 13:34:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 13:34:14 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://07918ee73b9a672cf4e3da1746af7ead3815144847c62cdfd9494502d96b1364}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.293: INFO: Pod "nginx-deployment-7b8c6f4498-5mk6g" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5mk6g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-5mk6g,UID:9a339f2e-edcf-445f-8acf-06e2db1103e3,ResourceVersion:11756971,Generation:0,CreationTimestamp:2020-05-19 13:34:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc002010ed7 0xc002010ed8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002010f50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002010f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.176,StartTime:2020-05-19 13:34:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 13:34:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3f79faabcf3d9619d13cb1137dae1b1cb48365a04d97a6209fa97202116dcc07}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.293: INFO: Pod "nginx-deployment-7b8c6f4498-6b98q" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6b98q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-6b98q,UID:2524824a-1a94-4621-8de6-8b62def7c9ca,ResourceVersion:11756953,Generation:0,CreationTimestamp:2020-05-19 13:34:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc002011047 0xc002011048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020110c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020110e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.52,StartTime:2020-05-19 13:34:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 13:34:18 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://05bea2e36d533eaa43a5443b2ff685361ebd649baa284605be3fa4545d47020f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.293: INFO: Pod "nginx-deployment-7b8c6f4498-7n4pt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7n4pt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-7n4pt,UID:9dc9a101-b994-4de7-99c3-13a21080ce0e,ResourceVersion:11757086,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc0020111b7 0xc0020111b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002011230} {node.kubernetes.io/unreachable Exists NoExecute 0xc002011250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:26 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.293: INFO: Pod "nginx-deployment-7b8c6f4498-bgcgc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bgcgc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-bgcgc,UID:c17bc54f-6038-427e-bab7-f6af20f74efe,ResourceVersion:11757095,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc0020112d7 0xc0020112d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002011350} {node.kubernetes.io/unreachable Exists NoExecute 0xc002011370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:26 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.293: INFO: Pod "nginx-deployment-7b8c6f4498-f8kp6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f8kp6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-f8kp6,UID:c6d81d58-6479-46d1-a049-c254a0fb8b93,ResourceVersion:11757110,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc0020113f7 0xc0020113f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002011470} {node.kubernetes.io/unreachable Exists NoExecute 0xc002011490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-19 13:34:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.293: INFO: Pod "nginx-deployment-7b8c6f4498-hzjwx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hzjwx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-hzjwx,UID:8ac67313-7af4-49c3-a1c3-fc976cfef626,ResourceVersion:11757096,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc002011557 0xc002011558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020115d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020115f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.294: INFO: Pod "nginx-deployment-7b8c6f4498-jb7qv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jb7qv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-jb7qv,UID:7ff02a32-ba4b-47cd-ac4f-bd9d4ca02eca,ResourceVersion:11757103,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc002011677 0xc002011678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020116f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002011710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.294: INFO: Pod "nginx-deployment-7b8c6f4498-jdpp8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jdpp8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-jdpp8,UID:9203e07b-3b48-4bd3-8ba5-4d5e60057506,ResourceVersion:11757073,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc002011797 0xc002011798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002011810} {node.kubernetes.io/unreachable Exists NoExecute 0xc002011830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:26 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.294: INFO: Pod "nginx-deployment-7b8c6f4498-khk7g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-khk7g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-khk7g,UID:a0080008-fd24-4933-a40f-1f535bcd8aa7,ResourceVersion:11757102,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc0020118b7 0xc0020118b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002011930} {node.kubernetes.io/unreachable Exists NoExecute 0xc002011950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.294: INFO: Pod "nginx-deployment-7b8c6f4498-kpwfb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kpwfb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-kpwfb,UID:87a41cd7-807a-4a4b-8807-db3815863aec,ResourceVersion:11757092,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc0020119d7 0xc0020119d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002011a50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002011a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:26 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.294: INFO: Pod "nginx-deployment-7b8c6f4498-mtmbg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mtmbg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-mtmbg,UID:7cee54df-3165-4226-b850-f1e5f659c8f4,ResourceVersion:11756979,Generation:0,CreationTimestamp:2020-05-19 13:34:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc002011af7 0xc002011af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002011b70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002011b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.54,StartTime:2020-05-19 13:34:11 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-05-19 13:34:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c1c5f5948fa4f349073e25f415903f29b73973ea432c4c05623250d4f299e0b3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.294: INFO: Pod "nginx-deployment-7b8c6f4498-np4x4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-np4x4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-np4x4,UID:0a361b54-90c9-4ead-a992-6a10f4971fd0,ResourceVersion:11757108,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc002011c67 0xc002011c68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002011ce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002011d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.294: INFO: Pod "nginx-deployment-7b8c6f4498-q2phr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q2phr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-q2phr,UID:f30bebc4-a3d8-42d6-beb1-1d877095d2de,ResourceVersion:11757098,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc002011e47 0xc002011e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002011f40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002011f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.295: INFO: Pod "nginx-deployment-7b8c6f4498-tg7hs" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tg7hs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-tg7hs,UID:57079820-b6b4-4772-9400-d1df9a42a71c,ResourceVersion:11756990,Generation:0,CreationTimestamp:2020-05-19 13:34:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc002011fe7 0xc002011fe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023de060} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023de080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.55,StartTime:2020-05-19 13:34:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 13:34:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://adeb6938676c8ae00074d7e503f2704ae9923771366b59d45261a1f3924f58f9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.295: INFO: Pod "nginx-deployment-7b8c6f4498-wjcwc" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wjcwc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-wjcwc,UID:09e5ae9f-872d-4b74-9502-f1883db5b88d,ResourceVersion:11756942,Generation:0,CreationTimestamp:2020-05-19 13:34:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc0023de157 0xc0023de158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023de1d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023de1f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.174,StartTime:2020-05-19 13:34:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 13:34:17 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://21d3543eb0d05cfef20b012af1dc323d8183f67112cd1db5b1f055714efa6e64}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.295: INFO: Pod "nginx-deployment-7b8c6f4498-xkkc9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xkkc9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-xkkc9,UID:f872ed57-e376-40b4-bc84-651fdcebb684,ResourceVersion:11757094,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc0023de2c7 0xc0023de2c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023de340} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023de360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:26 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.295: INFO: Pod "nginx-deployment-7b8c6f4498-ztr7m" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ztr7m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-ztr7m,UID:255829b9-ab94-4edd-bec6-255de0f9a8d9,ResourceVersion:11756951,Generation:0,CreationTimestamp:2020-05-19 13:34:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc0023de3e7 0xc0023de3e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023de460} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023de480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.175,StartTime:2020-05-19 13:34:11 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-05-19 13:34:18 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5739b2bb843d7bca4e2ee5bf8cf3d3c5e33152be36dfa00b79c3ebbcac06b29f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:34:27.296: INFO: Pod "nginx-deployment-7b8c6f4498-zzf5l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zzf5l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9005,SelfLink:/api/v1/namespaces/deployment-9005/pods/nginx-deployment-7b8c6f4498-zzf5l,UID:50612e88-ccfd-44e3-9cb7-663febca8b2b,ResourceVersion:11757117,Generation:0,CreationTimestamp:2020-05-19 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19c33e53-9caf-4b02-80d8-0bf7d3178afb 0xc0023de557 0xc0023de558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cf78h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cf78h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cf78h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023de5e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023de600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:34:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-19 13:34:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:34:27.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9005" for this suite. 
May 19 13:34:47.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:34:47.508: INFO: namespace deployment-9005 deletion completed in 20.152683845s • [SLOW TEST:36.895 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:34:47.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 19 13:34:47.910: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-configmap-a,UID:3620ac3a-416b-4b6e-964a-206318ba2357,ResourceVersion:11757414,Generation:0,CreationTimestamp:2020-05-19 13:34:47 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 19 13:34:47.911: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-configmap-a,UID:3620ac3a-416b-4b6e-964a-206318ba2357,ResourceVersion:11757414,Generation:0,CreationTimestamp:2020-05-19 13:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 19 13:34:57.920: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-configmap-a,UID:3620ac3a-416b-4b6e-964a-206318ba2357,ResourceVersion:11757433,Generation:0,CreationTimestamp:2020-05-19 13:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 19 13:34:57.920: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-configmap-a,UID:3620ac3a-416b-4b6e-964a-206318ba2357,ResourceVersion:11757433,Generation:0,CreationTimestamp:2020-05-19 13:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 19 13:35:07.929: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-configmap-a,UID:3620ac3a-416b-4b6e-964a-206318ba2357,ResourceVersion:11757453,Generation:0,CreationTimestamp:2020-05-19 13:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 19 13:35:07.929: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-configmap-a,UID:3620ac3a-416b-4b6e-964a-206318ba2357,ResourceVersion:11757453,Generation:0,CreationTimestamp:2020-05-19 13:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 19 13:35:17.938: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-configmap-a,UID:3620ac3a-416b-4b6e-964a-206318ba2357,ResourceVersion:11757473,Generation:0,CreationTimestamp:2020-05-19 13:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 19 13:35:17.938: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-configmap-a,UID:3620ac3a-416b-4b6e-964a-206318ba2357,ResourceVersion:11757473,Generation:0,CreationTimestamp:2020-05-19 13:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 19 13:35:27.947: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-configmap-b,UID:55421acb-bc9e-4ef9-b421-b1b2b001d4a2,ResourceVersion:11757496,Generation:0,CreationTimestamp:2020-05-19 13:35:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 19 13:35:27.947: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-configmap-b,UID:55421acb-bc9e-4ef9-b421-b1b2b001d4a2,ResourceVersion:11757496,Generation:0,CreationTimestamp:2020-05-19 13:35:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 19 13:35:37.954: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-configmap-b,UID:55421acb-bc9e-4ef9-b421-b1b2b001d4a2,ResourceVersion:11757516,Generation:0,CreationTimestamp:2020-05-19 13:35:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 19 13:35:37.954: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9982,SelfLink:/api/v1/namespaces/watch-9982/configmaps/e2e-watch-test-configmap-b,UID:55421acb-bc9e-4ef9-b421-b1b2b001d4a2,ResourceVersion:11757516,Generation:0,CreationTimestamp:2020-05-19 13:35:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:35:47.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9982" for this suite. 
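Annotation (not part of the log): each ADDED/MODIFIED/DELETED event above is logged twice because the test opens three watches — label A, label B, and "A or B" — and every configmap matches exactly two of them. A sketch of that label-selector routing (illustrative Python, not the test's Go code):

```python
def matching_watchers(labels, watchers):
    """Return the watchers whose selector accepts the object's labels."""
    return [name for name, selector in watchers.items() if selector(labels)]

watchers = {
    "watch-A": lambda l: l.get("watch-this-configmap") == "multiple-watchers-A",
    "watch-B": lambda l: l.get("watch-this-configmap") == "multiple-watchers-B",
    "watch-A-or-B": lambda l: l.get("watch-this-configmap")
                     in ("multiple-watchers-A", "multiple-watchers-B"),
}

# e2e-watch-test-configmap-a matches watch-A and watch-A-or-B,
# hence the two identical "Got : ADDED" lines in the log above.
print(matching_watchers({"watch-this-configmap": "multiple-watchers-A"}, watchers))
# ['watch-A', 'watch-A-or-B']
```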
May 19 13:35:53.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:35:54.049: INFO: namespace watch-9982 deletion completed in 6.089357012s • [SLOW TEST:66.541 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:35:54.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 19 13:35:54.159: INFO: Waiting up to 5m0s for pod "downward-api-0142d984-4d12-4aed-9add-af440a555d1d" in namespace "downward-api-8527" to be "success or failure" May 19 13:35:54.189: INFO: Pod "downward-api-0142d984-4d12-4aed-9add-af440a555d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 29.419005ms May 19 13:35:56.192: INFO: Pod "downward-api-0142d984-4d12-4aed-9add-af440a555d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03269287s May 19 13:35:58.196: INFO: Pod "downward-api-0142d984-4d12-4aed-9add-af440a555d1d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037091429s STEP: Saw pod success May 19 13:35:58.196: INFO: Pod "downward-api-0142d984-4d12-4aed-9add-af440a555d1d" satisfied condition "success or failure" May 19 13:35:58.200: INFO: Trying to get logs from node iruya-worker pod downward-api-0142d984-4d12-4aed-9add-af440a555d1d container dapi-container: STEP: delete the pod May 19 13:35:58.357: INFO: Waiting for pod downward-api-0142d984-4d12-4aed-9add-af440a555d1d to disappear May 19 13:35:58.540: INFO: Pod downward-api-0142d984-4d12-4aed-9add-af440a555d1d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:35:58.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8527" for this suite. May 19 13:36:04.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:36:04.654: INFO: namespace downward-api-8527 deletion completed in 6.108164388s • [SLOW TEST:10.605 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:36:04.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:36:08.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1031" for this suite. May 19 13:36:14.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:36:14.873: INFO: namespace kubelet-test-1031 deletion completed in 6.089781953s • [SLOW TEST:10.219 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:36:14.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-7723 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7723 to expose endpoints map[] May 19 13:36:15.018: INFO: Get endpoints failed (2.826498ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 19 13:36:16.022: INFO: successfully validated that service endpoint-test2 in namespace services-7723 exposes endpoints map[] (1.006959099s elapsed) STEP: Creating pod pod1 in namespace services-7723 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7723 to expose endpoints map[pod1:[80]] May 19 13:36:19.124: INFO: successfully validated that service endpoint-test2 in namespace services-7723 exposes endpoints map[pod1:[80]] (3.094924545s elapsed) STEP: Creating pod pod2 in namespace services-7723 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7723 to expose endpoints map[pod1:[80] pod2:[80]] May 19 13:36:23.340: INFO: successfully validated that service endpoint-test2 in namespace services-7723 exposes endpoints map[pod1:[80] pod2:[80]] (4.212493373s elapsed) STEP: Deleting pod pod1 in namespace services-7723 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7723 to expose endpoints map[pod2:[80]] May 19 13:36:24.389: INFO: successfully validated that service endpoint-test2 in namespace services-7723 exposes endpoints map[pod2:[80]] (1.045473074s elapsed) STEP: Deleting pod pod2 in namespace services-7723 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7723 to expose endpoints map[] May 19 13:36:25.424: INFO: successfully validated that service 
endpoint-test2 in namespace services-7723 exposes endpoints map[] (1.029022193s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:36:25.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7723" for this suite. May 19 13:36:47.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:36:47.523: INFO: namespace services-7723 deletion completed in 22.06987732s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:32.649 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:36:47.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 19 13:36:47.581: INFO: Waiting up to 5m0s for pod "pod-1bcbf606-f18d-4cfb-8ae5-9fbc76e159dc" in namespace 
"emptydir-6498" to be "success or failure" May 19 13:36:47.585: INFO: Pod "pod-1bcbf606-f18d-4cfb-8ae5-9fbc76e159dc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.955485ms May 19 13:36:49.589: INFO: Pod "pod-1bcbf606-f18d-4cfb-8ae5-9fbc76e159dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007451206s May 19 13:36:51.593: INFO: Pod "pod-1bcbf606-f18d-4cfb-8ae5-9fbc76e159dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012085144s STEP: Saw pod success May 19 13:36:51.593: INFO: Pod "pod-1bcbf606-f18d-4cfb-8ae5-9fbc76e159dc" satisfied condition "success or failure" May 19 13:36:51.596: INFO: Trying to get logs from node iruya-worker pod pod-1bcbf606-f18d-4cfb-8ae5-9fbc76e159dc container test-container: STEP: delete the pod May 19 13:36:51.646: INFO: Waiting for pod pod-1bcbf606-f18d-4cfb-8ae5-9fbc76e159dc to disappear May 19 13:36:51.657: INFO: Pod pod-1bcbf606-f18d-4cfb-8ae5-9fbc76e159dc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:36:51.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6498" for this suite. 
May 19 13:36:57.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:36:57.732: INFO: namespace emptydir-6498 deletion completed in 6.072146228s • [SLOW TEST:10.209 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:36:57.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium May 19 13:36:57.819: INFO: Waiting up to 5m0s for pod "pod-6a83e15a-22d2-484c-839a-ee8af9e2a1ec" in namespace "emptydir-7343" to be "success or failure" May 19 13:36:57.824: INFO: Pod "pod-6a83e15a-22d2-484c-839a-ee8af9e2a1ec": Phase="Pending", Reason="", readiness=false. Elapsed: 5.343349ms May 19 13:36:59.827: INFO: Pod "pod-6a83e15a-22d2-484c-839a-ee8af9e2a1ec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008652868s May 19 13:37:01.853: INFO: Pod "pod-6a83e15a-22d2-484c-839a-ee8af9e2a1ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034783221s STEP: Saw pod success May 19 13:37:01.853: INFO: Pod "pod-6a83e15a-22d2-484c-839a-ee8af9e2a1ec" satisfied condition "success or failure" May 19 13:37:01.856: INFO: Trying to get logs from node iruya-worker pod pod-6a83e15a-22d2-484c-839a-ee8af9e2a1ec container test-container: STEP: delete the pod May 19 13:37:01.909: INFO: Waiting for pod pod-6a83e15a-22d2-484c-839a-ee8af9e2a1ec to disappear May 19 13:37:01.932: INFO: Pod pod-6a83e15a-22d2-484c-839a-ee8af9e2a1ec no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:37:01.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7343" for this suite. May 19 13:37:07.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:37:08.020: INFO: namespace emptydir-7343 deletion completed in 6.081640776s • [SLOW TEST:10.287 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:37:08.020: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 19 13:37:34.110: INFO: Container started at 2020-05-19 13:37:10 +0000 UTC, pod became ready at 2020-05-19 13:37:33 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:37:34.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2484" for this suite. May 19 13:37:56.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:37:56.220: INFO: namespace container-probe-2484 deletion completed in 22.105552761s • [SLOW TEST:48.200 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
STEP: Creating a kubernetes client May 19 13:37:56.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 19 13:37:56.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3735' May 19 13:37:59.224: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 19 13:37:59.224: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 May 19 13:37:59.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-3735' May 19 13:37:59.356: INFO: stderr: "" May 19 13:37:59.356: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:37:59.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3735" for this suite. 
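Annotation (not part of the log): the deprecation warning in the stderr above refers to kubectl's old generator machinery, where `--restart` selected which resource `kubectl run` created. A sketch of that (since-removed) mapping — behavior as documented for kubectl of this era, simplified:

```python
def resource_for_restart_policy(restart):
    """Map kubectl run --restart=... to the kind the old generators created."""
    return {"Always": "Deployment", "OnFailure": "Job", "Never": "Pod"}[restart]

# --restart=OnFailure with --generator=job/v1, as in the log, creates a Job.
print(resource_for_restart_policy("OnFailure"))  # Job
```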
May 19 13:38:05.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:38:05.448: INFO: namespace kubectl-3735 deletion completed in 6.088022031s • [SLOW TEST:9.227 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:38:05.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-e0812aa6-24db-4c3c-82ca-067988c0cd6b STEP: Creating a pod to test consume configMaps May 19 13:38:05.571: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a3bd505e-b7cf-45b5-8f7e-d62c1afcee5f" in namespace "projected-6658" to be "success or failure" May 19 13:38:05.574: INFO: Pod "pod-projected-configmaps-a3bd505e-b7cf-45b5-8f7e-d62c1afcee5f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.593448ms May 19 13:38:07.608: INFO: Pod "pod-projected-configmaps-a3bd505e-b7cf-45b5-8f7e-d62c1afcee5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037157975s May 19 13:38:09.616: INFO: Pod "pod-projected-configmaps-a3bd505e-b7cf-45b5-8f7e-d62c1afcee5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045068369s STEP: Saw pod success May 19 13:38:09.616: INFO: Pod "pod-projected-configmaps-a3bd505e-b7cf-45b5-8f7e-d62c1afcee5f" satisfied condition "success or failure" May 19 13:38:09.619: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-a3bd505e-b7cf-45b5-8f7e-d62c1afcee5f container projected-configmap-volume-test: STEP: delete the pod May 19 13:38:09.786: INFO: Waiting for pod pod-projected-configmaps-a3bd505e-b7cf-45b5-8f7e-d62c1afcee5f to disappear May 19 13:38:09.790: INFO: Pod pod-projected-configmaps-a3bd505e-b7cf-45b5-8f7e-d62c1afcee5f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:38:09.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6658" for this suite. 
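Annotation (not part of the log): the "consumable in multiple volumes in the same pod" case mounts one configMap through two projected volumes. An illustrative pod-spec fragment as a Python dict (names are hypothetical; the actual spec is not shown in the log):

```python
# One configMap projected into two volumes of the same pod.
pod_spec = {
    "volumes": [
        {"name": "projected-volume-1",
         "projected": {"sources": [{"configMap": {"name": "shared-configmap"}}]}},
        {"name": "projected-volume-2",
         "projected": {"sources": [{"configMap": {"name": "shared-configmap"}}]}},
    ],
    "containers": [{
        "name": "projected-configmap-volume-test",
        "volumeMounts": [
            {"name": "projected-volume-1", "mountPath": "/etc/projected-1"},
            {"name": "projected-volume-2", "mountPath": "/etc/projected-2"},
        ],
    }],
}

# Both volumes resolve to the same backing configMap.
names = {v["projected"]["sources"][0]["configMap"]["name"]
         for v in pod_spec["volumes"]}
print(names)  # {'shared-configmap'}
```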
May 19 13:38:15.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:38:15.879: INFO: namespace projected-6658 deletion completed in 6.085465564s
• [SLOW TEST:10.431 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:38:15.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 19 13:38:15.988: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8572712-aeea-45b9-aacb-db6181861d59" in namespace "projected-4706" to be "success or failure"
May 19 13:38:15.994: INFO: Pod "downwardapi-volume-b8572712-aeea-45b9-aacb-db6181861d59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414759ms
May 19 13:38:17.998: INFO: Pod "downwardapi-volume-b8572712-aeea-45b9-aacb-db6181861d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010741689s
May 19 13:38:20.003: INFO: Pod "downwardapi-volume-b8572712-aeea-45b9-aacb-db6181861d59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015015936s
STEP: Saw pod success
May 19 13:38:20.003: INFO: Pod "downwardapi-volume-b8572712-aeea-45b9-aacb-db6181861d59" satisfied condition "success or failure"
May 19 13:38:20.006: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b8572712-aeea-45b9-aacb-db6181861d59 container client-container:
STEP: delete the pod
May 19 13:38:20.026: INFO: Waiting for pod downwardapi-volume-b8572712-aeea-45b9-aacb-db6181861d59 to disappear
May 19 13:38:20.099: INFO: Pod downwardapi-volume-b8572712-aeea-45b9-aacb-db6181861d59 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:38:20.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4706" for this suite.
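The downward API test above checks that a per-item `mode` is applied to a file in a projected volume. A minimal manifest sketching the idea (the mode value, paths, and names are illustrative; the e2e framework generates its own):

```yaml
# Hypothetical sketch: downward API item projected with an explicit file mode.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mode-test   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the file's permission bits so they can be verified from the logs.
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400              # the per-item mode under test (illustrative value)
            fieldRef:
              fieldPath: metadata.name
```

The `mode` on the item overrides any volume-wide `defaultMode` for that one file, which is exactly what the test asserts by reading the file's permissions from inside the container.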
May 19 13:38:26.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:38:26.184: INFO: namespace projected-4706 deletion completed in 6.080075457s • [SLOW TEST:10.304 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:38:26.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 19 13:38:26.458: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8456,SelfLink:/api/v1/namespaces/watch-8456/configmaps/e2e-watch-test-label-changed,UID:250027f7-fd37-4aa7-bc42-d083b0cd50ef,ResourceVersion:11758095,Generation:0,CreationTimestamp:2020-05-19 13:38:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 19 13:38:26.458: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8456,SelfLink:/api/v1/namespaces/watch-8456/configmaps/e2e-watch-test-label-changed,UID:250027f7-fd37-4aa7-bc42-d083b0cd50ef,ResourceVersion:11758096,Generation:0,CreationTimestamp:2020-05-19 13:38:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 19 13:38:26.459: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8456,SelfLink:/api/v1/namespaces/watch-8456/configmaps/e2e-watch-test-label-changed,UID:250027f7-fd37-4aa7-bc42-d083b0cd50ef,ResourceVersion:11758097,Generation:0,CreationTimestamp:2020-05-19 13:38:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 19 13:38:36.487: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8456,SelfLink:/api/v1/namespaces/watch-8456/configmaps/e2e-watch-test-label-changed,UID:250027f7-fd37-4aa7-bc42-d083b0cd50ef,ResourceVersion:11758118,Generation:0,CreationTimestamp:2020-05-19 13:38:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 19 13:38:36.487: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8456,SelfLink:/api/v1/namespaces/watch-8456/configmaps/e2e-watch-test-label-changed,UID:250027f7-fd37-4aa7-bc42-d083b0cd50ef,ResourceVersion:11758119,Generation:0,CreationTimestamp:2020-05-19 13:38:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 19 13:38:36.487: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8456,SelfLink:/api/v1/namespaces/watch-8456/configmaps/e2e-watch-test-label-changed,UID:250027f7-fd37-4aa7-bc42-d083b0cd50ef,ResourceVersion:11758120,Generation:0,CreationTimestamp:2020-05-19 13:38:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:38:36.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8456" for this suite. May 19 13:38:42.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:38:42.614: INFO: namespace watch-8456 deletion completed in 6.12180412s • [SLOW TEST:16.429 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 
13:38:42.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-d3a107f4-bd8d-453d-829a-6a2c0713e738 STEP: Creating a pod to test consume secrets May 19 13:38:42.702: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-42b0cfb7-f869-4356-866c-27ce9ca1891f" in namespace "projected-9386" to be "success or failure" May 19 13:38:42.757: INFO: Pod "pod-projected-secrets-42b0cfb7-f869-4356-866c-27ce9ca1891f": Phase="Pending", Reason="", readiness=false. Elapsed: 54.860573ms May 19 13:38:44.776: INFO: Pod "pod-projected-secrets-42b0cfb7-f869-4356-866c-27ce9ca1891f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074219446s May 19 13:38:46.781: INFO: Pod "pod-projected-secrets-42b0cfb7-f869-4356-866c-27ce9ca1891f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.078663304s STEP: Saw pod success May 19 13:38:46.781: INFO: Pod "pod-projected-secrets-42b0cfb7-f869-4356-866c-27ce9ca1891f" satisfied condition "success or failure" May 19 13:38:46.784: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-42b0cfb7-f869-4356-866c-27ce9ca1891f container projected-secret-volume-test: STEP: delete the pod May 19 13:38:46.804: INFO: Waiting for pod pod-projected-secrets-42b0cfb7-f869-4356-866c-27ce9ca1891f to disappear May 19 13:38:46.852: INFO: Pod pod-projected-secrets-42b0cfb7-f869-4356-866c-27ce9ca1891f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:38:46.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9386" for this suite. May 19 13:38:52.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:38:52.962: INFO: namespace projected-9386 deletion completed in 6.105590607s • [SLOW TEST:10.348 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:38:52.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod May 19 13:38:57.030: INFO: Pod pod-hostip-0019aefd-216f-4c4d-b337-25726fb85352 has hostIP: 172.17.0.6 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:38:57.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6600" for this suite. May 19 13:39:19.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:39:19.121: INFO: namespace pods-6600 deletion completed in 22.088545413s • [SLOW TEST:26.159 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:39:19.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service 
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9393.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9393.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9393.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9393.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9393.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9393.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9393.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9393.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9393.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 15.113.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.113.15_udp@PTR;check="$$(dig +tcp +noall +answer +search 15.113.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.113.15_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9393.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9393.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9393.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9393.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9393.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9393.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9393.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9393.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9393.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9393.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 15.113.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.113.15_udp@PTR;check="$$(dig +tcp +noall +answer +search 15.113.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.113.15_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 13:39:27.442: INFO: Unable to read wheezy_udp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:27.445: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:27.447: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:27.449: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:27.465: INFO: Unable to read jessie_udp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:27.467: INFO: Unable to read jessie_tcp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:27.469: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod 
dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:27.471: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:27.482: INFO: Lookups using dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c failed for: [wheezy_udp@dns-test-service.dns-9393.svc.cluster.local wheezy_tcp@dns-test-service.dns-9393.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local jessie_udp@dns-test-service.dns-9393.svc.cluster.local jessie_tcp@dns-test-service.dns-9393.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local] May 19 13:39:32.488: INFO: Unable to read wheezy_udp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:32.492: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:32.496: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:32.499: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod 
dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:32.518: INFO: Unable to read jessie_udp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:32.521: INFO: Unable to read jessie_tcp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:32.524: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:32.526: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:32.542: INFO: Lookups using dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c failed for: [wheezy_udp@dns-test-service.dns-9393.svc.cluster.local wheezy_tcp@dns-test-service.dns-9393.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local jessie_udp@dns-test-service.dns-9393.svc.cluster.local jessie_tcp@dns-test-service.dns-9393.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local] May 19 13:39:37.487: INFO: Unable to read wheezy_udp@dns-test-service.dns-9393.svc.cluster.local from pod 
dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:37.490: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:37.493: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:37.496: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:37.512: INFO: Unable to read jessie_udp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:37.514: INFO: Unable to read jessie_tcp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:37.517: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:37.519: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not 
find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:37.530: INFO: Lookups using dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c failed for: [wheezy_udp@dns-test-service.dns-9393.svc.cluster.local wheezy_tcp@dns-test-service.dns-9393.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local jessie_udp@dns-test-service.dns-9393.svc.cluster.local jessie_tcp@dns-test-service.dns-9393.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local] May 19 13:39:42.488: INFO: Unable to read wheezy_udp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:42.492: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:42.495: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:42.499: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:42.520: INFO: Unable to read jessie_udp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods 
dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:42.523: INFO: Unable to read jessie_tcp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:42.525: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:42.528: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:42.546: INFO: Lookups using dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c failed for: [wheezy_udp@dns-test-service.dns-9393.svc.cluster.local wheezy_tcp@dns-test-service.dns-9393.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local jessie_udp@dns-test-service.dns-9393.svc.cluster.local jessie_tcp@dns-test-service.dns-9393.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local] May 19 13:39:47.496: INFO: Unable to read wheezy_udp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:47.499: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods 
dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:47.502: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:47.504: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:47.524: INFO: Unable to read jessie_udp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:47.526: INFO: Unable to read jessie_tcp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:47.527: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:47.529: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:47.539: INFO: Lookups using dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c failed for: [wheezy_udp@dns-test-service.dns-9393.svc.cluster.local wheezy_tcp@dns-test-service.dns-9393.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local jessie_udp@dns-test-service.dns-9393.svc.cluster.local jessie_tcp@dns-test-service.dns-9393.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local] May 19 13:39:52.488: INFO: Unable to read wheezy_udp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:52.492: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:52.494: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:52.497: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:52.518: INFO: Unable to read jessie_udp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:52.520: INFO: Unable to read jessie_tcp@dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:52.523: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:52.526: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local from pod dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c: the server could not find the requested resource (get pods dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c) May 19 13:39:52.544: INFO: Lookups using dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c failed for: [wheezy_udp@dns-test-service.dns-9393.svc.cluster.local wheezy_tcp@dns-test-service.dns-9393.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local jessie_udp@dns-test-service.dns-9393.svc.cluster.local jessie_tcp@dns-test-service.dns-9393.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9393.svc.cluster.local] May 19 13:39:57.582: INFO: DNS probes using dns-9393/dns-test-0c5a0fbd-39ef-4b8a-8047-42a1f06e944c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:39:58.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9393" for this suite. 
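The probe keys reported above follow a fixed pattern: two client images ("wheezy" and "jessie") each resolve the service FQDN and its `_http._tcp` SRV form over both UDP and TCP, giving `<image>_<proto>@<fqdn>`. As a minimal sketch (an illustrative reconstruction from the log's key names, not the actual e2e helper code), the eight lookup keys can be generated like this:

```python
# Hypothetical reconstruction of the DNS conformance test's probe keys,
# inferred from the log output above. Key format: <image>_<proto>@<fqdn>.
def probe_keys(service, namespace, images=("wheezy", "jessie"), protos=("udp", "tcp")):
    fqdns = [
        f"{service}.{namespace}.svc.cluster.local",            # plain service name
        f"_http._tcp.{service}.{namespace}.svc.cluster.local",  # SRV-style name
    ]
    # Ordering matches the log: all wheezy keys first, then all jessie keys.
    return [f"{img}_{proto}@{fqdn}" for img in images for fqdn in fqdns for proto in protos]
```

For `probe_keys("dns-test-service", "dns-9393")` this reproduces exactly the eight names listed in each "Lookups ... failed for:" line.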
May 19 13:40:04.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:40:04.486: INFO: namespace dns-9393 deletion completed in 6.128855157s • [SLOW TEST:45.364 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:40:04.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 19 13:40:04.551: INFO: Creating deployment "test-recreate-deployment" May 19 13:40:04.566: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 19 13:40:04.578: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 19 13:40:06.585: INFO: Waiting deployment "test-recreate-deployment" to complete May 19 13:40:06.587: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725492404, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725492404, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725492404, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725492404, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 13:40:08.591: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 19 13:40:08.598: INFO: Updating deployment test-recreate-deployment May 19 13:40:08.598: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 19 13:40:09.024: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-905,SelfLink:/apis/apps/v1/namespaces/deployment-905/deployments/test-recreate-deployment,UID:bafba74c-3823-4ad4-96d6-b5bf4353509c,ResourceVersion:11758450,Generation:2,CreationTimestamp:2020-05-19 13:40:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-19 13:40:08 +0000 UTC 2020-05-19 13:40:08 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-19 13:40:08 +0000 UTC 2020-05-19 13:40:04 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 19 13:40:09.087: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-905,SelfLink:/apis/apps/v1/namespaces/deployment-905/replicasets/test-recreate-deployment-5c8c9cc69d,UID:2fe8b247-8062-4162-be13-892d33802ff5,ResourceVersion:11758447,Generation:1,CreationTimestamp:2020-05-19 13:40:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment bafba74c-3823-4ad4-96d6-b5bf4353509c 0xc0030369f7 0xc0030369f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 19 13:40:09.087: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 19 13:40:09.087: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-905,SelfLink:/apis/apps/v1/namespaces/deployment-905/replicasets/test-recreate-deployment-6df85df6b9,UID:c252018d-8b85-4808-a443-d3a2a1883652,ResourceVersion:11758439,Generation:2,CreationTimestamp:2020-05-19 13:40:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment bafba74c-3823-4ad4-96d6-b5bf4353509c 0xc003036ac7 0xc003036ac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 19 13:40:09.226: INFO: Pod "test-recreate-deployment-5c8c9cc69d-jm2w2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-jm2w2,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-905,SelfLink:/api/v1/namespaces/deployment-905/pods/test-recreate-deployment-5c8c9cc69d-jm2w2,UID:923e86bd-e1a3-49bd-954e-c5ad9d4ac47f,ResourceVersion:11758452,Generation:0,CreationTimestamp:2020-05-19 13:40:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 2fe8b247-8062-4162-be13-892d33802ff5 0xc003037397 0xc003037398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j78h8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j78h8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-j78h8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003037410} {node.kubernetes.io/unreachable Exists NoExecute 0xc003037430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:40:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:40:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:40:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:40:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-19 13:40:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:40:09.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-905" for this suite. 
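The object dumps above show the invariant this test checks: with `Strategy:DeploymentStrategy{Type:Recreate,...}`, the old ReplicaSet (`test-recreate-deployment-6df85df6b9`) is scaled to `Replicas:*0` before the new ReplicaSet (`test-recreate-deployment-5c8c9cc69d`) brings up its pod. A minimal sketch of that ordering invariant (an illustrative model, not the real deployment controller) looks like:

```python
# Toy model of the Recreate rollout order: delete every old pod first,
# only then create pods for the new ReplicaSet. (Illustrative sketch,
# not Kubernetes controller code.)
def recreate_rollout(old_pods, new_pods):
    events = []
    for p in old_pods:
        events.append(("delete", p))   # scale old ReplicaSet to zero first
    for p in new_pods:
        events.append(("create", p))   # then create new ReplicaSet's pods
    return events

def never_overlap(events, old_pods):
    """True if no new pod is created while any old pod is still live."""
    live_old = set(old_pods)
    for action, pod in events:
        if action == "delete":
            live_old.discard(pod)
        elif action == "create" and live_old:
            return False               # overlap: old pod still running
    return True
```

This is the property "new pods will not run with old pods" that the test watches for; a RollingUpdate strategy would deliberately violate it.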
May 19 13:40:15.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:40:15.319: INFO: namespace deployment-905 deletion completed in 6.090112418s • [SLOW TEST:10.832 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:40:15.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 19 13:40:15.600: INFO: Waiting up to 5m0s for pod "pod-707c7f5f-d257-4a6e-bced-bdaf035e676c" in namespace "emptydir-3305" to be "success or failure" May 19 13:40:15.627: INFO: Pod "pod-707c7f5f-d257-4a6e-bced-bdaf035e676c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.13278ms May 19 13:40:17.630: INFO: Pod "pod-707c7f5f-d257-4a6e-bced-bdaf035e676c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.029894601s May 19 13:40:19.637: INFO: Pod "pod-707c7f5f-d257-4a6e-bced-bdaf035e676c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036414004s STEP: Saw pod success May 19 13:40:19.637: INFO: Pod "pod-707c7f5f-d257-4a6e-bced-bdaf035e676c" satisfied condition "success or failure" May 19 13:40:19.643: INFO: Trying to get logs from node iruya-worker2 pod pod-707c7f5f-d257-4a6e-bced-bdaf035e676c container test-container: STEP: delete the pod May 19 13:40:19.662: INFO: Waiting for pod pod-707c7f5f-d257-4a6e-bced-bdaf035e676c to disappear May 19 13:40:19.668: INFO: Pod pod-707c7f5f-d257-4a6e-bced-bdaf035e676c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:40:19.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3305" for this suite. May 19 13:40:25.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:40:25.772: INFO: namespace emptydir-3305 deletion completed in 6.101090658s • [SLOW TEST:10.451 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:40:25.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:40:31.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4395" for this suite. May 19 13:40:37.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:40:37.601: INFO: namespace watch-4395 deletion completed in 6.170535989s • [SLOW TEST:11.828 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:40:37.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with 
defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-5e37feef-96ac-41d5-9eb1-7d768e0912fa STEP: Creating a pod to test consume secrets May 19 13:40:37.786: INFO: Waiting up to 5m0s for pod "pod-secrets-a6e5b148-786c-4c50-9a42-95587f862976" in namespace "secrets-7620" to be "success or failure" May 19 13:40:37.799: INFO: Pod "pod-secrets-a6e5b148-786c-4c50-9a42-95587f862976": Phase="Pending", Reason="", readiness=false. Elapsed: 13.120499ms May 19 13:40:39.850: INFO: Pod "pod-secrets-a6e5b148-786c-4c50-9a42-95587f862976": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064360044s May 19 13:40:41.855: INFO: Pod "pod-secrets-a6e5b148-786c-4c50-9a42-95587f862976": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068715521s STEP: Saw pod success May 19 13:40:41.855: INFO: Pod "pod-secrets-a6e5b148-786c-4c50-9a42-95587f862976" satisfied condition "success or failure" May 19 13:40:41.858: INFO: Trying to get logs from node iruya-worker pod pod-secrets-a6e5b148-786c-4c50-9a42-95587f862976 container secret-volume-test: STEP: delete the pod May 19 13:40:42.069: INFO: Waiting for pod pod-secrets-a6e5b148-786c-4c50-9a42-95587f862976 to disappear May 19 13:40:42.155: INFO: Pod pod-secrets-a6e5b148-786c-4c50-9a42-95587f862976 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:40:42.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7620" for this suite. 
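A note on the `DefaultMode:*420` values that appear in the volume dumps elsewhere in this log: the Kubernetes API serializes file modes as decimal integers, so 420 is the familiar octal 0644. Tests like this one, which set a non-default mode together with an fsGroup, rely on that conversion. A quick sanity check:

```python
# Kubernetes serializes volume file modes in decimal; 420 decimal is 0o644
# (owner rw, group/other r). A manifest author writing 0o440 (group-readable
# only, a common choice with fsGroup) would see it dumped as decimal 288.
assert oct(420) == "0o644"
assert 0o440 == 288
```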
May 19 13:40:48.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:40:48.259: INFO: namespace secrets-7620 deletion completed in 6.100156714s • [SLOW TEST:10.657 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:40:48.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 19 13:40:48.389: INFO: Pod name pod-release: Found 0 pods out of 1 May 19 13:40:53.393: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:40:54.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"replication-controller-9232" for this suite. May 19 13:41:00.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:41:00.560: INFO: namespace replication-controller-9232 deletion completed in 6.147868005s • [SLOW TEST:12.300 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:41:00.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 19 13:41:08.826: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 13:41:08.851: INFO: Pod pod-with-prestop-http-hook still exists May 19 13:41:10.851: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 13:41:10.855: INFO: Pod pod-with-prestop-http-hook still exists May 19 13:41:12.851: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 13:41:12.860: INFO: Pod pod-with-prestop-http-hook still exists May 19 13:41:14.851: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 13:41:14.855: INFO: Pod pod-with-prestop-http-hook still exists May 19 13:41:16.851: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 13:41:16.855: INFO: Pod pod-with-prestop-http-hook still exists May 19 13:41:18.851: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 13:41:18.855: INFO: Pod pod-with-prestop-http-hook still exists May 19 13:41:20.851: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 13:41:20.855: INFO: Pod pod-with-prestop-http-hook still exists May 19 13:41:22.851: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 13:41:22.855: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:41:22.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6142" for this suite. 
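The "Waiting for pod ... to disappear / still exists" lines above come from a fixed-interval polling loop: after deleting the pod, the framework re-checks roughly every 2 seconds until the pod is gone or a timeout expires (here the preStop HTTP hook delays termination by several poll cycles). A hedged sketch of that loop, assuming a caller-supplied `pod_exists` probe rather than a real API client:

```python
import time

# Sketch of the framework's wait-for-disappear poll (assumption: pod_exists
# is a hypothetical caller-supplied probe; the real code queries the API
# server). Polls every `interval` seconds until gone or `timeout` elapses.
def wait_for_disappear(pod_exists, interval=2.0, timeout=60.0, sleep=time.sleep):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not pod_exists():
            return True        # "Pod ... no longer exists"
        sleep(interval)        # "Pod ... still exists" — poll again
    return False               # timed out waiting for deletion
```

Injecting `sleep` makes the loop testable without real delays; the production equivalent simply blocks between polls.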
May 19 13:41:44.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:41:45.000: INFO: namespace container-lifecycle-hook-6142 deletion completed in 22.089786741s • [SLOW TEST:44.440 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:41:45.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-a8a0f470-c663-47c4-8bc2-e706538e493e STEP: Creating a pod to test consume configMaps May 19 13:41:45.071: INFO: Waiting up to 5m0s for pod "pod-configmaps-bf89e1b7-2b86-47a0-a1ae-debff8116033" in namespace "configmap-6606" to be "success or failure" May 19 13:41:45.110: INFO: Pod "pod-configmaps-bf89e1b7-2b86-47a0-a1ae-debff8116033": 
Phase="Pending", Reason="", readiness=false. Elapsed: 39.364537ms May 19 13:41:47.114: INFO: Pod "pod-configmaps-bf89e1b7-2b86-47a0-a1ae-debff8116033": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043136666s May 19 13:41:49.118: INFO: Pod "pod-configmaps-bf89e1b7-2b86-47a0-a1ae-debff8116033": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046904411s May 19 13:41:51.121: INFO: Pod "pod-configmaps-bf89e1b7-2b86-47a0-a1ae-debff8116033": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050147901s STEP: Saw pod success May 19 13:41:51.121: INFO: Pod "pod-configmaps-bf89e1b7-2b86-47a0-a1ae-debff8116033" satisfied condition "success or failure" May 19 13:41:51.123: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-bf89e1b7-2b86-47a0-a1ae-debff8116033 container configmap-volume-test: STEP: delete the pod May 19 13:41:51.145: INFO: Waiting for pod pod-configmaps-bf89e1b7-2b86-47a0-a1ae-debff8116033 to disappear May 19 13:41:51.149: INFO: Pod pod-configmaps-bf89e1b7-2b86-47a0-a1ae-debff8116033 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:41:51.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6606" for this suite. 
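Each 'Waiting up to 5m0s for pod ... to be "success or failure"' sequence above is a phase poll: the pod satisfies the condition once its phase reaches a terminal value, Succeeded or Failed. A rough sketch of that condition check (illustrative names; the suite's real implementation is Go and polls the API server), emulating the 2s interval and 5m deadline seen in the log:

```python
def wait_for_success_or_failure(phases, poll_interval=2.0, timeout=300.0):
    """Walk a sequence of observed pod phases and report the terminal
    phase plus the elapsed time at which the condition was satisfied.

    'phases' is a hypothetical stand-in for successive reads of
    pod.status.phase ("Pending", "Running", "Succeeded", "Failed").
    """
    elapsed = 0.0
    for phase in phases:
        if phase in ("Succeeded", "Failed"):   # terminal: condition met
            return phase, elapsed
        if elapsed >= timeout:
            break
        elapsed += poll_interval               # non-terminal: poll again
    raise TimeoutError("pod never reached a terminal phase")
```

The configmap run above (Pending at ~0s, 2s, 4s, then Succeeded at ~6s) follows exactly this shape.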
May 19 13:41:57.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:41:57.264: INFO: namespace configmap-6606 deletion completed in 6.111606194s • [SLOW TEST:12.262 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:41:57.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-5f50cc9d-0798-41ca-bf61-91fc9366800b STEP: Creating a pod to test consume configMaps May 19 13:41:57.332: INFO: Waiting up to 5m0s for pod "pod-configmaps-4edbe272-23bf-4ac1-88f3-38ce031a73fa" in namespace "configmap-9902" to be "success or failure" May 19 13:41:57.390: INFO: Pod "pod-configmaps-4edbe272-23bf-4ac1-88f3-38ce031a73fa": Phase="Pending", Reason="", readiness=false. Elapsed: 57.662216ms May 19 13:41:59.395: INFO: Pod "pod-configmaps-4edbe272-23bf-4ac1-88f3-38ce031a73fa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.062340867s May 19 13:42:01.399: INFO: Pod "pod-configmaps-4edbe272-23bf-4ac1-88f3-38ce031a73fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066398178s STEP: Saw pod success May 19 13:42:01.399: INFO: Pod "pod-configmaps-4edbe272-23bf-4ac1-88f3-38ce031a73fa" satisfied condition "success or failure" May 19 13:42:01.402: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-4edbe272-23bf-4ac1-88f3-38ce031a73fa container configmap-volume-test: STEP: delete the pod May 19 13:42:01.597: INFO: Waiting for pod pod-configmaps-4edbe272-23bf-4ac1-88f3-38ce031a73fa to disappear May 19 13:42:01.702: INFO: Pod pod-configmaps-4edbe272-23bf-4ac1-88f3-38ce031a73fa no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:42:01.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9902" for this suite. May 19 13:42:07.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:42:07.794: INFO: namespace configmap-9902 deletion completed in 6.08879707s • [SLOW TEST:10.531 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:42:07.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 19 13:42:11.916: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:42:11.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9578" for this suite. 
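The termination-message check above ('Expected: &{DONE} to match Container's Termination Message: DONE') exercises TerminationMessagePolicy: FallbackToLogsOnError, under which a failed container that wrote nothing to its terminationMessagePath reports the tail of its log instead. A sketch of that selection logic, under the assumption that the fallback applies only to failed containers with an empty message file (function and parameter names are illustrative, not the kubelet's):

```python
def termination_message(policy, exit_code, message_file_contents, log_tail):
    """Pick the message reported for a terminated container.

    policy: "File" or "FallbackToLogsOnError" (the two real policy values).
    The contents of terminationMessagePath always win when present; the
    log tail is used only on failure under FallbackToLogsOnError.
    """
    if message_file_contents:
        return message_file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return log_tail
    return ""
```

In the test above the container is driven to Failed with "DONE" in its log and an empty message file, so the reported message falls back to the log output.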
May 19 13:42:17.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:42:18.076: INFO: namespace container-runtime-9578 deletion completed in 6.099877238s • [SLOW TEST:10.282 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:42:18.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-4p8j STEP: Creating a pod to test atomic-volume-subpath May 19 13:42:18.210: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-4p8j" in 
namespace "subpath-4003" to be "success or failure" May 19 13:42:18.234: INFO: Pod "pod-subpath-test-projected-4p8j": Phase="Pending", Reason="", readiness=false. Elapsed: 23.623744ms May 19 13:42:20.238: INFO: Pod "pod-subpath-test-projected-4p8j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027511128s May 19 13:42:22.242: INFO: Pod "pod-subpath-test-projected-4p8j": Phase="Running", Reason="", readiness=true. Elapsed: 4.032095577s May 19 13:42:24.247: INFO: Pod "pod-subpath-test-projected-4p8j": Phase="Running", Reason="", readiness=true. Elapsed: 6.036296423s May 19 13:42:26.251: INFO: Pod "pod-subpath-test-projected-4p8j": Phase="Running", Reason="", readiness=true. Elapsed: 8.040859949s May 19 13:42:28.255: INFO: Pod "pod-subpath-test-projected-4p8j": Phase="Running", Reason="", readiness=true. Elapsed: 10.045076281s May 19 13:42:30.260: INFO: Pod "pod-subpath-test-projected-4p8j": Phase="Running", Reason="", readiness=true. Elapsed: 12.049722012s May 19 13:42:32.265: INFO: Pod "pod-subpath-test-projected-4p8j": Phase="Running", Reason="", readiness=true. Elapsed: 14.054728818s May 19 13:42:34.269: INFO: Pod "pod-subpath-test-projected-4p8j": Phase="Running", Reason="", readiness=true. Elapsed: 16.058564166s May 19 13:42:36.274: INFO: Pod "pod-subpath-test-projected-4p8j": Phase="Running", Reason="", readiness=true. Elapsed: 18.063283518s May 19 13:42:38.277: INFO: Pod "pod-subpath-test-projected-4p8j": Phase="Running", Reason="", readiness=true. Elapsed: 20.066823611s May 19 13:42:40.280: INFO: Pod "pod-subpath-test-projected-4p8j": Phase="Running", Reason="", readiness=true. Elapsed: 22.070126637s May 19 13:42:42.285: INFO: Pod "pod-subpath-test-projected-4p8j": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.074208946s STEP: Saw pod success May 19 13:42:42.285: INFO: Pod "pod-subpath-test-projected-4p8j" satisfied condition "success or failure" May 19 13:42:42.288: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-4p8j container test-container-subpath-projected-4p8j: STEP: delete the pod May 19 13:42:42.319: INFO: Waiting for pod pod-subpath-test-projected-4p8j to disappear May 19 13:42:42.323: INFO: Pod pod-subpath-test-projected-4p8j no longer exists STEP: Deleting pod pod-subpath-test-projected-4p8j May 19 13:42:42.323: INFO: Deleting pod "pod-subpath-test-projected-4p8j" in namespace "subpath-4003" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:42:42.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4003" for this suite. May 19 13:42:48.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:42:48.430: INFO: namespace subpath-4003 deletion completed in 6.102967665s • [SLOW TEST:30.353 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client 
May 19 13:42:48.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 19 13:42:48.506: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab543a0c-1642-4fa4-8c45-6eebc8eb5e28" in namespace "downward-api-6897" to be "success or failure" May 19 13:42:48.547: INFO: Pod "downwardapi-volume-ab543a0c-1642-4fa4-8c45-6eebc8eb5e28": Phase="Pending", Reason="", readiness=false. Elapsed: 40.51896ms May 19 13:42:50.551: INFO: Pod "downwardapi-volume-ab543a0c-1642-4fa4-8c45-6eebc8eb5e28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044736603s May 19 13:42:52.555: INFO: Pod "downwardapi-volume-ab543a0c-1642-4fa4-8c45-6eebc8eb5e28": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049104862s STEP: Saw pod success May 19 13:42:52.555: INFO: Pod "downwardapi-volume-ab543a0c-1642-4fa4-8c45-6eebc8eb5e28" satisfied condition "success or failure" May 19 13:42:52.558: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ab543a0c-1642-4fa4-8c45-6eebc8eb5e28 container client-container: STEP: delete the pod May 19 13:42:52.635: INFO: Waiting for pod downwardapi-volume-ab543a0c-1642-4fa4-8c45-6eebc8eb5e28 to disappear May 19 13:42:52.726: INFO: Pod downwardapi-volume-ab543a0c-1642-4fa4-8c45-6eebc8eb5e28 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:42:52.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6897" for this suite. May 19 13:42:58.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:42:58.819: INFO: namespace downward-api-6897 deletion completed in 6.088539453s • [SLOW TEST:10.389 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:42:58.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:43:02.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2408" for this suite. May 19 13:43:42.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:43:43.052: INFO: namespace kubelet-test-2408 deletion completed in 40.102406781s • [SLOW TEST:44.233 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:43:43.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-dmnzl in namespace proxy-2456 I0519 13:43:43.195009 6 runners.go:180] Created replication controller with name: proxy-service-dmnzl, namespace: proxy-2456, replica count: 1 I0519 13:43:44.245469 6 runners.go:180] proxy-service-dmnzl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 13:43:45.245694 6 runners.go:180] proxy-service-dmnzl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 13:43:46.245926 6 runners.go:180] proxy-service-dmnzl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 13:43:47.246122 6 runners.go:180] proxy-service-dmnzl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 13:43:47.249: INFO: setup took 4.123644092s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 19 13:43:47.256: INFO: (0) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 6.35822ms) May 19 13:43:47.257: INFO: (0) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:1080/proxy/: test<... 
(200; 6.938871ms) May 19 13:43:47.257: INFO: (0) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 6.958506ms) May 19 13:43:47.257: INFO: (0) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 7.161988ms) May 19 13:43:47.257: INFO: (0) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 7.187378ms) May 19 13:43:47.257: INFO: (0) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 7.556489ms) May 19 13:43:47.257: INFO: (0) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:1080/proxy/: ... (200; 7.782421ms) May 19 13:43:47.258: INFO: (0) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 8.937501ms) May 19 13:43:47.259: INFO: (0) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname2/proxy/: bar (200; 8.944147ms) May 19 13:43:47.259: INFO: (0) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 8.919303ms) May 19 13:43:47.259: INFO: (0) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname1/proxy/: foo (200; 9.138472ms) May 19 13:43:47.270: INFO: (0) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 19.941293ms) May 19 13:43:47.270: INFO: (0) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: ... 
(200; 2.920711ms) May 19 13:43:47.273: INFO: (1) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 2.958372ms) May 19 13:43:47.273: INFO: (1) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:462/proxy/: tls qux (200; 3.350977ms) May 19 13:43:47.274: INFO: (1) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 3.779255ms) May 19 13:43:47.274: INFO: (1) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 3.798733ms) May 19 13:43:47.274: INFO: (1) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 3.804587ms) May 19 13:43:47.274: INFO: (1) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:460/proxy/: tls baz (200; 4.038617ms) May 19 13:43:47.274: INFO: (1) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:1080/proxy/: test<... (200; 4.449264ms) May 19 13:43:47.274: INFO: (1) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 4.359369ms) May 19 13:43:47.275: INFO: (1) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: test<... (200; 6.059298ms) May 19 13:43:47.282: INFO: (2) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname1/proxy/: foo (200; 6.413917ms) May 19 13:43:47.282: INFO: (2) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 6.312763ms) May 19 13:43:47.283: INFO: (2) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:1080/proxy/: ... 
(200; 6.708144ms) May 19 13:43:47.283: INFO: (2) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 7.066997ms) May 19 13:43:47.283: INFO: (2) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 7.119474ms) May 19 13:43:47.283: INFO: (2) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 7.096238ms) May 19 13:43:47.283: INFO: (2) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 7.124771ms) May 19 13:43:47.283: INFO: (2) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 7.192507ms) May 19 13:43:47.284: INFO: (2) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname2/proxy/: bar (200; 7.88688ms) May 19 13:43:47.287: INFO: (3) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 3.177153ms) May 19 13:43:47.289: INFO: (3) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: test<... (200; 5.316001ms) May 19 13:43:47.289: INFO: (3) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:460/proxy/: tls baz (200; 5.396295ms) May 19 13:43:47.289: INFO: (3) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:1080/proxy/: ... 
(200; 5.38868ms) May 19 13:43:47.289: INFO: (3) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 5.510714ms) May 19 13:43:47.289: INFO: (3) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 5.408007ms) May 19 13:43:47.289: INFO: (3) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 5.386731ms) May 19 13:43:47.289: INFO: (3) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 5.343541ms) May 19 13:43:47.290: INFO: (3) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 5.467927ms) May 19 13:43:47.290: INFO: (3) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:462/proxy/: tls qux (200; 5.698543ms) May 19 13:43:47.290: INFO: (3) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 5.856071ms) May 19 13:43:47.291: INFO: (3) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 6.983581ms) May 19 13:43:47.291: INFO: (3) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname2/proxy/: bar (200; 7.064436ms) May 19 13:43:47.296: INFO: (4) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 4.431265ms) May 19 13:43:47.296: INFO: (4) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 4.495783ms) May 19 13:43:47.296: INFO: (4) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:462/proxy/: tls qux (200; 4.617672ms) May 19 13:43:47.296: INFO: (4) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:1080/proxy/: test<... (200; 4.513644ms) May 19 13:43:47.296: INFO: (4) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 5.035204ms) May 19 13:43:47.297: INFO: (4) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: ... 
(200; 5.518046ms) May 19 13:43:47.297: INFO: (4) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname1/proxy/: foo (200; 5.621248ms) May 19 13:43:47.297: INFO: (4) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 5.667947ms) May 19 13:43:47.297: INFO: (4) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 6.197178ms) May 19 13:43:47.297: INFO: (4) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 6.162354ms) May 19 13:43:47.297: INFO: (4) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 6.138466ms) May 19 13:43:47.297: INFO: (4) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 6.305747ms) May 19 13:43:47.298: INFO: (4) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname2/proxy/: bar (200; 6.335804ms) May 19 13:43:47.301: INFO: (5) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: ... (200; 4.031077ms) May 19 13:43:47.302: INFO: (5) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 4.131736ms) May 19 13:43:47.302: INFO: (5) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:1080/proxy/: test<... 
(200; 4.055989ms) May 19 13:43:47.302: INFO: (5) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 4.044056ms) May 19 13:43:47.302: INFO: (5) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 4.149088ms) May 19 13:43:47.302: INFO: (5) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 4.32954ms) May 19 13:43:47.302: INFO: (5) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname2/proxy/: bar (200; 4.530166ms) May 19 13:43:47.302: INFO: (5) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname1/proxy/: foo (200; 4.660284ms) May 19 13:43:47.302: INFO: (5) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 4.661002ms) May 19 13:43:47.302: INFO: (5) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 4.700814ms) May 19 13:43:47.307: INFO: (6) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 4.169643ms) May 19 13:43:47.307: INFO: (6) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname2/proxy/: bar (200; 4.312751ms) May 19 13:43:47.307: INFO: (6) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 4.360113ms) May 19 13:43:47.307: INFO: (6) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 4.537154ms) May 19 13:43:47.307: INFO: (6) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:462/proxy/: tls qux (200; 4.855565ms) May 19 13:43:47.307: INFO: (6) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 4.826387ms) May 19 13:43:47.307: INFO: (6) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname1/proxy/: foo (200; 4.807105ms) May 19 13:43:47.308: INFO: (6) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:162/proxy/: bar 
(200; 5.175657ms) May 19 13:43:47.308: INFO: (6) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:1080/proxy/: test<... (200; 5.218566ms) May 19 13:43:47.308: INFO: (6) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:1080/proxy/: ... (200; 5.149244ms) May 19 13:43:47.308: INFO: (6) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 5.181356ms) May 19 13:43:47.308: INFO: (6) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: test (200; 5.377151ms) May 19 13:43:47.311: INFO: (7) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 3.509701ms) May 19 13:43:47.312: INFO: (7) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:1080/proxy/: test<... (200; 3.995419ms) May 19 13:43:47.313: INFO: (7) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 4.953909ms) May 19 13:43:47.313: INFO: (7) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 4.941014ms) May 19 13:43:47.313: INFO: (7) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:462/proxy/: tls qux (200; 5.025527ms) May 19 13:43:47.313: INFO: (7) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname1/proxy/: foo (200; 5.055454ms) May 19 13:43:47.313: INFO: (7) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname2/proxy/: bar (200; 5.05286ms) May 19 13:43:47.313: INFO: (7) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 5.032177ms) May 19 13:43:47.313: INFO: (7) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: ... 
(200; 5.305741ms) May 19 13:43:47.313: INFO: (7) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:460/proxy/: tls baz (200; 5.214498ms) May 19 13:43:47.313: INFO: (7) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 5.337278ms) May 19 13:43:47.313: INFO: (7) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 5.345652ms) May 19 13:43:47.313: INFO: (7) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 5.364759ms) May 19 13:43:47.316: INFO: (8) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:1080/proxy/: test<... (200; 2.326187ms) May 19 13:43:47.316: INFO: (8) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:1080/proxy/: ... (200; 2.354585ms) May 19 13:43:47.316: INFO: (8) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:460/proxy/: tls baz (200; 2.652538ms) May 19 13:43:47.317: INFO: (8) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:462/proxy/: tls qux (200; 4.09519ms) May 19 13:43:47.318: INFO: (8) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 4.142343ms) May 19 13:43:47.318: INFO: (8) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 4.141968ms) May 19 13:43:47.318: INFO: (8) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 4.194743ms) May 19 13:43:47.318: INFO: (8) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 4.194936ms) May 19 13:43:47.318: INFO: (8) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 4.263801ms) May 19 13:43:47.318: INFO: (8) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname1/proxy/: foo (200; 4.350789ms) May 19 13:43:47.318: INFO: (8) 
/api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 4.471497ms) May 19 13:43:47.318: INFO: (8) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: test (200; 3.40181ms) May 19 13:43:47.322: INFO: (9) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 3.896766ms) May 19 13:43:47.323: INFO: (9) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:1080/proxy/: ... (200; 4.468161ms) May 19 13:43:47.323: INFO: (9) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: test<... (200; 4.898954ms) May 19 13:43:47.323: INFO: (9) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:460/proxy/: tls baz (200; 5.01877ms) May 19 13:43:47.323: INFO: (9) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 4.980623ms) May 19 13:43:47.323: INFO: (9) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 4.942585ms) May 19 13:43:47.323: INFO: (9) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 5.002941ms) May 19 13:43:47.323: INFO: (9) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname2/proxy/: bar (200; 4.936621ms) May 19 13:43:47.323: INFO: (9) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 4.961296ms) May 19 13:43:47.323: INFO: (9) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 4.920228ms) May 19 13:43:47.323: INFO: (9) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 5.041886ms) May 19 13:43:47.323: INFO: (9) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname1/proxy/: foo (200; 5.009514ms) May 19 13:43:47.326: INFO: (10) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:460/proxy/: tls baz (200; 2.339606ms) May 19 
13:43:47.326: INFO: (10) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:462/proxy/: tls qux (200; 2.548523ms) May 19 13:43:47.329: INFO: (10) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 5.910608ms) May 19 13:43:47.330: INFO: (10) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname1/proxy/: foo (200; 6.357013ms) May 19 13:43:47.330: INFO: (10) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 6.573908ms) May 19 13:43:47.330: INFO: (10) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: ... (200; 7.081205ms) May 19 13:43:47.331: INFO: (10) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 7.162605ms) May 19 13:43:47.331: INFO: (10) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 7.171268ms) May 19 13:43:47.331: INFO: (10) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 7.148719ms) May 19 13:43:47.331: INFO: (10) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname2/proxy/: bar (200; 7.245524ms) May 19 13:43:47.331: INFO: (10) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 7.277164ms) May 19 13:43:47.331: INFO: (10) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 7.303557ms) May 19 13:43:47.331: INFO: (10) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 7.273412ms) May 19 13:43:47.331: INFO: (10) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:1080/proxy/: test<... 
(200; 7.353325ms) May 19 13:43:47.331: INFO: (10) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 7.396047ms) May 19 13:43:47.335: INFO: (11) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 3.842114ms) May 19 13:43:47.335: INFO: (11) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:1080/proxy/: test<... (200; 3.690293ms) May 19 13:43:47.335: INFO: (11) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:460/proxy/: tls baz (200; 4.054748ms) May 19 13:43:47.335: INFO: (11) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 3.878357ms) May 19 13:43:47.335: INFO: (11) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:1080/proxy/: ... (200; 3.876646ms) May 19 13:43:47.335: INFO: (11) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 3.931339ms) May 19 13:43:47.335: INFO: (11) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 4.080169ms) May 19 13:43:47.335: INFO: (11) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 3.90566ms) May 19 13:43:47.335: INFO: (11) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: test<... (200; 5.195396ms) May 19 13:43:47.341: INFO: (12) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 5.147221ms) May 19 13:43:47.342: INFO: (12) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 5.606178ms) May 19 13:43:47.342: INFO: (12) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 5.669193ms) May 19 13:43:47.342: INFO: (12) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 5.583812ms) May 19 13:43:47.342: INFO: (12) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:1080/proxy/: ... 
(200; 5.648257ms) May 19 13:43:47.342: INFO: (12) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:460/proxy/: tls baz (200; 6.100229ms) May 19 13:43:47.342: INFO: (12) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 6.151213ms) May 19 13:43:47.342: INFO: (12) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 6.124617ms) May 19 13:43:47.342: INFO: (12) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:462/proxy/: tls qux (200; 6.231236ms) May 19 13:43:47.342: INFO: (12) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 6.142859ms) May 19 13:43:47.347: INFO: (13) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname2/proxy/: bar (200; 3.991837ms) May 19 13:43:47.347: INFO: (13) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 3.978886ms) May 19 13:43:47.347: INFO: (13) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 3.903922ms) May 19 13:43:47.347: INFO: (13) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: test<... (200; 4.96938ms) May 19 13:43:47.348: INFO: (13) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 5.000902ms) May 19 13:43:47.348: INFO: (13) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 5.00704ms) May 19 13:43:47.348: INFO: (13) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:1080/proxy/: ... 
(200; 5.03492ms) May 19 13:43:47.348: INFO: (13) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname1/proxy/: foo (200; 5.105728ms) May 19 13:43:47.348: INFO: (13) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 5.598349ms) May 19 13:43:47.351: INFO: (14) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 3.055544ms) May 19 13:43:47.352: INFO: (14) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:462/proxy/: tls qux (200; 4.225875ms) May 19 13:43:47.353: INFO: (14) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: test (200; 4.241691ms) May 19 13:43:47.353: INFO: (14) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:1080/proxy/: test<... (200; 4.191238ms) May 19 13:43:47.353: INFO: (14) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:460/proxy/: tls baz (200; 4.23976ms) May 19 13:43:47.353: INFO: (14) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 4.25538ms) May 19 13:43:47.353: INFO: (14) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 5.009039ms) May 19 13:43:47.353: INFO: (14) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 5.022565ms) May 19 13:43:47.353: INFO: (14) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname1/proxy/: foo (200; 5.054108ms) May 19 13:43:47.353: INFO: (14) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:1080/proxy/: ... 
(200; 5.184792ms) May 19 13:43:47.353: INFO: (14) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 5.168609ms) May 19 13:43:47.354: INFO: (14) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 5.208873ms) May 19 13:43:47.354: INFO: (14) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname2/proxy/: bar (200; 5.179318ms) May 19 13:43:47.357: INFO: (15) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:1080/proxy/: test<... (200; 3.692207ms) May 19 13:43:47.357: INFO: (15) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 3.717364ms) May 19 13:43:47.357: INFO: (15) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 3.788423ms) May 19 13:43:47.357: INFO: (15) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 3.794182ms) May 19 13:43:47.357: INFO: (15) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: ... 
(200; 3.836552ms) May 19 13:43:47.357: INFO: (15) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 3.851093ms) May 19 13:43:47.359: INFO: (15) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname1/proxy/: foo (200; 4.936997ms) May 19 13:43:47.359: INFO: (15) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 4.920534ms) May 19 13:43:47.359: INFO: (15) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 5.105269ms) May 19 13:43:47.359: INFO: (15) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 5.128903ms) May 19 13:43:47.359: INFO: (15) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname2/proxy/: bar (200; 5.151369ms) May 19 13:43:47.359: INFO: (15) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 5.105114ms) May 19 13:43:47.361: INFO: (16) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: test (200; 4.213789ms) May 19 13:43:47.363: INFO: (16) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 4.122396ms) May 19 13:43:47.363: INFO: (16) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:462/proxy/: tls qux (200; 4.384828ms) May 19 13:43:47.363: INFO: (16) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 4.401174ms) May 19 13:43:47.363: INFO: (16) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:1080/proxy/: test<... (200; 4.372886ms) May 19 13:43:47.363: INFO: (16) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:1080/proxy/: ... 
(200; 4.43251ms) May 19 13:43:47.364: INFO: (16) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 5.380251ms) May 19 13:43:47.365: INFO: (16) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname2/proxy/: bar (200; 5.592388ms) May 19 13:43:47.365: INFO: (16) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname1/proxy/: foo (200; 6.058261ms) May 19 13:43:47.365: INFO: (16) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 6.066667ms) May 19 13:43:47.365: INFO: (16) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 6.071874ms) May 19 13:43:47.366: INFO: (16) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 7.448103ms) May 19 13:43:47.369: INFO: (17) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:462/proxy/: tls qux (200; 2.52097ms) May 19 13:43:47.369: INFO: (17) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 2.63393ms) May 19 13:43:47.370: INFO: (17) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 2.970508ms) May 19 13:43:47.370: INFO: (17) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: ... (200; 4.8552ms) May 19 13:43:47.371: INFO: (17) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 4.551176ms) May 19 13:43:47.371: INFO: (17) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 4.402979ms) May 19 13:43:47.371: INFO: (17) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:1080/proxy/: test<... 
(200; 4.637883ms) May 19 13:43:47.372: INFO: (17) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:460/proxy/: tls baz (200; 5.023249ms) May 19 13:43:47.372: INFO: (17) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 4.752549ms) May 19 13:43:47.372: INFO: (17) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 4.90949ms) May 19 13:43:47.372: INFO: (17) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname1/proxy/: foo (200; 5.133794ms) May 19 13:43:47.372: INFO: (17) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 5.303863ms) May 19 13:43:47.372: INFO: (17) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 5.140137ms) May 19 13:43:47.376: INFO: (18) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:460/proxy/: tls baz (200; 3.51558ms) May 19 13:43:47.377: INFO: (18) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 5.269795ms) May 19 13:43:47.378: INFO: (18) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 5.557335ms) May 19 13:43:47.378: INFO: (18) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: ... 
(200; 6.032958ms) May 19 13:43:47.378: INFO: (18) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 5.936441ms) May 19 13:43:47.378: INFO: (18) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname1/proxy/: tls baz (200; 6.179986ms) May 19 13:43:47.378: INFO: (18) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 6.184699ms) May 19 13:43:47.379: INFO: (18) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 6.522582ms) May 19 13:43:47.379: INFO: (18) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:1080/proxy/: test<... (200; 6.631512ms) May 19 13:43:47.379: INFO: (18) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname1/proxy/: foo (200; 6.55198ms) May 19 13:43:47.379: INFO: (18) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname1/proxy/: foo (200; 6.638692ms) May 19 13:43:47.379: INFO: (18) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:462/proxy/: tls qux (200; 6.645036ms) May 19 13:43:47.379: INFO: (18) /api/v1/namespaces/proxy-2456/services/https:proxy-service-dmnzl:tlsportname2/proxy/: tls qux (200; 6.570541ms) May 19 13:43:47.379: INFO: (18) /api/v1/namespaces/proxy-2456/services/proxy-service-dmnzl:portname2/proxy/: bar (200; 6.736175ms) May 19 13:43:47.379: INFO: (18) /api/v1/namespaces/proxy-2456/services/http:proxy-service-dmnzl:portname2/proxy/: bar (200; 6.683309ms) May 19 13:43:47.382: INFO: (19) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 3.269432ms) May 19 13:43:47.382: INFO: (19) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:462/proxy/: tls qux (200; 3.212882ms) May 19 13:43:47.382: INFO: (19) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 3.21719ms) May 19 13:43:47.382: INFO: (19) 
/api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:460/proxy/: tls baz (200; 3.339185ms) May 19 13:43:47.382: INFO: (19) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:162/proxy/: bar (200; 3.393096ms) May 19 13:43:47.382: INFO: (19) /api/v1/namespaces/proxy-2456/pods/http:proxy-service-dmnzl-6qvct:1080/proxy/: ... (200; 3.376951ms) May 19 13:43:47.382: INFO: (19) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct/proxy/: test (200; 3.377493ms) May 19 13:43:47.382: INFO: (19) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:1080/proxy/: test<... (200; 3.605775ms) May 19 13:43:47.383: INFO: (19) /api/v1/namespaces/proxy-2456/pods/proxy-service-dmnzl-6qvct:160/proxy/: foo (200; 3.6361ms) May 19 13:43:47.383: INFO: (19) /api/v1/namespaces/proxy-2456/pods/https:proxy-service-dmnzl-6qvct:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 19 13:44:01.159: INFO: Successfully updated pod "annotationupdate52097b7c-3dde-4d7d-9d3e-cf40607c705f" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:44:05.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9349" for this suite. 
May 19 13:44:27.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:44:27.273: INFO: namespace projected-9349 deletion completed in 22.085880479s • [SLOW TEST:30.736 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:44:27.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 19 13:44:27.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4588' May 19 13:44:27.602: INFO: stderr: "" May 19 13:44:27.602: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 19 13:44:27.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4588' May 19 13:44:27.750: INFO: stderr: "" May 19 13:44:27.750: INFO: stdout: "update-demo-nautilus-mmp4t update-demo-nautilus-ssfw2 " May 19 13:44:27.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mmp4t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4588' May 19 13:44:27.840: INFO: stderr: "" May 19 13:44:27.840: INFO: stdout: "" May 19 13:44:27.840: INFO: update-demo-nautilus-mmp4t is created but not running May 19 13:44:32.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4588' May 19 13:44:32.939: INFO: stderr: "" May 19 13:44:32.939: INFO: stdout: "update-demo-nautilus-mmp4t update-demo-nautilus-ssfw2 " May 19 13:44:32.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mmp4t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4588' May 19 13:44:33.036: INFO: stderr: "" May 19 13:44:33.036: INFO: stdout: "true" May 19 13:44:33.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mmp4t -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4588' May 19 13:44:33.146: INFO: stderr: "" May 19 13:44:33.146: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 13:44:33.146: INFO: validating pod update-demo-nautilus-mmp4t May 19 13:44:33.181: INFO: got data: { "image": "nautilus.jpg" } May 19 13:44:33.181: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 13:44:33.181: INFO: update-demo-nautilus-mmp4t is verified up and running May 19 13:44:33.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ssfw2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4588' May 19 13:44:33.281: INFO: stderr: "" May 19 13:44:33.281: INFO: stdout: "true" May 19 13:44:33.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ssfw2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4588' May 19 13:44:33.377: INFO: stderr: "" May 19 13:44:33.377: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 13:44:33.377: INFO: validating pod update-demo-nautilus-ssfw2 May 19 13:44:33.381: INFO: got data: { "image": "nautilus.jpg" } May 19 13:44:33.381: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 19 13:44:33.381: INFO: update-demo-nautilus-ssfw2 is verified up and running STEP: using delete to clean up resources May 19 13:44:33.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4588' May 19 13:44:33.491: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 13:44:33.491: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 19 13:44:33.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4588' May 19 13:44:33.596: INFO: stderr: "No resources found.\n" May 19 13:44:33.596: INFO: stdout: "" May 19 13:44:33.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4588 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 19 13:44:33.722: INFO: stderr: "" May 19 13:44:33.722: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:44:33.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4588" for this suite. 
May 19 13:44:49.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:44:49.824: INFO: namespace kubectl-4588 deletion completed in 16.091830543s • [SLOW TEST:22.551 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:44:49.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 19 13:44:49.891: INFO: Creating ReplicaSet my-hostname-basic-d786c559-20d7-46dd-a88c-56e5fd13b63a May 19 13:44:49.924: INFO: Pod name my-hostname-basic-d786c559-20d7-46dd-a88c-56e5fd13b63a: Found 0 pods out of 1 May 19 13:44:54.928: INFO: Pod name my-hostname-basic-d786c559-20d7-46dd-a88c-56e5fd13b63a: Found 1 pods out of 1 May 19 13:44:54.928: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d786c559-20d7-46dd-a88c-56e5fd13b63a" is running May 19 13:44:54.930: INFO: Pod "my-hostname-basic-d786c559-20d7-46dd-a88c-56e5fd13b63a-zrzwj" is 
running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 13:44:49 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 13:44:52 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 13:44:52 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 13:44:49 +0000 UTC Reason: Message:}]) May 19 13:44:54.930: INFO: Trying to dial the pod May 19 13:44:59.952: INFO: Controller my-hostname-basic-d786c559-20d7-46dd-a88c-56e5fd13b63a: Got expected result from replica 1 [my-hostname-basic-d786c559-20d7-46dd-a88c-56e5fd13b63a-zrzwj]: "my-hostname-basic-d786c559-20d7-46dd-a88c-56e5fd13b63a-zrzwj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:44:59.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6517" for this suite. 
May 19 13:45:05.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:45:06.072: INFO: namespace replicaset-6517 deletion completed in 6.116617014s
• [SLOW TEST:16.248 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:45:06.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 19 13:45:06.150: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2bd0ddb5-0536-4110-b17a-f075c17e8408" in namespace "projected-9607" to be "success or failure"
May 19 13:45:06.154: INFO: Pod "downwardapi-volume-2bd0ddb5-0536-4110-b17a-f075c17e8408": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237545ms
May 19 13:45:08.215: INFO: Pod "downwardapi-volume-2bd0ddb5-0536-4110-b17a-f075c17e8408": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06478527s
May 19 13:45:10.219: INFO: Pod "downwardapi-volume-2bd0ddb5-0536-4110-b17a-f075c17e8408": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06933207s
STEP: Saw pod success
May 19 13:45:10.219: INFO: Pod "downwardapi-volume-2bd0ddb5-0536-4110-b17a-f075c17e8408" satisfied condition "success or failure"
May 19 13:45:10.223: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2bd0ddb5-0536-4110-b17a-f075c17e8408 container client-container:
STEP: delete the pod
May 19 13:45:10.245: INFO: Waiting for pod downwardapi-volume-2bd0ddb5-0536-4110-b17a-f075c17e8408 to disappear
May 19 13:45:10.483: INFO: Pod downwardapi-volume-2bd0ddb5-0536-4110-b17a-f075c17e8408 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:45:10.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9607" for this suite.
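The "success or failure" wait above is a simple phase poll: the framework re-reads the pod every couple of seconds until its phase is terminal or 5m0s elapse. A minimal sketch of that loop, assuming a hypothetical `get_phase` callable in place of a real API client call:

```python
import time

def wait_for_pod_success(get_phase, timeout=300.0, interval=2.0, sleep=time.sleep):
    """Poll a pod's phase until it reaches Succeeded or Failed, or time out.

    get_phase is a stand-in for a real client call such as reading
    pod.status.phase from the API server; sleep is injectable for testing.
    """
    waited = 0.0
    while waited <= timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase  # terminal phase: the "success or failure" condition
        sleep(interval)
        waited += interval
    raise TimeoutError("pod never reached a terminal phase")

# Example with a scripted phase sequence (no cluster required):
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_pod_success(lambda: next(phases), sleep=lambda _: None)
```

The scripted sequence mirrors the log above: two Pending observations, one Running, then Succeeded.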
May 19 13:45:16.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:45:16.609: INFO: namespace projected-9607 deletion completed in 6.121513931s
• [SLOW TEST:10.537 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:45:16.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-58148214-b8d2-410c-aa6d-676d3628b171
STEP: Creating a pod to test consume secrets
May 19 13:45:16.672: INFO: Waiting up to 5m0s for pod "pod-secrets-6fcd888c-e525-46e4-a04e-9b711fcbd83d" in namespace "secrets-4072" to be "success or failure"
May 19 13:45:16.676: INFO: Pod "pod-secrets-6fcd888c-e525-46e4-a04e-9b711fcbd83d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.887072ms
May 19 13:45:18.680: INFO: Pod "pod-secrets-6fcd888c-e525-46e4-a04e-9b711fcbd83d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008582768s
May 19 13:45:20.685: INFO: Pod "pod-secrets-6fcd888c-e525-46e4-a04e-9b711fcbd83d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013450063s
STEP: Saw pod success
May 19 13:45:20.685: INFO: Pod "pod-secrets-6fcd888c-e525-46e4-a04e-9b711fcbd83d" satisfied condition "success or failure"
May 19 13:45:20.688: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-6fcd888c-e525-46e4-a04e-9b711fcbd83d container secret-volume-test:
STEP: delete the pod
May 19 13:45:20.722: INFO: Waiting for pod pod-secrets-6fcd888c-e525-46e4-a04e-9b711fcbd83d to disappear
May 19 13:45:20.746: INFO: Pod pod-secrets-6fcd888c-e525-46e4-a04e-9b711fcbd83d no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:45:20.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4072" for this suite.
May 19 13:45:26.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:45:26.836: INFO: namespace secrets-4072 deletion completed in 6.086552012s
• [SLOW TEST:10.227 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:45:26.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-c0150b15-0dcf-4127-bc6f-c4db8c8edbd5
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-c0150b15-0dcf-4127-bc6f-c4db8c8edbd5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:45:35.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3713" for this suite.
May 19 13:45:57.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:45:57.103: INFO: namespace configmap-3713 deletion completed in 22.081453567s
• [SLOW TEST:30.266 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:45:57.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 19 13:46:01.719: INFO: Successfully updated pod "labelsupdatee1e46bba-d330-4279-b2a8-f11a3e33224e"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:46:03.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6972" for this suite.
May 19 13:46:29.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:46:29.851: INFO: namespace downward-api-6972 deletion completed in 26.112913145s
• [SLOW TEST:32.747 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:46:29.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 19 13:46:29.909: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:46:39.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8791" for this suite.
May 19 13:47:01.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:47:01.344: INFO: namespace init-container-8791 deletion completed in 22.092705157s
• [SLOW TEST:31.493 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:47:01.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:47:06.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3590" for this suite.
May 19 13:47:28.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:47:28.544: INFO: namespace replication-controller-3590 deletion completed in 22.088916758s
• [SLOW TEST:27.201 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:47:28.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
May 19 13:47:28.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1475'
May 19 13:47:28.993: INFO: stderr: ""
May 19 13:47:28.993: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 19 13:47:28.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1475'
May 19 13:47:29.119: INFO: stderr: ""
May 19 13:47:29.119: INFO: stdout: "update-demo-nautilus-52w2s update-demo-nautilus-68kz4 "
May 19 13:47:29.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-52w2s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1475'
May 19 13:47:29.218: INFO: stderr: ""
May 19 13:47:29.218: INFO: stdout: ""
May 19 13:47:29.218: INFO: update-demo-nautilus-52w2s is created but not running
May 19 13:47:34.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1475'
May 19 13:47:34.327: INFO: stderr: ""
May 19 13:47:34.327: INFO: stdout: "update-demo-nautilus-52w2s update-demo-nautilus-68kz4 "
May 19 13:47:34.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-52w2s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1475'
May 19 13:47:34.424: INFO: stderr: ""
May 19 13:47:34.424: INFO: stdout: "true"
May 19 13:47:34.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-52w2s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1475'
May 19 13:47:34.511: INFO: stderr: ""
May 19 13:47:34.511: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 19 13:47:34.511: INFO: validating pod update-demo-nautilus-52w2s
May 19 13:47:34.516: INFO: got data: { "image": "nautilus.jpg" }
May 19 13:47:34.516: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 19 13:47:34.516: INFO: update-demo-nautilus-52w2s is verified up and running
May 19 13:47:34.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-68kz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1475'
May 19 13:47:34.596: INFO: stderr: ""
May 19 13:47:34.596: INFO: stdout: "true"
May 19 13:47:34.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-68kz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1475'
May 19 13:47:34.690: INFO: stderr: ""
May 19 13:47:34.690: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 19 13:47:34.690: INFO: validating pod update-demo-nautilus-68kz4
May 19 13:47:34.693: INFO: got data: { "image": "nautilus.jpg" }
May 19 13:47:34.693: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 19 13:47:34.693: INFO: update-demo-nautilus-68kz4 is verified up and running
STEP: rolling-update to new replication controller
May 19 13:47:34.694: INFO: scanned /root for discovery docs:
May 19 13:47:34.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1475'
May 19 13:47:57.347: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 19 13:47:57.347: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 19 13:47:57.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1475'
May 19 13:47:57.724: INFO: stderr: ""
May 19 13:47:57.724: INFO: stdout: "update-demo-kitten-65xdh update-demo-kitten-vzrrf "
May 19 13:47:57.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-65xdh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1475'
May 19 13:47:57.811: INFO: stderr: ""
May 19 13:47:57.812: INFO: stdout: "true"
May 19 13:47:57.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-65xdh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1475'
May 19 13:47:57.902: INFO: stderr: ""
May 19 13:47:57.902: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 19 13:47:57.902: INFO: validating pod update-demo-kitten-65xdh
May 19 13:47:57.906: INFO: got data: { "image": "kitten.jpg" }
May 19 13:47:57.906: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 19 13:47:57.906: INFO: update-demo-kitten-65xdh is verified up and running
May 19 13:47:57.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vzrrf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1475'
May 19 13:47:58.930: INFO: stderr: ""
May 19 13:47:58.930: INFO: stdout: "true"
May 19 13:47:58.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vzrrf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1475'
May 19 13:48:00.475: INFO: stderr: ""
May 19 13:48:00.475: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 19 13:48:00.475: INFO: validating pod update-demo-kitten-vzrrf
May 19 13:48:00.490: INFO: got data: { "image": "kitten.jpg" }
May 19 13:48:00.490: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 19 13:48:00.490: INFO: update-demo-kitten-vzrrf is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:48:00.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1475" for this suite.
May 19 13:48:22.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:48:22.626: INFO: namespace kubectl-1475 deletion completed in 22.13245369s
• [SLOW TEST:54.081 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:48:22.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7086
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7086
STEP: Creating statefulset with conflicting port in namespace statefulset-7086
STEP: Waiting until pod test-pod will start running in namespace statefulset-7086
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7086
May 19 13:48:26.736: INFO: Observed stateful pod in namespace: statefulset-7086, name: ss-0, uid: 62349276-6ed3-45b8-b757-2cba863951fc, status phase: Pending. Waiting for statefulset controller to delete.
May 19 13:48:32.150: INFO: Observed stateful pod in namespace: statefulset-7086, name: ss-0, uid: 62349276-6ed3-45b8-b757-2cba863951fc, status phase: Failed. Waiting for statefulset controller to delete.
May 19 13:48:32.172: INFO: Observed stateful pod in namespace: statefulset-7086, name: ss-0, uid: 62349276-6ed3-45b8-b757-2cba863951fc, status phase: Failed. Waiting for statefulset controller to delete.
May 19 13:48:32.223: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7086
STEP: Removing pod with conflicting port in namespace statefulset-7086
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7086 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 19 13:48:52.415: INFO: Deleting all statefulset in ns statefulset-7086
May 19 13:48:52.418: INFO: Scaling statefulset ss to 0
May 19 13:49:02.433: INFO: Waiting for statefulset status.replicas updated to 0
May 19 13:49:02.435: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:49:02.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7086" for this suite.
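The recreate check above watches pod ss-0 until the StatefulSet controller deletes the Failed copy and a pod with the same name but a new UID appears. A minimal sketch of that detection logic, with pod observations as (uid, phase) tuples standing in for watch events (the second UID below is made up for illustration):

```python
def observe_recreation(events):
    """Return the UID of the recreated pod once the originally observed UID
    has been replaced, or None if no recreation was seen.  `events` is an
    ordered sequence of (uid, phase) observations for one pod name, a
    stand-in for a real API watch stream."""
    first_uid = None
    for uid, phase in events:
        if first_uid is None:
            first_uid = uid  # remember the original pod's identity
        elif uid != first_uid:
            return uid  # same name, new UID: the controller recreated the pod
    return None

events = [
    ("62349276-6ed3-45b8-b757-2cba863951fc", "Pending"),
    ("62349276-6ed3-45b8-b757-2cba863951fc", "Failed"),
    ("0f17c2a1-0000-0000-0000-000000000000", "Running"),  # hypothetical new UID
]
new_uid = observe_recreation(events)
```

Comparing UIDs rather than names is the key point: the StatefulSet controller reuses the ordinal name ss-0, so only the UID distinguishes the old object from its replacement.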
May 19 13:49:08.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:49:08.580: INFO: namespace statefulset-7086 deletion completed in 6.114365336s
• [SLOW TEST:45.953 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:49:08.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 19 13:49:08.661: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59532fbb-261e-4f00-911b-d808f28dc3ad" in namespace "downward-api-6461" to be "success or failure"
May 19 13:49:08.681: INFO: Pod "downwardapi-volume-59532fbb-261e-4f00-911b-d808f28dc3ad": Phase="Pending", Reason="", readiness=false. Elapsed: 20.661317ms
May 19 13:49:10.686: INFO: Pod "downwardapi-volume-59532fbb-261e-4f00-911b-d808f28dc3ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024834059s
May 19 13:49:12.690: INFO: Pod "downwardapi-volume-59532fbb-261e-4f00-911b-d808f28dc3ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028955961s
STEP: Saw pod success
May 19 13:49:12.690: INFO: Pod "downwardapi-volume-59532fbb-261e-4f00-911b-d808f28dc3ad" satisfied condition "success or failure"
May 19 13:49:12.693: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-59532fbb-261e-4f00-911b-d808f28dc3ad container client-container:
STEP: delete the pod
May 19 13:49:12.708: INFO: Waiting for pod downwardapi-volume-59532fbb-261e-4f00-911b-d808f28dc3ad to disappear
May 19 13:49:12.772: INFO: Pod downwardapi-volume-59532fbb-261e-4f00-911b-d808f28dc3ad no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:49:12.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6461" for this suite.
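The downward-API test above mounts a volume that exposes the container's memory request as a file the container can read back. A sketch of the kind of pod manifest involved, written as a plain dict (the pod and volume names here are illustrative, but `downwardAPI` volume items with a `resourceFieldRef` of `requests.memory` are the real API fields):

```python
# Hypothetical pod spec: a downward-API volume exposing the container's
# own memory request at /etc/podinfo/memory_request.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},  # illustrative name
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
            "resources": {"requests": {"memory": "32Mi"}},
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {"items": [{
                "path": "memory_request",
                # resourceFieldRef projects a container resource value,
                # here the memory request, into the mounted file
                "resourceFieldRef": {
                    "containerName": "client-container",
                    "resource": "requests.memory",
                },
            }]},
        }],
        "restartPolicy": "Never",
    },
}
```

The test then reads the container's logs and checks that the value written from the mounted file matches the declared request, which is why the pod only needs to run to completion once.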
May 19 13:49:18.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:49:19.030: INFO: namespace downward-api-6461 deletion completed in 6.253299286s
• [SLOW TEST:10.449 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:49:19.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 19 13:49:19.147: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
May 19 13:49:19.730: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
May 19 13:49:22.349: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725492959, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725492959, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725492959, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725492959, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 19 13:49:24.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725492959, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725492959, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725492959, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725492959, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 19 13:49:26.982: INFO: Waited 623.253282ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:49:27.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-9650" for this suite.
May 19 13:49:33.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:49:33.604: INFO: namespace aggregator-9650 deletion completed in 6.172128831s
• [SLOW TEST:14.574 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:49:33.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 19 13:49:33.731: INFO: Create a RollingUpdate DaemonSet
May 19 13:49:33.734: INFO: Check that daemon pods launch on every node of the cluster
May 19 13:49:33.756: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:49:33.773: INFO: Number of nodes with available pods: 0
May 19 13:49:33.773: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:49:34.777: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:49:34.780: INFO: Number of nodes with available pods: 0
May 19 13:49:34.780: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:49:35.778: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:49:35.783: INFO: Number of nodes with available pods: 0
May 19 13:49:35.783: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:49:36.778: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:49:36.781: INFO: Number of nodes with available pods: 0
May 19 13:49:36.781: INFO: Node iruya-worker is running more than one daemon pod
May 19 13:49:37.777: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 13:49:37.780: INFO: Number of nodes with available pods: 1
May 19 13:49:37.780: INFO: Node iruya-worker2 is running more than one daemon pod
May 19
13:49:38.778: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 13:49:38.782: INFO: Number of nodes with available pods: 2 May 19 13:49:38.782: INFO: Number of running nodes: 2, number of available pods: 2 May 19 13:49:38.782: INFO: Update the DaemonSet to trigger a rollout May 19 13:49:38.789: INFO: Updating DaemonSet daemon-set May 19 13:49:52.826: INFO: Roll back the DaemonSet before rollout is complete May 19 13:49:52.832: INFO: Updating DaemonSet daemon-set May 19 13:49:52.832: INFO: Make sure DaemonSet rollback is complete May 19 13:49:52.844: INFO: Wrong image for pod: daemon-set-8p6qv. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 19 13:49:52.844: INFO: Pod daemon-set-8p6qv is not available May 19 13:49:52.868: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 13:49:53.873: INFO: Wrong image for pod: daemon-set-8p6qv. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
May 19 13:49:53.873: INFO: Pod daemon-set-8p6qv is not available May 19 13:49:53.878: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 13:49:54.872: INFO: Pod daemon-set-nfn6q is not available May 19 13:49:54.876: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7861, will wait for the garbage collector to delete the pods May 19 13:49:54.939: INFO: Deleting DaemonSet.extensions daemon-set took: 6.858439ms May 19 13:49:55.039: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.203957ms May 19 13:50:02.243: INFO: Number of nodes with available pods: 0 May 19 13:50:02.243: INFO: Number of running nodes: 0, number of available pods: 0 May 19 13:50:02.246: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7861/daemonsets","resourceVersion":"11760748"},"items":null} May 19 13:50:02.248: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7861/pods","resourceVersion":"11760748"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:50:02.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7861" for this suite. 
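The rollback entries above come from a RollingUpdate DaemonSet that is updated to a broken image (`foo:non-existent`) mid-rollout and then rolled back to `docker.io/library/nginx:1.14-alpine`. A minimal sketch of such a DaemonSet — selector labels and container name are assumptions, not taken from the test source — looks like:

```yaml
# Sketch only: labels and container name are assumed; the image and
# DaemonSet name match what the log reports.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

Updating `image` to `foo:non-existent` produces the "Wrong image for pod" lines seen above; rolling back restores the nginx image without unnecessarily restarting pods that never ran the bad revision, which is exactly what the test asserts.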
May 19 13:50:08.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:50:08.356: INFO: namespace daemonsets-7861 deletion completed in 6.094823138s • [SLOW TEST:34.751 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:50:08.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 19 13:50:08.416: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13f7479a-f95e-4b46-a1bd-979ee7125e15" in namespace "downward-api-9629" to be "success or failure" May 19 13:50:08.453: INFO: Pod "downwardapi-volume-13f7479a-f95e-4b46-a1bd-979ee7125e15": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.700807ms May 19 13:50:10.457: INFO: Pod "downwardapi-volume-13f7479a-f95e-4b46-a1bd-979ee7125e15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040228682s May 19 13:50:12.462: INFO: Pod "downwardapi-volume-13f7479a-f95e-4b46-a1bd-979ee7125e15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045929927s STEP: Saw pod success May 19 13:50:12.462: INFO: Pod "downwardapi-volume-13f7479a-f95e-4b46-a1bd-979ee7125e15" satisfied condition "success or failure" May 19 13:50:12.466: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-13f7479a-f95e-4b46-a1bd-979ee7125e15 container client-container: STEP: delete the pod May 19 13:50:12.488: INFO: Waiting for pod downwardapi-volume-13f7479a-f95e-4b46-a1bd-979ee7125e15 to disappear May 19 13:50:12.507: INFO: Pod downwardapi-volume-13f7479a-f95e-4b46-a1bd-979ee7125e15 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:50:12.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9629" for this suite. 
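The DefaultMode test above mounts a downward API volume and verifies the file permission bits that `defaultMode` applies to every projected file. A hedged sketch of the kind of pod it creates — the image, command, and mode value here are assumptions, not the test's actual spec:

```yaml
# Assumed example: image, command, and the 0400 mode are illustrative,
# not copied from the conformance test source.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # applied to every file in this volume
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```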
May 19 13:50:18.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:50:18.605: INFO: namespace downward-api-9629 deletion completed in 6.093333352s • [SLOW TEST:10.249 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:50:18.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin May 19 13:50:18.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7604 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 19 13:50:22.273: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0519 13:50:22.207296 1897 log.go:172] (0xc000118bb0) (0xc000820140) Create stream\nI0519 13:50:22.207376 1897 log.go:172] (0xc000118bb0) (0xc000820140) Stream added, broadcasting: 1\nI0519 13:50:22.212255 1897 log.go:172] (0xc000118bb0) Reply frame received for 1\nI0519 13:50:22.212323 1897 log.go:172] (0xc000118bb0) (0xc0004ba0a0) Create stream\nI0519 13:50:22.212336 1897 log.go:172] (0xc000118bb0) (0xc0004ba0a0) Stream added, broadcasting: 3\nI0519 13:50:22.213597 1897 log.go:172] (0xc000118bb0) Reply frame received for 3\nI0519 13:50:22.213632 1897 log.go:172] (0xc000118bb0) (0xc000820000) Create stream\nI0519 13:50:22.213641 1897 log.go:172] (0xc000118bb0) (0xc000820000) Stream added, broadcasting: 5\nI0519 13:50:22.214788 1897 log.go:172] (0xc000118bb0) Reply frame received for 5\nI0519 13:50:22.214826 1897 log.go:172] (0xc000118bb0) (0xc0008200a0) Create stream\nI0519 13:50:22.214840 1897 log.go:172] (0xc000118bb0) (0xc0008200a0) Stream added, broadcasting: 7\nI0519 13:50:22.215766 1897 log.go:172] (0xc000118bb0) Reply frame received for 7\nI0519 13:50:22.215897 1897 log.go:172] (0xc0004ba0a0) (3) Writing data frame\nI0519 13:50:22.215990 1897 log.go:172] (0xc0004ba0a0) (3) Writing data frame\nI0519 13:50:22.216881 1897 log.go:172] (0xc000118bb0) Data frame received for 5\nI0519 13:50:22.216904 1897 log.go:172] (0xc000820000) (5) Data frame handling\nI0519 13:50:22.216922 1897 log.go:172] (0xc000820000) (5) Data frame sent\nI0519 13:50:22.217653 1897 log.go:172] (0xc000118bb0) Data frame received for 5\nI0519 13:50:22.217669 1897 log.go:172] (0xc000820000) (5) Data frame handling\nI0519 13:50:22.217682 1897 log.go:172] (0xc000820000) (5) Data frame sent\nI0519 13:50:22.251260 1897 log.go:172] (0xc000118bb0) Data frame received for 7\nI0519 13:50:22.251295 1897 log.go:172] (0xc0008200a0) (7) Data frame handling\nI0519 13:50:22.251311 1897 
log.go:172] (0xc000118bb0) Data frame received for 5\nI0519 13:50:22.251317 1897 log.go:172] (0xc000820000) (5) Data frame handling\nI0519 13:50:22.251734 1897 log.go:172] (0xc000118bb0) Data frame received for 1\nI0519 13:50:22.251756 1897 log.go:172] (0xc000820140) (1) Data frame handling\nI0519 13:50:22.251764 1897 log.go:172] (0xc000820140) (1) Data frame sent\nI0519 13:50:22.251967 1897 log.go:172] (0xc000118bb0) (0xc000820140) Stream removed, broadcasting: 1\nI0519 13:50:22.252049 1897 log.go:172] (0xc000118bb0) (0xc000820140) Stream removed, broadcasting: 1\nI0519 13:50:22.252058 1897 log.go:172] (0xc000118bb0) (0xc0004ba0a0) Stream removed, broadcasting: 3\nI0519 13:50:22.252077 1897 log.go:172] (0xc000118bb0) (0xc000820000) Stream removed, broadcasting: 5\nI0519 13:50:22.252177 1897 log.go:172] (0xc000118bb0) (0xc0008200a0) Stream removed, broadcasting: 7\nI0519 13:50:22.252460 1897 log.go:172] (0xc000118bb0) Go away received\n" May 19 13:50:22.273: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:50:24.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7604" for this suite. 
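The `kubectl run --generator=job/v1 --restart=OnFailure --attach --stdin` invocation recorded above is roughly equivalent to creating the following Job and attaching to its pod; the container name is an assumption (the generator derives it), while the image and command are taken from the logged command line:

```yaml
# Approximation of the Job the deprecated job/v1 generator produces.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job   # name assumed
        image: docker.io/library/busybox:1.29
        stdin: true
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```

As the deprecation warning in the stderr capture says, `kubectl run --generator=job/v1` was removed in later releases; `kubectl create job` is the replacement.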
May 19 13:50:30.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:50:30.391: INFO: namespace kubectl-7604 deletion completed in 6.108488536s • [SLOW TEST:11.786 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:50:30.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 19 13:50:30.469: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dddef236-4ac7-454f-b636-53635ddd663b" in namespace "downward-api-6315" to be "success or failure" May 19 13:50:30.480: INFO: Pod "downwardapi-volume-dddef236-4ac7-454f-b636-53635ddd663b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.215116ms May 19 13:50:32.484: INFO: Pod "downwardapi-volume-dddef236-4ac7-454f-b636-53635ddd663b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014294324s May 19 13:50:34.488: INFO: Pod "downwardapi-volume-dddef236-4ac7-454f-b636-53635ddd663b": Phase="Running", Reason="", readiness=true. Elapsed: 4.01887611s May 19 13:50:36.493: INFO: Pod "downwardapi-volume-dddef236-4ac7-454f-b636-53635ddd663b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023836014s STEP: Saw pod success May 19 13:50:36.493: INFO: Pod "downwardapi-volume-dddef236-4ac7-454f-b636-53635ddd663b" satisfied condition "success or failure" May 19 13:50:36.496: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-dddef236-4ac7-454f-b636-53635ddd663b container client-container: STEP: delete the pod May 19 13:50:36.537: INFO: Waiting for pod downwardapi-volume-dddef236-4ac7-454f-b636-53635ddd663b to disappear May 19 13:50:36.547: INFO: Pod downwardapi-volume-dddef236-4ac7-454f-b636-53635ddd663b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:50:36.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6315" for this suite. 
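The cpu-request test above exposes the container's `requests.cpu` through a downward API volume `resourceFieldRef`. A sketch under assumed values (image, command, paths, and the request amount are all illustrative; only the `resourceFieldRef` mechanism is what the test exercises):

```yaml
# Assumed values throughout; the resourceFieldRef wiring is the point.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
```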
May 19 13:50:42.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:50:42.642: INFO: namespace downward-api-6315 deletion completed in 6.09188841s • [SLOW TEST:12.251 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:50:42.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-551d209d-07d7-45f3-b7aa-c6e15ee730b7 STEP: Creating a pod to test consume secrets May 19 13:50:42.724: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6c55dcd8-685a-4faa-ad15-c7584bc38630" in namespace "projected-5531" to be "success or failure" May 19 13:50:42.733: INFO: Pod "pod-projected-secrets-6c55dcd8-685a-4faa-ad15-c7584bc38630": Phase="Pending", Reason="", readiness=false. Elapsed: 9.747232ms May 19 13:50:44.738: INFO: Pod "pod-projected-secrets-6c55dcd8-685a-4faa-ad15-c7584bc38630": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014545776s May 19 13:50:46.742: INFO: Pod "pod-projected-secrets-6c55dcd8-685a-4faa-ad15-c7584bc38630": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018830412s STEP: Saw pod success May 19 13:50:46.743: INFO: Pod "pod-projected-secrets-6c55dcd8-685a-4faa-ad15-c7584bc38630" satisfied condition "success or failure" May 19 13:50:46.745: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-6c55dcd8-685a-4faa-ad15-c7584bc38630 container projected-secret-volume-test: STEP: delete the pod May 19 13:50:46.804: INFO: Waiting for pod pod-projected-secrets-6c55dcd8-685a-4faa-ad15-c7584bc38630 to disappear May 19 13:50:46.828: INFO: Pod pod-projected-secrets-6c55dcd8-685a-4faa-ad15-c7584bc38630 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:50:46.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5531" for this suite. 
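The projected-secret test above consumes a Secret through a `projected` volume rather than a plain `secret` volume. A minimal sketch with assumed names and key (the projection structure, not the names, is what the test covers):

```yaml
# Names and key are assumptions; only the projected-volume shape matters.
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-example
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-example
```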
May 19 13:50:52.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:50:52.965: INFO: namespace projected-5531 deletion completed in 6.131321299s • [SLOW TEST:10.322 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:50:52.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 19 13:50:53.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 19 13:50:53.144: INFO: stderr: "" May 19 13:50:53.145: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", 
GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:50:53.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2311" for this suite. May 19 13:50:59.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:50:59.240: INFO: namespace kubectl-2311 deletion completed in 6.089544078s • [SLOW TEST:6.275 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:50:59.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 
configMap with name projected-configmap-test-volume-map-8e824c01-84ae-465d-a9e4-325cea3dee19 STEP: Creating a pod to test consume configMaps May 19 13:50:59.313: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-372ae4af-6682-4dcb-9b19-56444a715f5a" in namespace "projected-7465" to be "success or failure" May 19 13:50:59.363: INFO: Pod "pod-projected-configmaps-372ae4af-6682-4dcb-9b19-56444a715f5a": Phase="Pending", Reason="", readiness=false. Elapsed: 50.123607ms May 19 13:51:01.429: INFO: Pod "pod-projected-configmaps-372ae4af-6682-4dcb-9b19-56444a715f5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115878032s May 19 13:51:03.434: INFO: Pod "pod-projected-configmaps-372ae4af-6682-4dcb-9b19-56444a715f5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120573121s STEP: Saw pod success May 19 13:51:03.434: INFO: Pod "pod-projected-configmaps-372ae4af-6682-4dcb-9b19-56444a715f5a" satisfied condition "success or failure" May 19 13:51:03.438: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-372ae4af-6682-4dcb-9b19-56444a715f5a container projected-configmap-volume-test: STEP: delete the pod May 19 13:51:03.476: INFO: Waiting for pod pod-projected-configmaps-372ae4af-6682-4dcb-9b19-56444a715f5a to disappear May 19 13:51:03.513: INFO: Pod pod-projected-configmaps-372ae4af-6682-4dcb-9b19-56444a715f5a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:51:03.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7465" for this suite. 
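The mappings-as-non-root variant above adds two things to the basic projected-ConfigMap case: an `items` mapping that renames a key to a different path on disk, and a non-root security context. A sketch with assumed names, key, and UID:

```yaml
# Assumed names/values; shows the key-to-path mapping plus runAsUser.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000      # non-root, per the [LinuxOnly] variant; UID assumed
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-example
          items:
          - key: data-2
            path: path/to/data-2
```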
May 19 13:51:09.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:51:09.608: INFO: namespace projected-7465 deletion completed in 6.090922001s • [SLOW TEST:10.368 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:51:09.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-6386 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6386 STEP: Deleting pre-stop pod May 19 13:51:22.715: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:51:22.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6386" for this suite. May 19 13:52:02.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:52:02.840: INFO: namespace prestop-6386 deletion completed in 40.112608519s • [SLOW TEST:53.232 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:52:02.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait 
for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0519 13:52:33.518722 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 19 13:52:33.518: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:52:33.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4289" for this suite. 
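Orphan propagation, as exercised above, is carried in the delete request's body: the Deployment is removed, but the garbage collector leaves its ReplicaSet in place. The `DeleteOptions` payload looks like:

```yaml
# meta/v1 DeleteOptions body sent with the DELETE request.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

With a v1.15-era kubectl, `kubectl delete deployment <name> --cascade=false` sends the same orphaning semantics.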
May 19 13:52:39.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:52:39.656: INFO: namespace gc-4289 deletion completed in 6.135947794s • [SLOW TEST:36.815 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:52:39.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults May 19 13:52:39.927: INFO: Waiting up to 5m0s for pod "client-containers-75b37eac-7115-4140-a469-6fc2b28d1677" in namespace "containers-8322" to be "success or failure" May 19 13:52:39.932: INFO: Pod "client-containers-75b37eac-7115-4140-a469-6fc2b28d1677": Phase="Pending", Reason="", readiness=false. Elapsed: 4.648274ms May 19 13:52:42.113: INFO: Pod "client-containers-75b37eac-7115-4140-a469-6fc2b28d1677": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.186232456s May 19 13:52:44.117: INFO: Pod "client-containers-75b37eac-7115-4140-a469-6fc2b28d1677": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.19047358s STEP: Saw pod success May 19 13:52:44.117: INFO: Pod "client-containers-75b37eac-7115-4140-a469-6fc2b28d1677" satisfied condition "success or failure" May 19 13:52:44.120: INFO: Trying to get logs from node iruya-worker pod client-containers-75b37eac-7115-4140-a469-6fc2b28d1677 container test-container: STEP: delete the pod May 19 13:52:44.341: INFO: Waiting for pod client-containers-75b37eac-7115-4140-a469-6fc2b28d1677 to disappear May 19 13:52:44.381: INFO: Pod client-containers-75b37eac-7115-4140-a469-6fc2b28d1677 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:52:44.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8322" for this suite. May 19 13:52:50.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:52:50.473: INFO: namespace containers-8322 deletion completed in 6.088401065s • [SLOW TEST:10.817 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:52:50.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 19 13:52:50.565: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:52:51.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5439" for this suite. May 19 13:52:57.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:52:57.760: INFO: namespace custom-resource-definition-5439 deletion completed in 6.137269563s • [SLOW TEST:7.287 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:52:57.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition May 19 13:52:57.840: INFO: Waiting up to 5m0s for pod "var-expansion-8e141b3a-9e60-432a-a168-38aa4ecc264a" in namespace "var-expansion-4142" to be "success or failure" May 19 13:52:57.848: INFO: Pod "var-expansion-8e141b3a-9e60-432a-a168-38aa4ecc264a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.788381ms May 19 13:52:59.857: INFO: Pod "var-expansion-8e141b3a-9e60-432a-a168-38aa4ecc264a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017042462s May 19 13:53:01.861: INFO: Pod "var-expansion-8e141b3a-9e60-432a-a168-38aa4ecc264a": Phase="Running", Reason="", readiness=true. Elapsed: 4.020702873s May 19 13:53:03.865: INFO: Pod "var-expansion-8e141b3a-9e60-432a-a168-38aa4ecc264a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.025089621s STEP: Saw pod success May 19 13:53:03.865: INFO: Pod "var-expansion-8e141b3a-9e60-432a-a168-38aa4ecc264a" satisfied condition "success or failure" May 19 13:53:03.868: INFO: Trying to get logs from node iruya-worker pod var-expansion-8e141b3a-9e60-432a-a168-38aa4ecc264a container dapi-container: STEP: delete the pod May 19 13:53:03.906: INFO: Waiting for pod var-expansion-8e141b3a-9e60-432a-a168-38aa4ecc264a to disappear May 19 13:53:03.914: INFO: Pod var-expansion-8e141b3a-9e60-432a-a168-38aa4ecc264a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:53:03.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4142" for this suite. May 19 13:53:09.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:53:10.071: INFO: namespace var-expansion-4142 deletion completed in 6.153607095s • [SLOW TEST:12.311 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:53:10.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace 
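The var-expansion test that just passed exercises Kubernetes' `$(VAR)` substitution: an env var defined earlier in the list can be referenced when composing a later one. A minimal pod that demonstrates this (names are illustrative, not the ones the suite generates):

```shell
# A pod whose second env var is composed from the first via $(VAR)
# expansion. The expansion is done by Kubernetes when the container
# is set up, not by the shell.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $(COMPOSED_VAR)"]
    env:
    - name: BASE_VAR
      value: "hello"
    - name: COMPOSED_VAR
      value: "$(BASE_VAR)-world"   # references must appear earlier in the env list
EOF
```

`kubectl logs var-expansion-demo` should then show the composed value rather than the literal `$(BASE_VAR)` text.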
[BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 19 13:53:14.191: INFO: Waiting up to 5m0s for pod "client-envvars-5250cc0b-551c-4f3d-8bbc-25c34d007420" in namespace "pods-2634" to be "success or failure" May 19 13:53:14.196: INFO: Pod "client-envvars-5250cc0b-551c-4f3d-8bbc-25c34d007420": Phase="Pending", Reason="", readiness=false. Elapsed: 5.177982ms May 19 13:53:16.245: INFO: Pod "client-envvars-5250cc0b-551c-4f3d-8bbc-25c34d007420": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054036565s May 19 13:53:18.250: INFO: Pod "client-envvars-5250cc0b-551c-4f3d-8bbc-25c34d007420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058303943s STEP: Saw pod success May 19 13:53:18.250: INFO: Pod "client-envvars-5250cc0b-551c-4f3d-8bbc-25c34d007420" satisfied condition "success or failure" May 19 13:53:18.252: INFO: Trying to get logs from node iruya-worker pod client-envvars-5250cc0b-551c-4f3d-8bbc-25c34d007420 container env3cont: STEP: delete the pod May 19 13:53:18.275: INFO: Waiting for pod client-envvars-5250cc0b-551c-4f3d-8bbc-25c34d007420 to disappear May 19 13:53:18.286: INFO: Pod client-envvars-5250cc0b-551c-4f3d-8bbc-25c34d007420 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:53:18.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2634" for this suite. 
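The pods test above checks that containers started after a Service exists receive kubelet-injected environment variables derived from the service name (uppercased, with hyphens mapped to underscores). A rough manual equivalent, with illustrative names:

```shell
# Pods created *after* a Service exists get variables named after it.
kubectl create deployment backend --image=nginx:1.14-alpine
kubectl expose deployment backend --name=server-envvars --port=8080

# A throwaway pod that dumps the injected variables:
kubectl run env-dump --image=busybox --restart=Never -- \
  sh -c 'env | grep SERVER_ENVVARS'
# Expect entries such as SERVER_ENVVARS_SERVICE_HOST and
# SERVER_ENVVARS_SERVICE_PORT in the pod's log.
```

Ordering matters: a pod created before the service will not see these variables, which is why the e2e test provisions the server pod and service first.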
May 19 13:54:04.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:54:04.402: INFO: namespace pods-2634 deletion completed in 46.113109232s • [SLOW TEST:54.330 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:54:04.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 19 13:54:04.496: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 19 13:54:09.499: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 19 13:54:09.499: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 19 13:54:09.520: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-7584,SelfLink:/apis/apps/v1/namespaces/deployment-7584/deployments/test-cleanup-deployment,UID:a006579d-3f43-4a86-af3b-648b9ce89f81,ResourceVersion:11761622,Generation:1,CreationTimestamp:2020-05-19 13:54:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 19 13:54:09.527: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-7584,SelfLink:/apis/apps/v1/namespaces/deployment-7584/replicasets/test-cleanup-deployment-55bbcbc84c,UID:a96fe7cc-9460-4522-80be-a7e59cfbc611,ResourceVersion:11761624,Generation:1,CreationTimestamp:2020-05-19 13:54:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 
a006579d-3f43-4a86-af3b-648b9ce89f81 0xc002594be7 0xc002594be8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 19 13:54:09.527: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 19 13:54:09.527: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-7584,SelfLink:/apis/apps/v1/namespaces/deployment-7584/replicasets/test-cleanup-controller,UID:14354cce-8229-4dcb-8694-de2cd87e0cd8,ResourceVersion:11761623,Generation:1,CreationTimestamp:2020-05-19 13:54:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment a006579d-3f43-4a86-af3b-648b9ce89f81 0xc002594b17 0xc002594b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 19 13:54:09.578: INFO: Pod "test-cleanup-controller-kb7df" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-kb7df,GenerateName:test-cleanup-controller-,Namespace:deployment-7584,SelfLink:/api/v1/namespaces/deployment-7584/pods/test-cleanup-controller-kb7df,UID:3ef0fde3-3bdc-42c9-a218-29c84559246e,ResourceVersion:11761616,Generation:0,CreationTimestamp:2020-05-19 13:54:04 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 14354cce-8229-4dcb-8694-de2cd87e0cd8 0xc0025954d7 0xc0025954d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-d8vw5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d8vw5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-d8vw5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002595550} {node.kubernetes.io/unreachable Exists NoExecute 0xc002595570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:54:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:54:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:54:07 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:54:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.219,StartTime:2020-05-19 13:54:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-19 13:54:07 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8c597d235e473b6b6858d17859e068a90b1f5077b6c0c5dd1d54aef4916f656c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 19 13:54:09.578: INFO: Pod "test-cleanup-deployment-55bbcbc84c-nlnjg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-nlnjg,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-7584,SelfLink:/api/v1/namespaces/deployment-7584/pods/test-cleanup-deployment-55bbcbc84c-nlnjg,UID:315ee000-58ed-4956-b607-b509ab8400b3,ResourceVersion:11761630,Generation:0,CreationTimestamp:2020-05-19 13:54:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c a96fe7cc-9460-4522-80be-a7e59cfbc611 0xc002595657 0xc002595658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-d8vw5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d8vw5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-d8vw5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025956d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025956f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 13:54:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:54:09.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7584" for this suite. 
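The deployment dump above shows `RevisionHistoryLimit:*0`, which is the setting that lets the controller delete superseded ReplicaSets as soon as a rollout completes. A minimal sketch of a deployment configured the same way (names and labels are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  replicas: 1
  revisionHistoryLimit: 0   # keep no old ReplicaSets after a rollout
  selector:
    matchLabels: {app: cleanup-demo}
  template:
    metadata:
      labels: {app: cleanup-demo}
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# After any pod-template change triggers a rollout, `kubectl get rs`
# should list only the newest ReplicaSet; older ones are cleaned up.
```

With the default `revisionHistoryLimit` (10), old ReplicaSets are retained scaled to zero so that `kubectl rollout undo` can reuse them.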
May 19 13:54:15.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:54:15.734: INFO: namespace deployment-7584 deletion completed in 6.097550601s • [SLOW TEST:11.331 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:54:15.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy May 19 13:54:15.768: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix786585470/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:54:15.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4841" for this suite. 
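The kubectl test above starts the API proxy on a Unix domain socket rather than a TCP port and then fetches `/api/` through it. The same check can be run by hand with curl's `--unix-socket` option (socket path is illustrative):

```shell
# Serve the API over a Unix socket instead of a TCP port.
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1

# Talk to the proxy through the socket; the http://localhost/ host
# part is required by curl but ignored for routing.
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/

kill %1
```

A Unix socket restricts access to local users with filesystem permission on the socket, which is why the conformance suite covers it separately from the TCP proxy.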
May 19 13:54:21.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:54:21.919: INFO: namespace kubectl-4841 deletion completed in 6.077146853s • [SLOW TEST:6.185 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:54:21.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-19a4fccc-a1c9-4876-b502-d0abff229564 STEP: Creating a pod to test consume secrets May 19 13:54:22.020: INFO: Waiting up to 5m0s for pod "pod-secrets-286288a8-182b-40f6-b1ee-2ae3242bcb89" in namespace "secrets-5801" to be "success or failure" May 19 13:54:22.023: INFO: Pod "pod-secrets-286288a8-182b-40f6-b1ee-2ae3242bcb89": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.891041ms May 19 13:54:24.026: INFO: Pod "pod-secrets-286288a8-182b-40f6-b1ee-2ae3242bcb89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006020016s May 19 13:54:26.031: INFO: Pod "pod-secrets-286288a8-182b-40f6-b1ee-2ae3242bcb89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010201374s STEP: Saw pod success May 19 13:54:26.031: INFO: Pod "pod-secrets-286288a8-182b-40f6-b1ee-2ae3242bcb89" satisfied condition "success or failure" May 19 13:54:26.034: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-286288a8-182b-40f6-b1ee-2ae3242bcb89 container secret-volume-test: STEP: delete the pod May 19 13:54:26.074: INFO: Waiting for pod pod-secrets-286288a8-182b-40f6-b1ee-2ae3242bcb89 to disappear May 19 13:54:26.096: INFO: Pod pod-secrets-286288a8-182b-40f6-b1ee-2ae3242bcb89 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:54:26.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5801" for this suite. 
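The secrets test above mounts a secret volume with `items` (remapping a key to a custom file path) and an explicit per-item `mode`. A sketch of that shape, with illustrative secret, key, and pod names:

```shell
kubectl create secret generic test-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
      items:
      - key: data-1
        path: new-path-data-1   # key remapped to this filename
        mode: 0400              # per-item mode; defaultMode covers unlisted keys
EOF
```

The pod's log should show the file created at the remapped path with `-r--------` permissions, which is what the `Item Mode set` variant of the test asserts.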
May 19 13:54:32.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:54:32.322: INFO: namespace secrets-5801 deletion completed in 6.223131324s
• [SLOW TEST:10.403 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:54:32.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 19 13:54:32.392: INFO: Waiting up to 5m0s for pod "downward-api-96d805d9-415f-4f8e-939a-1920b8dcc189" in namespace "downward-api-6770" to be "success or failure"
May 19 13:54:32.395: INFO: Pod "downward-api-96d805d9-415f-4f8e-939a-1920b8dcc189": Phase="Pending", Reason="", readiness=false. Elapsed: 2.825688ms
May 19 13:54:34.420: INFO: Pod "downward-api-96d805d9-415f-4f8e-939a-1920b8dcc189": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027797185s
May 19 13:54:36.498: INFO: Pod "downward-api-96d805d9-415f-4f8e-939a-1920b8dcc189": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105852107s
STEP: Saw pod success
May 19 13:54:36.498: INFO: Pod "downward-api-96d805d9-415f-4f8e-939a-1920b8dcc189" satisfied condition "success or failure"
May 19 13:54:36.502: INFO: Trying to get logs from node iruya-worker pod downward-api-96d805d9-415f-4f8e-939a-1920b8dcc189 container dapi-container:
STEP: delete the pod
May 19 13:54:36.529: INFO: Waiting for pod downward-api-96d805d9-415f-4f8e-939a-1920b8dcc189 to disappear
May 19 13:54:36.539: INFO: Pod downward-api-96d805d9-415f-4f8e-939a-1920b8dcc189 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:54:36.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6770" for this suite.
May 19 13:54:42.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:54:42.629: INFO: namespace downward-api-6770 deletion completed in 6.087041121s
• [SLOW TEST:10.306 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:54:42.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4227.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4227.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4227.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4227.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4227.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4227.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 19 13:54:50.788: INFO: DNS probes using dns-4227/dns-test-91fec3c6-9e5e-410f-bbd3-05c5aaaf9343 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:54:50.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4227" for this suite.
May 19 13:54:56.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:54:56.942: INFO: namespace dns-4227 deletion completed in 6.119611777s
• [SLOW TEST:14.312 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:54:56.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-e9278c9b-096e-49d6-a0d6-0e3688f189e5
STEP: Creating a pod to test consume configMaps
May 19 13:54:57.026: INFO: Waiting up to 5m0s for pod "pod-configmaps-ce906082-1c8e-40ed-a256-8c116ece91a0" in namespace "configmap-4403" to be "success or failure"
May 19 13:54:57.053: INFO: Pod "pod-configmaps-ce906082-1c8e-40ed-a256-8c116ece91a0": Phase="Pending", Reason="", readiness=false. Elapsed: 26.639506ms
May 19 13:54:59.057: INFO: Pod "pod-configmaps-ce906082-1c8e-40ed-a256-8c116ece91a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030848278s
May 19 13:55:01.061: INFO: Pod "pod-configmaps-ce906082-1c8e-40ed-a256-8c116ece91a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035131114s
STEP: Saw pod success
May 19 13:55:01.061: INFO: Pod "pod-configmaps-ce906082-1c8e-40ed-a256-8c116ece91a0" satisfied condition "success or failure"
May 19 13:55:01.064: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-ce906082-1c8e-40ed-a256-8c116ece91a0 container configmap-volume-test:
STEP: delete the pod
May 19 13:55:01.080: INFO: Waiting for pod pod-configmaps-ce906082-1c8e-40ed-a256-8c116ece91a0 to disappear
May 19 13:55:01.084: INFO: Pod pod-configmaps-ce906082-1c8e-40ed-a256-8c116ece91a0 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:55:01.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4403" for this suite.
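The repeated "Waiting up to 5m0s for pod ... to be \"success or failure\"" entries above come from the framework polling a pod's phase until it reaches a terminal state. A minimal sketch of that polling pattern, with a hypothetical `get_phase` callable standing in for the actual API call:

```python
import time

def wait_for_success_or_failure(get_phase, timeout=300.0, interval=2.0):
    """Poll a pod's phase until it reaches a terminal state or the timeout expires.

    get_phase is a stand-in for fetching pod.status.phase from the API server.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Stubbed phase sequence mirroring the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_success_or_failure(lambda: next(phases), interval=0.01))  # → Succeeded
```

This mirrors only the control flow the log shows (poll, compare phase, elapse the timeout), not the real e2e framework helper.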
May 19 13:55:07.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:55:07.178: INFO: namespace configmap-4403 deletion completed in 6.091203133s
• [SLOW TEST:10.236 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:55:07.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:55:33.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1924" for this suite.
May 19 13:55:39.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:55:39.574: INFO: namespace namespaces-1924 deletion completed in 6.108330397s
STEP: Destroying namespace "nsdeletetest-5335" for this suite.
May 19 13:55:39.577: INFO: Namespace nsdeletetest-5335 was already deleted
STEP: Destroying namespace "nsdeletetest-9896" for this suite.
May 19 13:55:45.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:55:45.672: INFO: namespace nsdeletetest-9896 deletion completed in 6.095338306s
• [SLOW TEST:38.494 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:55:45.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 19 13:55:45.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-7251'
May 19 13:55:45.807: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 19 13:55:45.807: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
May 19 13:55:49.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7251'
May 19 13:55:49.959: INFO: stderr: ""
May 19 13:55:49.959: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:55:49.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7251" for this suite.
May 19 13:56:11.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:56:12.055: INFO: namespace kubectl-7251 deletion completed in 22.092096499s
• [SLOW TEST:26.383 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:56:12.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 19 13:56:12.142: INFO: Waiting up to 5m0s for pod "pod-409bc071-f6df-4529-91a8-d9cb2ff644d3" in namespace "emptydir-3868" to be "success or failure"
May 19 13:56:12.147: INFO: Pod "pod-409bc071-f6df-4529-91a8-d9cb2ff644d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.376542ms
May 19 13:56:14.151: INFO: Pod "pod-409bc071-f6df-4529-91a8-d9cb2ff644d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008293801s
May 19 13:56:16.155: INFO: Pod "pod-409bc071-f6df-4529-91a8-d9cb2ff644d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012516494s
STEP: Saw pod success
May 19 13:56:16.155: INFO: Pod "pod-409bc071-f6df-4529-91a8-d9cb2ff644d3" satisfied condition "success or failure"
May 19 13:56:16.157: INFO: Trying to get logs from node iruya-worker2 pod pod-409bc071-f6df-4529-91a8-d9cb2ff644d3 container test-container:
STEP: delete the pod
May 19 13:56:16.286: INFO: Waiting for pod pod-409bc071-f6df-4529-91a8-d9cb2ff644d3 to disappear
May 19 13:56:16.313: INFO: Pod pod-409bc071-f6df-4529-91a8-d9cb2ff644d3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:56:16.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3868" for this suite.
May 19 13:56:22.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:56:22.410: INFO: namespace emptydir-3868 deletion completed in 6.092461112s
• [SLOW TEST:10.354 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:56:22.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-e2166e03-f042-4e5a-a5f5-e0994995297e
STEP: Creating a pod to test consume configMaps
May 19 13:56:22.524: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a946e2df-6bb3-4f2c-a47f-88c9f1938b3f" in namespace "projected-1582" to be "success or failure"
May 19 13:56:22.528: INFO: Pod "pod-projected-configmaps-a946e2df-6bb3-4f2c-a47f-88c9f1938b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25256ms
May 19 13:56:24.533: INFO: Pod "pod-projected-configmaps-a946e2df-6bb3-4f2c-a47f-88c9f1938b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009269199s
May 19 13:56:26.538: INFO: Pod "pod-projected-configmaps-a946e2df-6bb3-4f2c-a47f-88c9f1938b3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013473359s
STEP: Saw pod success
May 19 13:56:26.538: INFO: Pod "pod-projected-configmaps-a946e2df-6bb3-4f2c-a47f-88c9f1938b3f" satisfied condition "success or failure"
May 19 13:56:26.541: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-a946e2df-6bb3-4f2c-a47f-88c9f1938b3f container projected-configmap-volume-test:
STEP: delete the pod
May 19 13:56:26.705: INFO: Waiting for pod pod-projected-configmaps-a946e2df-6bb3-4f2c-a47f-88c9f1938b3f to disappear
May 19 13:56:26.858: INFO: Pod pod-projected-configmaps-a946e2df-6bb3-4f2c-a47f-88c9f1938b3f no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:56:26.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1582" for this suite.
May 19 13:56:32.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:56:32.995: INFO: namespace projected-1582 deletion completed in 6.133334066s
• [SLOW TEST:10.585 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:56:32.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-8784/configmap-test-b2ef414d-dc9e-4885-9feb-aee498058114
STEP: Creating a pod to test consume configMaps
May 19 13:56:33.119: INFO: Waiting up to 5m0s for pod "pod-configmaps-faa8345e-ce63-48f9-82eb-cbd35257e750" in namespace "configmap-8784" to be "success or failure"
May 19 13:56:33.123: INFO: Pod "pod-configmaps-faa8345e-ce63-48f9-82eb-cbd35257e750": Phase="Pending", Reason="", readiness=false. Elapsed: 3.273093ms
May 19 13:56:35.148: INFO: Pod "pod-configmaps-faa8345e-ce63-48f9-82eb-cbd35257e750": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029064832s
May 19 13:56:37.152: INFO: Pod "pod-configmaps-faa8345e-ce63-48f9-82eb-cbd35257e750": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032560156s
STEP: Saw pod success
May 19 13:56:37.152: INFO: Pod "pod-configmaps-faa8345e-ce63-48f9-82eb-cbd35257e750" satisfied condition "success or failure"
May 19 13:56:37.155: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-faa8345e-ce63-48f9-82eb-cbd35257e750 container env-test:
STEP: delete the pod
May 19 13:56:37.192: INFO: Waiting for pod pod-configmaps-faa8345e-ce63-48f9-82eb-cbd35257e750 to disappear
May 19 13:56:37.207: INFO: Pod pod-configmaps-faa8345e-ce63-48f9-82eb-cbd35257e750 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:56:37.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8784" for this suite.
May 19 13:56:43.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:56:43.464: INFO: namespace configmap-8784 deletion completed in 6.253348754s
• [SLOW TEST:10.469 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:56:43.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-bf2a4785-16bb-4489-98a1-2c1b45ccc26f
STEP: Creating a pod to test consume configMaps
May 19 13:56:43.568: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a947b7a8-427c-4599-9fa0-30c67892cc1d" in namespace "projected-1567" to be "success or failure"
May 19 13:56:43.572: INFO: Pod "pod-projected-configmaps-a947b7a8-427c-4599-9fa0-30c67892cc1d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.978776ms
May 19 13:56:45.576: INFO: Pod "pod-projected-configmaps-a947b7a8-427c-4599-9fa0-30c67892cc1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008352035s
May 19 13:56:47.579: INFO: Pod "pod-projected-configmaps-a947b7a8-427c-4599-9fa0-30c67892cc1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011430589s
STEP: Saw pod success
May 19 13:56:47.579: INFO: Pod "pod-projected-configmaps-a947b7a8-427c-4599-9fa0-30c67892cc1d" satisfied condition "success or failure"
May 19 13:56:47.582: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-a947b7a8-427c-4599-9fa0-30c67892cc1d container projected-configmap-volume-test:
STEP: delete the pod
May 19 13:56:47.610: INFO: Waiting for pod pod-projected-configmaps-a947b7a8-427c-4599-9fa0-30c67892cc1d to disappear
May 19 13:56:47.614: INFO: Pod pod-projected-configmaps-a947b7a8-427c-4599-9fa0-30c67892cc1d no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:56:47.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1567" for this suite.
May 19 13:56:53.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:56:53.723: INFO: namespace projected-1567 deletion completed in 6.105917955s
• [SLOW TEST:10.258 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:56:53.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 19 13:56:53.809: INFO: Waiting up to 5m0s for pod "downward-api-a62400c7-9f7f-4d59-944d-d0eb8be9d28d" in namespace "downward-api-2086" to be "success or failure"
May 19 13:56:53.820: INFO: Pod "downward-api-a62400c7-9f7f-4d59-944d-d0eb8be9d28d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.269756ms
May 19 13:56:55.823: INFO: Pod "downward-api-a62400c7-9f7f-4d59-944d-d0eb8be9d28d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014929313s
May 19 13:56:57.827: INFO: Pod "downward-api-a62400c7-9f7f-4d59-944d-d0eb8be9d28d": Phase="Running", Reason="", readiness=true. Elapsed: 4.01869059s
May 19 13:56:59.859: INFO: Pod "downward-api-a62400c7-9f7f-4d59-944d-d0eb8be9d28d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050690733s
STEP: Saw pod success
May 19 13:56:59.859: INFO: Pod "downward-api-a62400c7-9f7f-4d59-944d-d0eb8be9d28d" satisfied condition "success or failure"
May 19 13:56:59.862: INFO: Trying to get logs from node iruya-worker2 pod downward-api-a62400c7-9f7f-4d59-944d-d0eb8be9d28d container dapi-container:
STEP: delete the pod
May 19 13:56:59.904: INFO: Waiting for pod downward-api-a62400c7-9f7f-4d59-944d-d0eb8be9d28d to disappear
May 19 13:56:59.912: INFO: Pod downward-api-a62400c7-9f7f-4d59-944d-d0eb8be9d28d no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:56:59.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2086" for this suite.
May 19 13:57:05.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:57:06.032: INFO: namespace downward-api-2086 deletion completed in 6.117533981s
• [SLOW TEST:12.309 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:57:06.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
May 19 13:57:06.101: INFO: Waiting up to 5m0s for pod "pod-fed4a341-6886-4598-b626-3f4bcac00d68" in namespace "emptydir-3111" to be "success or failure"
May 19 13:57:06.105: INFO: Pod "pod-fed4a341-6886-4598-b626-3f4bcac00d68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.506698ms
May 19 13:57:08.110: INFO: Pod "pod-fed4a341-6886-4598-b626-3f4bcac00d68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009244868s
May 19 13:57:10.116: INFO: Pod "pod-fed4a341-6886-4598-b626-3f4bcac00d68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014625419s
STEP: Saw pod success
May 19 13:57:10.116: INFO: Pod "pod-fed4a341-6886-4598-b626-3f4bcac00d68" satisfied condition "success or failure"
May 19 13:57:10.118: INFO: Trying to get logs from node iruya-worker pod pod-fed4a341-6886-4598-b626-3f4bcac00d68 container test-container:
STEP: delete the pod
May 19 13:57:10.153: INFO: Waiting for pod pod-fed4a341-6886-4598-b626-3f4bcac00d68 to disappear
May 19 13:57:10.155: INFO: Pod pod-fed4a341-6886-4598-b626-3f4bcac00d68 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:57:10.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3111" for this suite.
May 19 13:57:16.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 13:57:16.251: INFO: namespace emptydir-3111 deletion completed in 6.092107424s
• [SLOW TEST:10.218 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 13:57:16.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
May 19 13:57:16.336: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9487,SelfLink:/api/v1/namespaces/watch-9487/configmaps/e2e-watch-test-watch-closed,UID:0b950a4a-0f7f-4fdb-96e5-b7e23e401c6e,ResourceVersion:11762382,Generation:0,CreationTimestamp:2020-05-19 13:57:16 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 19 13:57:16.336: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9487,SelfLink:/api/v1/namespaces/watch-9487/configmaps/e2e-watch-test-watch-closed,UID:0b950a4a-0f7f-4fdb-96e5-b7e23e401c6e,ResourceVersion:11762383,Generation:0,CreationTimestamp:2020-05-19 13:57:16 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
May 19 13:57:16.349: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9487,SelfLink:/api/v1/namespaces/watch-9487/configmaps/e2e-watch-test-watch-closed,UID:0b950a4a-0f7f-4fdb-96e5-b7e23e401c6e,ResourceVersion:11762384,Generation:0,CreationTimestamp:2020-05-19 13:57:16 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 19 13:57:16.349: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9487,SelfLink:/api/v1/namespaces/watch-9487/configmaps/e2e-watch-test-watch-closed,UID:0b950a4a-0f7f-4fdb-96e5-b7e23e401c6e,ResourceVersion:11762385,Generation:0,CreationTimestamp:2020-05-19 13:57:16 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 13:57:16.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9487" for this suite.
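The watch test above closes its first watch after observing ResourceVersion 11762383, then opens a new watch from that version and expects only the later MODIFIED (mutation: 2) and DELETED events. A toy in-memory sketch of that resume-from-resourceVersion semantics (not the real Kubernetes client API; the event list is a stand-in for the API server's change log):

```python
# Toy change log keyed by resourceVersion, mirroring the four events in the log.
events = [
    (11762382, "ADDED"),
    (11762383, "MODIFIED"),
    (11762384, "MODIFIED"),
    (11762385, "DELETED"),
]

def watch_from(events, resource_version):
    """Yield only events strictly newer than the last resourceVersion observed."""
    for rv, kind in events:
        if rv > resource_version:
            yield kind

# A restarted watch from rv 11762383 replays exactly the two later events.
print(list(watch_from(events, 11762383)))  # → ['MODIFIED', 'DELETED']
```

The point the test exercises is that no event is lost or duplicated across the restart, which is what resuming from the last observed resourceVersion guarantees (within the server's retention window).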
May 19 13:57:22.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:57:22.464: INFO: namespace watch-9487 deletion completed in 6.094711361s • [SLOW TEST:6.212 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:57:22.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-5bafdee2-83dc-4716-8100-2a2baeaaf8d4 in namespace container-probe-4075 May 19 13:57:26.587: INFO: Started pod liveness-5bafdee2-83dc-4716-8100-2a2baeaaf8d4 in namespace container-probe-4075 STEP: checking the pod's current state and verifying that restartCount is present May 19 13:57:26.589: INFO: Initial restart count of pod 
liveness-5bafdee2-83dc-4716-8100-2a2baeaaf8d4 is 0 May 19 13:57:46.631: INFO: Restart count of pod container-probe-4075/liveness-5bafdee2-83dc-4716-8100-2a2baeaaf8d4 is now 1 (20.041625239s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:57:46.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4075" for this suite. May 19 13:57:52.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:57:52.736: INFO: namespace container-probe-4075 deletion completed in 6.085584866s • [SLOW TEST:30.272 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:57:52.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 
configMap with name projected-configmap-test-volume-map-709db7d4-9cd4-4755-b3be-ca11793eed9d STEP: Creating a pod to test consume configMaps May 19 13:57:52.802: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eb809e0d-ee33-4141-949e-6b3646602187" in namespace "projected-4720" to be "success or failure" May 19 13:57:52.860: INFO: Pod "pod-projected-configmaps-eb809e0d-ee33-4141-949e-6b3646602187": Phase="Pending", Reason="", readiness=false. Elapsed: 58.4384ms May 19 13:57:54.863: INFO: Pod "pod-projected-configmaps-eb809e0d-ee33-4141-949e-6b3646602187": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061367586s May 19 13:57:56.867: INFO: Pod "pod-projected-configmaps-eb809e0d-ee33-4141-949e-6b3646602187": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065037956s STEP: Saw pod success May 19 13:57:56.867: INFO: Pod "pod-projected-configmaps-eb809e0d-ee33-4141-949e-6b3646602187" satisfied condition "success or failure" May 19 13:57:56.869: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-eb809e0d-ee33-4141-949e-6b3646602187 container projected-configmap-volume-test: STEP: delete the pod May 19 13:57:56.903: INFO: Waiting for pod pod-projected-configmaps-eb809e0d-ee33-4141-949e-6b3646602187 to disappear May 19 13:57:56.907: INFO: Pod pod-projected-configmaps-eb809e0d-ee33-4141-949e-6b3646602187 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:57:56.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4720" for this suite. 
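The projected-configMap test above ("mappings and Item mode set") creates a ConfigMap, mounts it through a projected volume with an `items` mapping that renames the key's path and sets a file mode, and verifies the container can read the remapped file. A manifest in roughly the shape the test builds looks like this; all names, keys, and values here are illustrative, since the framework generates its own randomized names:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume-map   # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    args: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1            # remap the key to a new file path
            path: path/to/data-2
            mode: 0400             # the per-item file mode under test
```

The pod succeeds ("success or failure" condition) when the container exits 0 after reading the projected file, which is what the log records above.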
May 19 13:58:02.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:58:02.995: INFO: namespace projected-4720 deletion completed in 6.085177449s • [SLOW TEST:10.258 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:58:02.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 19 13:58:07.589: INFO: Successfully updated pod "labelsupdate476f2012-dff8-4f9d-8c95-2afbcdf906b1" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:58:09.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2320" for this suite. 
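The downward API test just above ("should update labels on modification") depends on the kubelet refreshing a projected downward API volume after the pod's labels are patched. A pod using that mechanism looks roughly like the following sketch; the pod name, label, and container command are illustrative, not the test's actual generated values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate            # illustrative; the test generates a UID suffix
  labels:
    key1: value1                # updated later; the projected file follows
spec:
  containers:
  - name: client-container
    image: busybox
    args: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```

After the test patches the pod's labels ("Successfully updated pod" in the log), it polls the container's output until the new label appears in `/etc/podinfo/labels`, confirming the volume was refreshed.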
May 19 13:58:31.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:58:31.842: INFO: namespace projected-2320 deletion completed in 22.154887736s • [SLOW TEST:28.846 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:58:31.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-8762 I0519 13:58:31.919460 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8762, replica count: 1 I0519 13:58:32.969894 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 13:58:33.970091 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 13:58:34.970280 6 runners.go:180] svc-latency-rc Pods: 1 
out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 13:58:35.970512 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 13:58:36.104: INFO: Created: latency-svc-vrb9b May 19 13:58:36.120: INFO: Got endpoints: latency-svc-vrb9b [49.330935ms] May 19 13:58:36.178: INFO: Created: latency-svc-gwhgh May 19 13:58:36.181: INFO: Got endpoints: latency-svc-gwhgh [61.132933ms] May 19 13:58:36.232: INFO: Created: latency-svc-c9jl9 May 19 13:58:36.258: INFO: Got endpoints: latency-svc-c9jl9 [137.677324ms] May 19 13:58:36.274: INFO: Created: latency-svc-r4wdx May 19 13:58:36.339: INFO: Got endpoints: latency-svc-r4wdx [219.533462ms] May 19 13:58:36.343: INFO: Created: latency-svc-w84wm May 19 13:58:36.354: INFO: Got endpoints: latency-svc-w84wm [233.910557ms] May 19 13:58:36.386: INFO: Created: latency-svc-nxg5h May 19 13:58:36.418: INFO: Got endpoints: latency-svc-nxg5h [298.000704ms] May 19 13:58:36.483: INFO: Created: latency-svc-764t7 May 19 13:58:36.487: INFO: Got endpoints: latency-svc-764t7 [366.828134ms] May 19 13:58:36.514: INFO: Created: latency-svc-bp6nb May 19 13:58:36.522: INFO: Got endpoints: latency-svc-bp6nb [402.086079ms] May 19 13:58:36.548: INFO: Created: latency-svc-9dp2g May 19 13:58:36.565: INFO: Got endpoints: latency-svc-9dp2g [445.228872ms] May 19 13:58:36.621: INFO: Created: latency-svc-j6kxb May 19 13:58:36.624: INFO: Got endpoints: latency-svc-j6kxb [504.370444ms] May 19 13:58:36.707: INFO: Created: latency-svc-2mm9v May 19 13:58:36.721: INFO: Got endpoints: latency-svc-2mm9v [601.114687ms] May 19 13:58:36.777: INFO: Created: latency-svc-spgqx May 19 13:58:36.783: INFO: Got endpoints: latency-svc-spgqx [663.120157ms] May 19 13:58:36.863: INFO: Created: latency-svc-dg9mk May 19 13:58:36.950: INFO: Got endpoints: latency-svc-dg9mk [830.004404ms] May 19 13:58:36.952: INFO: Created: 
latency-svc-dc65b May 19 13:58:36.960: INFO: Got endpoints: latency-svc-dc65b [839.804138ms] May 19 13:58:36.980: INFO: Created: latency-svc-n9lbn May 19 13:58:37.012: INFO: Got endpoints: latency-svc-n9lbn [891.998334ms] May 19 13:58:37.088: INFO: Created: latency-svc-lx8g4 May 19 13:58:37.091: INFO: Got endpoints: latency-svc-lx8g4 [970.788549ms] May 19 13:58:37.112: INFO: Created: latency-svc-6lqck May 19 13:58:37.129: INFO: Got endpoints: latency-svc-6lqck [948.361459ms] May 19 13:58:37.154: INFO: Created: latency-svc-9jwnj May 19 13:58:37.165: INFO: Got endpoints: latency-svc-9jwnj [907.774992ms] May 19 13:58:37.238: INFO: Created: latency-svc-ch555 May 19 13:58:37.270: INFO: Got endpoints: latency-svc-ch555 [930.787046ms] May 19 13:58:37.271: INFO: Created: latency-svc-v6mpv May 19 13:58:37.312: INFO: Got endpoints: latency-svc-v6mpv [958.331624ms] May 19 13:58:37.412: INFO: Created: latency-svc-txzts May 19 13:58:37.450: INFO: Got endpoints: latency-svc-txzts [1.031818031s] May 19 13:58:37.456: INFO: Created: latency-svc-vsqvg May 19 13:58:37.466: INFO: Got endpoints: latency-svc-vsqvg [979.138235ms] May 19 13:58:37.487: INFO: Created: latency-svc-g8bdn May 19 13:58:37.502: INFO: Got endpoints: latency-svc-g8bdn [979.903987ms] May 19 13:58:37.555: INFO: Created: latency-svc-lpns2 May 19 13:58:37.567: INFO: Got endpoints: latency-svc-lpns2 [1.002166058s] May 19 13:58:37.612: INFO: Created: latency-svc-mc8t8 May 19 13:58:37.623: INFO: Got endpoints: latency-svc-mc8t8 [998.471974ms] May 19 13:58:37.711: INFO: Created: latency-svc-pbcvg May 19 13:58:37.714: INFO: Got endpoints: latency-svc-pbcvg [992.603242ms] May 19 13:58:37.763: INFO: Created: latency-svc-4l5bz May 19 13:58:37.798: INFO: Got endpoints: latency-svc-4l5bz [1.014743s] May 19 13:58:37.849: INFO: Created: latency-svc-9ws5t May 19 13:58:37.864: INFO: Got endpoints: latency-svc-9ws5t [913.540767ms] May 19 13:58:37.895: INFO: Created: latency-svc-d2vdg May 19 13:58:37.912: INFO: Got endpoints: 
latency-svc-d2vdg [951.888883ms] May 19 13:58:37.931: INFO: Created: latency-svc-2tb4h May 19 13:58:37.949: INFO: Got endpoints: latency-svc-2tb4h [936.988343ms] May 19 13:58:37.999: INFO: Created: latency-svc-l5l9q May 19 13:58:38.002: INFO: Got endpoints: latency-svc-l5l9q [910.949502ms] May 19 13:58:38.036: INFO: Created: latency-svc-4m7rh May 19 13:58:38.068: INFO: Got endpoints: latency-svc-4m7rh [938.398043ms] May 19 13:58:38.144: INFO: Created: latency-svc-ggwqb May 19 13:58:38.150: INFO: Got endpoints: latency-svc-ggwqb [984.118973ms] May 19 13:58:38.204: INFO: Created: latency-svc-tkvhf May 19 13:58:38.221: INFO: Got endpoints: latency-svc-tkvhf [951.00226ms] May 19 13:58:38.311: INFO: Created: latency-svc-ncw4v May 19 13:58:38.337: INFO: Got endpoints: latency-svc-ncw4v [1.024725071s] May 19 13:58:38.381: INFO: Created: latency-svc-4dvxw May 19 13:58:38.408: INFO: Got endpoints: latency-svc-4dvxw [957.652609ms] May 19 13:58:38.447: INFO: Created: latency-svc-jc8ld May 19 13:58:38.468: INFO: Got endpoints: latency-svc-jc8ld [1.00146768s] May 19 13:58:38.492: INFO: Created: latency-svc-dj5z4 May 19 13:58:38.509: INFO: Got endpoints: latency-svc-dj5z4 [1.007089133s] May 19 13:58:38.528: INFO: Created: latency-svc-qz9bk May 19 13:58:38.540: INFO: Got endpoints: latency-svc-qz9bk [972.321266ms] May 19 13:58:38.580: INFO: Created: latency-svc-qndhl May 19 13:58:38.596: INFO: Got endpoints: latency-svc-qndhl [973.106078ms] May 19 13:58:38.627: INFO: Created: latency-svc-ljgwj May 19 13:58:38.643: INFO: Got endpoints: latency-svc-ljgwj [928.71837ms] May 19 13:58:38.730: INFO: Created: latency-svc-xs5sp May 19 13:58:38.738: INFO: Got endpoints: latency-svc-xs5sp [940.284763ms] May 19 13:58:38.764: INFO: Created: latency-svc-n67nn May 19 13:58:38.781: INFO: Got endpoints: latency-svc-n67nn [916.7391ms] May 19 13:58:38.828: INFO: Created: latency-svc-qv7tq May 19 13:58:38.866: INFO: Got endpoints: latency-svc-qv7tq [954.320973ms] May 19 13:58:38.900: INFO: Created: 
latency-svc-ksbr4 May 19 13:58:38.915: INFO: Got endpoints: latency-svc-ksbr4 [965.399926ms] May 19 13:58:38.945: INFO: Created: latency-svc-xnrw7 May 19 13:58:38.963: INFO: Got endpoints: latency-svc-xnrw7 [960.806219ms] May 19 13:58:39.015: INFO: Created: latency-svc-89czx May 19 13:58:39.030: INFO: Got endpoints: latency-svc-89czx [962.105432ms] May 19 13:58:39.056: INFO: Created: latency-svc-v7p2s May 19 13:58:39.071: INFO: Got endpoints: latency-svc-v7p2s [921.830533ms] May 19 13:58:39.136: INFO: Created: latency-svc-8gcn9 May 19 13:58:39.167: INFO: Got endpoints: latency-svc-8gcn9 [945.957065ms] May 19 13:58:39.168: INFO: Created: latency-svc-qb4qj May 19 13:58:39.180: INFO: Got endpoints: latency-svc-qb4qj [842.708904ms] May 19 13:58:39.203: INFO: Created: latency-svc-sb569 May 19 13:58:39.216: INFO: Got endpoints: latency-svc-sb569 [808.578616ms] May 19 13:58:39.280: INFO: Created: latency-svc-pclp2 May 19 13:58:39.290: INFO: Got endpoints: latency-svc-pclp2 [821.894464ms] May 19 13:58:39.329: INFO: Created: latency-svc-nk4kt May 19 13:58:39.343: INFO: Got endpoints: latency-svc-nk4kt [833.573367ms] May 19 13:58:39.365: INFO: Created: latency-svc-76xrh May 19 13:58:39.373: INFO: Got endpoints: latency-svc-76xrh [833.321031ms] May 19 13:58:39.442: INFO: Created: latency-svc-dx24m May 19 13:58:39.446: INFO: Got endpoints: latency-svc-dx24m [849.534322ms] May 19 13:58:39.512: INFO: Created: latency-svc-bmhgx May 19 13:58:39.530: INFO: Got endpoints: latency-svc-bmhgx [886.775807ms] May 19 13:58:39.586: INFO: Created: latency-svc-cwhv2 May 19 13:58:39.589: INFO: Got endpoints: latency-svc-cwhv2 [851.038572ms] May 19 13:58:39.623: INFO: Created: latency-svc-cdtvg May 19 13:58:39.632: INFO: Got endpoints: latency-svc-cdtvg [851.088206ms] May 19 13:58:39.668: INFO: Created: latency-svc-wxwxg May 19 13:58:39.680: INFO: Got endpoints: latency-svc-wxwxg [813.956504ms] May 19 13:58:39.730: INFO: Created: latency-svc-r26ll May 19 13:58:39.772: INFO: Got endpoints: 
latency-svc-r26ll [857.447348ms] May 19 13:58:39.773: INFO: Created: latency-svc-xrzz4 May 19 13:58:39.795: INFO: Got endpoints: latency-svc-xrzz4 [832.075403ms] May 19 13:58:39.820: INFO: Created: latency-svc-89f9v May 19 13:58:39.866: INFO: Got endpoints: latency-svc-89f9v [836.206279ms] May 19 13:58:39.902: INFO: Created: latency-svc-4pp6l May 19 13:58:39.916: INFO: Got endpoints: latency-svc-4pp6l [844.233426ms] May 19 13:58:39.944: INFO: Created: latency-svc-4gjd9 May 19 13:58:39.958: INFO: Got endpoints: latency-svc-4gjd9 [790.375347ms] May 19 13:58:40.004: INFO: Created: latency-svc-5ds4z May 19 13:58:40.008: INFO: Got endpoints: latency-svc-5ds4z [827.808571ms] May 19 13:58:40.037: INFO: Created: latency-svc-5h8xx May 19 13:58:40.054: INFO: Got endpoints: latency-svc-5h8xx [837.840523ms] May 19 13:58:40.102: INFO: Created: latency-svc-v28p5 May 19 13:58:40.160: INFO: Got endpoints: latency-svc-v28p5 [870.620837ms] May 19 13:58:40.162: INFO: Created: latency-svc-nk62z May 19 13:58:40.186: INFO: Got endpoints: latency-svc-nk62z [843.143916ms] May 19 13:58:40.220: INFO: Created: latency-svc-694sl May 19 13:58:40.235: INFO: Got endpoints: latency-svc-694sl [862.039484ms] May 19 13:58:40.304: INFO: Created: latency-svc-lxl29 May 19 13:58:40.307: INFO: Got endpoints: latency-svc-lxl29 [861.352238ms] May 19 13:58:40.331: INFO: Created: latency-svc-8pwr9 May 19 13:58:40.355: INFO: Got endpoints: latency-svc-8pwr9 [825.763973ms] May 19 13:58:40.379: INFO: Created: latency-svc-n85mq May 19 13:58:40.392: INFO: Got endpoints: latency-svc-n85mq [802.270765ms] May 19 13:58:40.460: INFO: Created: latency-svc-vbm2t May 19 13:58:40.464: INFO: Got endpoints: latency-svc-vbm2t [832.089755ms] May 19 13:58:40.494: INFO: Created: latency-svc-5zxrz May 19 13:58:40.506: INFO: Got endpoints: latency-svc-5zxrz [825.581019ms] May 19 13:58:40.529: INFO: Created: latency-svc-frcsv May 19 13:58:40.542: INFO: Got endpoints: latency-svc-frcsv [769.98313ms] May 19 13:58:40.607: INFO: 
Created: latency-svc-c77zc May 19 13:58:40.607: INFO: Got endpoints: latency-svc-c77zc [101.216997ms] May 19 13:58:40.637: INFO: Created: latency-svc-674p4 May 19 13:58:40.654: INFO: Got endpoints: latency-svc-674p4 [858.676827ms] May 19 13:58:40.670: INFO: Created: latency-svc-gcqgl May 19 13:58:40.687: INFO: Got endpoints: latency-svc-gcqgl [821.307884ms] May 19 13:58:40.754: INFO: Created: latency-svc-2sf74 May 19 13:58:40.787: INFO: Got endpoints: latency-svc-2sf74 [871.165432ms] May 19 13:58:40.820: INFO: Created: latency-svc-z7c6m May 19 13:58:40.833: INFO: Got endpoints: latency-svc-z7c6m [875.788904ms] May 19 13:58:40.897: INFO: Created: latency-svc-c9bgr May 19 13:58:40.900: INFO: Got endpoints: latency-svc-c9bgr [892.338311ms] May 19 13:58:40.980: INFO: Created: latency-svc-qzjml May 19 13:58:41.047: INFO: Got endpoints: latency-svc-qzjml [992.470119ms] May 19 13:58:41.060: INFO: Created: latency-svc-g258v May 19 13:58:41.073: INFO: Got endpoints: latency-svc-g258v [912.863271ms] May 19 13:58:41.096: INFO: Created: latency-svc-tc2w6 May 19 13:58:41.109: INFO: Got endpoints: latency-svc-tc2w6 [922.612945ms] May 19 13:58:41.140: INFO: Created: latency-svc-vwmgd May 19 13:58:41.208: INFO: Got endpoints: latency-svc-vwmgd [972.606643ms] May 19 13:58:41.211: INFO: Created: latency-svc-22twn May 19 13:58:41.246: INFO: Got endpoints: latency-svc-22twn [938.815719ms] May 19 13:58:41.247: INFO: Created: latency-svc-2vgnd May 19 13:58:41.259: INFO: Got endpoints: latency-svc-2vgnd [904.123731ms] May 19 13:58:41.282: INFO: Created: latency-svc-82d5f May 19 13:58:41.296: INFO: Got endpoints: latency-svc-82d5f [903.927292ms] May 19 13:58:41.361: INFO: Created: latency-svc-sdxwp May 19 13:58:41.364: INFO: Got endpoints: latency-svc-sdxwp [900.225488ms] May 19 13:58:41.410: INFO: Created: latency-svc-8c2w8 May 19 13:58:41.430: INFO: Got endpoints: latency-svc-8c2w8 [887.320093ms] May 19 13:58:41.456: INFO: Created: latency-svc-drjhl May 19 13:58:41.519: INFO: Got 
endpoints: latency-svc-drjhl [911.762923ms] May 19 13:58:41.521: INFO: Created: latency-svc-pf6qm May 19 13:58:41.531: INFO: Got endpoints: latency-svc-pf6qm [877.0403ms] May 19 13:58:41.572: INFO: Created: latency-svc-twf62 May 19 13:58:41.611: INFO: Got endpoints: latency-svc-twf62 [923.123779ms] May 19 13:58:41.669: INFO: Created: latency-svc-95p8x May 19 13:58:41.675: INFO: Got endpoints: latency-svc-95p8x [888.419735ms] May 19 13:58:41.704: INFO: Created: latency-svc-dd9vd May 19 13:58:41.718: INFO: Got endpoints: latency-svc-dd9vd [884.598331ms] May 19 13:58:41.738: INFO: Created: latency-svc-t6c5h May 19 13:58:41.819: INFO: Got endpoints: latency-svc-t6c5h [918.678374ms] May 19 13:58:41.849: INFO: Created: latency-svc-mb677 May 19 13:58:41.862: INFO: Got endpoints: latency-svc-mb677 [815.379977ms] May 19 13:58:41.894: INFO: Created: latency-svc-sg9k8 May 19 13:58:41.910: INFO: Got endpoints: latency-svc-sg9k8 [837.112259ms] May 19 13:58:41.963: INFO: Created: latency-svc-rm47h May 19 13:58:41.999: INFO: Got endpoints: latency-svc-rm47h [889.734462ms] May 19 13:58:42.029: INFO: Created: latency-svc-nfbxm May 19 13:58:42.058: INFO: Got endpoints: latency-svc-nfbxm [849.774967ms] May 19 13:58:42.116: INFO: Created: latency-svc-69c24 May 19 13:58:42.131: INFO: Got endpoints: latency-svc-69c24 [885.229131ms] May 19 13:58:42.158: INFO: Created: latency-svc-w2bkr May 19 13:58:42.186: INFO: Got endpoints: latency-svc-w2bkr [926.619319ms] May 19 13:58:42.250: INFO: Created: latency-svc-mvvkh May 19 13:58:42.274: INFO: Got endpoints: latency-svc-mvvkh [978.655032ms] May 19 13:58:42.275: INFO: Created: latency-svc-8hvl7 May 19 13:58:42.294: INFO: Got endpoints: latency-svc-8hvl7 [929.640819ms] May 19 13:58:42.333: INFO: Created: latency-svc-8nwtq May 19 13:58:42.411: INFO: Got endpoints: latency-svc-8nwtq [981.671334ms] May 19 13:58:42.414: INFO: Created: latency-svc-6728m May 19 13:58:42.420: INFO: Got endpoints: latency-svc-6728m [900.995768ms] May 19 13:58:42.443: 
INFO: Created: latency-svc-2zcfn May 19 13:58:42.457: INFO: Got endpoints: latency-svc-2zcfn [926.252793ms] May 19 13:58:42.479: INFO: Created: latency-svc-9t9q5 May 19 13:58:42.499: INFO: Got endpoints: latency-svc-9t9q5 [888.620833ms] May 19 13:58:42.562: INFO: Created: latency-svc-5kfmj May 19 13:58:42.565: INFO: Got endpoints: latency-svc-5kfmj [889.638452ms] May 19 13:58:42.602: INFO: Created: latency-svc-kj56g May 19 13:58:42.628: INFO: Got endpoints: latency-svc-kj56g [910.047742ms] May 19 13:58:42.659: INFO: Created: latency-svc-fmgq5 May 19 13:58:42.699: INFO: Got endpoints: latency-svc-fmgq5 [880.24865ms] May 19 13:58:42.711: INFO: Created: latency-svc-5m29h May 19 13:58:42.722: INFO: Got endpoints: latency-svc-5m29h [860.245735ms] May 19 13:58:42.752: INFO: Created: latency-svc-n4d9c May 19 13:58:42.788: INFO: Got endpoints: latency-svc-n4d9c [877.574958ms] May 19 13:58:42.850: INFO: Created: latency-svc-8c22z May 19 13:58:42.879: INFO: Got endpoints: latency-svc-8c22z [880.100787ms] May 19 13:58:42.980: INFO: Created: latency-svc-nkwsf May 19 13:58:43.030: INFO: Got endpoints: latency-svc-nkwsf [972.378536ms] May 19 13:58:43.032: INFO: Created: latency-svc-v9gz8 May 19 13:58:43.047: INFO: Got endpoints: latency-svc-v9gz8 [915.237997ms] May 19 13:58:43.070: INFO: Created: latency-svc-5m6b6 May 19 13:58:43.142: INFO: Got endpoints: latency-svc-5m6b6 [955.790153ms] May 19 13:58:43.145: INFO: Created: latency-svc-4vl4h May 19 13:58:43.155: INFO: Got endpoints: latency-svc-4vl4h [880.600623ms] May 19 13:58:43.181: INFO: Created: latency-svc-8dpm8 May 19 13:58:43.198: INFO: Got endpoints: latency-svc-8dpm8 [903.666695ms] May 19 13:58:43.224: INFO: Created: latency-svc-qqgg8 May 19 13:58:43.240: INFO: Got endpoints: latency-svc-qqgg8 [828.471692ms] May 19 13:58:43.280: INFO: Created: latency-svc-7bzlx May 19 13:58:43.282: INFO: Got endpoints: latency-svc-7bzlx [862.184069ms] May 19 13:58:43.310: INFO: Created: latency-svc-vkp72 May 19 13:58:43.319: INFO: Got 
endpoints: latency-svc-vkp72 [861.641604ms] May 19 13:58:43.350: INFO: Created: latency-svc-5ndks May 19 13:58:43.360: INFO: Got endpoints: latency-svc-5ndks [861.156848ms] May 19 13:58:43.430: INFO: Created: latency-svc-ksdth May 19 13:58:43.448: INFO: Got endpoints: latency-svc-ksdth [882.966968ms] May 19 13:58:43.478: INFO: Created: latency-svc-p9sss May 19 13:58:43.487: INFO: Got endpoints: latency-svc-p9sss [858.80281ms] May 19 13:58:43.511: INFO: Created: latency-svc-h6qch May 19 13:58:43.517: INFO: Got endpoints: latency-svc-h6qch [818.232265ms] May 19 13:58:43.567: INFO: Created: latency-svc-ntkzm May 19 13:58:43.570: INFO: Got endpoints: latency-svc-ntkzm [847.191531ms] May 19 13:58:43.634: INFO: Created: latency-svc-5hgn6 May 19 13:58:43.650: INFO: Got endpoints: latency-svc-5hgn6 [861.80363ms] May 19 13:58:43.706: INFO: Created: latency-svc-4mrzj May 19 13:58:43.710: INFO: Got endpoints: latency-svc-4mrzj [831.009835ms] May 19 13:58:43.730: INFO: Created: latency-svc-wtq4v May 19 13:58:43.740: INFO: Got endpoints: latency-svc-wtq4v [710.261267ms] May 19 13:58:43.768: INFO: Created: latency-svc-xrbn9 May 19 13:58:43.782: INFO: Got endpoints: latency-svc-xrbn9 [735.815283ms] May 19 13:58:43.805: INFO: Created: latency-svc-shpc6 May 19 13:58:43.872: INFO: Got endpoints: latency-svc-shpc6 [730.43291ms] May 19 13:58:43.874: INFO: Created: latency-svc-sd8lh May 19 13:58:43.879: INFO: Got endpoints: latency-svc-sd8lh [723.887154ms] May 19 13:58:43.904: INFO: Created: latency-svc-jqjcx May 19 13:58:43.928: INFO: Got endpoints: latency-svc-jqjcx [730.215747ms] May 19 13:58:43.955: INFO: Created: latency-svc-m6lmc May 19 13:58:43.970: INFO: Got endpoints: latency-svc-m6lmc [729.838136ms] May 19 13:58:44.017: INFO: Created: latency-svc-4wd5j May 19 13:58:44.020: INFO: Got endpoints: latency-svc-4wd5j [737.345344ms] May 19 13:58:44.048: INFO: Created: latency-svc-clrtm May 19 13:58:44.075: INFO: Got endpoints: latency-svc-clrtm [755.957314ms] May 19 13:58:44.172: 
INFO: Created: latency-svc-dxwbk May 19 13:58:44.176: INFO: Got endpoints: latency-svc-dxwbk [815.022181ms] May 19 13:58:44.201: INFO: Created: latency-svc-lhhn8 May 19 13:58:44.217: INFO: Got endpoints: latency-svc-lhhn8 [768.693081ms] May 19 13:58:44.234: INFO: Created: latency-svc-l9hgn May 19 13:58:44.247: INFO: Got endpoints: latency-svc-l9hgn [760.102493ms] May 19 13:58:44.270: INFO: Created: latency-svc-fc9z9 May 19 13:58:44.303: INFO: Got endpoints: latency-svc-fc9z9 [785.87075ms] May 19 13:58:44.314: INFO: Created: latency-svc-pvnlp May 19 13:58:44.332: INFO: Got endpoints: latency-svc-pvnlp [762.000119ms] May 19 13:58:44.357: INFO: Created: latency-svc-dp9lp May 19 13:58:44.374: INFO: Got endpoints: latency-svc-dp9lp [724.15197ms] May 19 13:58:44.398: INFO: Created: latency-svc-htpmz May 19 13:58:44.453: INFO: Got endpoints: latency-svc-htpmz [743.479245ms] May 19 13:58:44.478: INFO: Created: latency-svc-f9xnz May 19 13:58:44.494: INFO: Got endpoints: latency-svc-f9xnz [753.699987ms] May 19 13:58:44.520: INFO: Created: latency-svc-dxd2c May 19 13:58:44.531: INFO: Got endpoints: latency-svc-dxd2c [748.888956ms] May 19 13:58:44.592: INFO: Created: latency-svc-9snbg May 19 13:58:44.595: INFO: Got endpoints: latency-svc-9snbg [722.725942ms] May 19 13:58:44.624: INFO: Created: latency-svc-ktp46 May 19 13:58:44.654: INFO: Got endpoints: latency-svc-ktp46 [775.063759ms] May 19 13:58:44.686: INFO: Created: latency-svc-2x4gl May 19 13:58:44.754: INFO: Got endpoints: latency-svc-2x4gl [825.845916ms] May 19 13:58:44.758: INFO: Created: latency-svc-lzxrf May 19 13:58:44.760: INFO: Got endpoints: latency-svc-lzxrf [790.117111ms] May 19 13:58:44.805: INFO: Created: latency-svc-xc744 May 19 13:58:44.834: INFO: Got endpoints: latency-svc-xc744 [814.703684ms] May 19 13:58:44.915: INFO: Created: latency-svc-rz57d May 19 13:58:44.918: INFO: Got endpoints: latency-svc-rz57d [843.21373ms] May 19 13:58:44.969: INFO: Created: latency-svc-782rs May 19 13:58:44.983: INFO: Got 
endpoints: latency-svc-782rs [807.182011ms] May 19 13:58:45.004: INFO: Created: latency-svc-99gbk May 19 13:58:45.058: INFO: Got endpoints: latency-svc-99gbk [841.313671ms] May 19 13:58:45.119: INFO: Created: latency-svc-gf5v9 May 19 13:58:45.129: INFO: Got endpoints: latency-svc-gf5v9 [882.202397ms] May 19 13:58:45.203: INFO: Created: latency-svc-9fnmk May 19 13:58:45.206: INFO: Got endpoints: latency-svc-9fnmk [902.648367ms] May 19 13:58:45.236: INFO: Created: latency-svc-rbhfw May 19 13:58:45.250: INFO: Got endpoints: latency-svc-rbhfw [917.940719ms] May 19 13:58:45.272: INFO: Created: latency-svc-xzjgd May 19 13:58:45.292: INFO: Got endpoints: latency-svc-xzjgd [918.272977ms] May 19 13:58:45.340: INFO: Created: latency-svc-lr4rw May 19 13:58:45.346: INFO: Got endpoints: latency-svc-lr4rw [892.611559ms] May 19 13:58:45.395: INFO: Created: latency-svc-lvz4z May 19 13:58:45.427: INFO: Got endpoints: latency-svc-lvz4z [932.497012ms] May 19 13:58:45.478: INFO: Created: latency-svc-nzngx May 19 13:58:45.488: INFO: Got endpoints: latency-svc-nzngx [956.33011ms] May 19 13:58:45.516: INFO: Created: latency-svc-8rs45 May 19 13:58:45.540: INFO: Got endpoints: latency-svc-8rs45 [944.244998ms] May 19 13:58:45.575: INFO: Created: latency-svc-j85fq May 19 13:58:45.639: INFO: Got endpoints: latency-svc-j85fq [984.491578ms] May 19 13:58:45.658: INFO: Created: latency-svc-d9cmc May 19 13:58:45.672: INFO: Got endpoints: latency-svc-d9cmc [917.821962ms] May 19 13:58:45.696: INFO: Created: latency-svc-kkc9h May 19 13:58:45.724: INFO: Got endpoints: latency-svc-kkc9h [964.458ms] May 19 13:58:45.789: INFO: Created: latency-svc-jxkzx May 19 13:58:45.805: INFO: Got endpoints: latency-svc-jxkzx [970.142474ms] May 19 13:58:45.837: INFO: Created: latency-svc-5k5zk May 19 13:58:45.852: INFO: Got endpoints: latency-svc-5k5zk [934.337842ms] May 19 13:58:45.875: INFO: Created: latency-svc-82x54 May 19 13:58:45.932: INFO: Got endpoints: latency-svc-82x54 [949.603628ms] May 19 13:58:45.953: 
INFO: Created: latency-svc-4mpvh May 19 13:58:45.967: INFO: Got endpoints: latency-svc-4mpvh [908.565911ms] May 19 13:58:45.992: INFO: Created: latency-svc-jwgs2 May 19 13:58:46.010: INFO: Got endpoints: latency-svc-jwgs2 [880.530598ms] May 19 13:58:46.028: INFO: Created: latency-svc-spbxc May 19 13:58:46.070: INFO: Got endpoints: latency-svc-spbxc [864.43488ms] May 19 13:58:46.080: INFO: Created: latency-svc-gc2xq May 19 13:58:46.094: INFO: Got endpoints: latency-svc-gc2xq [844.171764ms] May 19 13:58:46.122: INFO: Created: latency-svc-lxrxf May 19 13:58:46.130: INFO: Got endpoints: latency-svc-lxrxf [837.391305ms] May 19 13:58:46.167: INFO: Created: latency-svc-2tw46 May 19 13:58:46.208: INFO: Got endpoints: latency-svc-2tw46 [861.545756ms] May 19 13:58:46.220: INFO: Created: latency-svc-d5s95 May 19 13:58:46.232: INFO: Got endpoints: latency-svc-d5s95 [805.661914ms] May 19 13:58:46.253: INFO: Created: latency-svc-j5sm2 May 19 13:58:46.270: INFO: Got endpoints: latency-svc-j5sm2 [781.881987ms] May 19 13:58:46.295: INFO: Created: latency-svc-hxcjh May 19 13:58:46.306: INFO: Got endpoints: latency-svc-hxcjh [766.162158ms] May 19 13:58:46.352: INFO: Created: latency-svc-84nzk May 19 13:58:46.355: INFO: Got endpoints: latency-svc-84nzk [716.718357ms] May 19 13:58:46.401: INFO: Created: latency-svc-nmhmb May 19 13:58:46.414: INFO: Got endpoints: latency-svc-nmhmb [742.321849ms] May 19 13:58:46.445: INFO: Created: latency-svc-g6nqg May 19 13:58:46.477: INFO: Got endpoints: latency-svc-g6nqg [752.65231ms] May 19 13:58:46.494: INFO: Created: latency-svc-hxjlx May 19 13:58:46.505: INFO: Got endpoints: latency-svc-hxjlx [699.556284ms] May 19 13:58:46.526: INFO: Created: latency-svc-dxrhr May 19 13:58:46.557: INFO: Got endpoints: latency-svc-dxrhr [704.131786ms] May 19 13:58:46.615: INFO: Created: latency-svc-vq5br May 19 13:58:46.619: INFO: Got endpoints: latency-svc-vq5br [686.126866ms] May 19 13:58:46.643: INFO: Created: latency-svc-z82sq May 19 13:58:46.655: INFO: Got 
endpoints: latency-svc-z82sq [688.322521ms] May 19 13:58:46.679: INFO: Created: latency-svc-8gl99 May 19 13:58:46.692: INFO: Got endpoints: latency-svc-8gl99 [681.562212ms] May 19 13:58:46.759: INFO: Created: latency-svc-hbdqq May 19 13:58:46.764: INFO: Got endpoints: latency-svc-hbdqq [693.161188ms] May 19 13:58:46.799: INFO: Created: latency-svc-hgpwk May 19 13:58:46.812: INFO: Got endpoints: latency-svc-hgpwk [718.205071ms] May 19 13:58:46.853: INFO: Created: latency-svc-x89gd May 19 13:58:46.890: INFO: Got endpoints: latency-svc-x89gd [760.320769ms] May 19 13:58:46.923: INFO: Created: latency-svc-rs8bv May 19 13:58:46.958: INFO: Got endpoints: latency-svc-rs8bv [750.491318ms] May 19 13:58:47.047: INFO: Created: latency-svc-dgxfd May 19 13:58:47.049: INFO: Got endpoints: latency-svc-dgxfd [816.531134ms] May 19 13:58:47.087: INFO: Created: latency-svc-ckbnf May 19 13:58:47.101: INFO: Got endpoints: latency-svc-ckbnf [831.463708ms] May 19 13:58:47.139: INFO: Created: latency-svc-62gd2 May 19 13:58:47.178: INFO: Got endpoints: latency-svc-62gd2 [872.290314ms] May 19 13:58:47.192: INFO: Created: latency-svc-zv7l4 May 19 13:58:47.210: INFO: Got endpoints: latency-svc-zv7l4 [854.81556ms] May 19 13:58:47.231: INFO: Created: latency-svc-m6pb7 May 19 13:58:47.240: INFO: Got endpoints: latency-svc-m6pb7 [825.671455ms] May 19 13:58:47.328: INFO: Created: latency-svc-rkzwp May 19 13:58:47.366: INFO: Got endpoints: latency-svc-rkzwp [889.216202ms] May 19 13:58:47.367: INFO: Created: latency-svc-c7pct May 19 13:58:47.396: INFO: Got endpoints: latency-svc-c7pct [891.601717ms] May 19 13:58:47.466: INFO: Created: latency-svc-rkfwn May 19 13:58:47.468: INFO: Got endpoints: latency-svc-rkfwn [911.098413ms] May 19 13:58:47.495: INFO: Created: latency-svc-b9vm2 May 19 13:58:47.505: INFO: Got endpoints: latency-svc-b9vm2 [885.935242ms] May 19 13:58:47.528: INFO: Created: latency-svc-8kk8s May 19 13:58:47.547: INFO: Got endpoints: latency-svc-8kk8s [891.943129ms] May 19 13:58:47.616: 
INFO: Created: latency-svc-df6l8 May 19 13:58:47.619: INFO: Got endpoints: latency-svc-df6l8 [927.141923ms] May 19 13:58:47.657: INFO: Created: latency-svc-ps7bs May 19 13:58:47.680: INFO: Got endpoints: latency-svc-ps7bs [916.16098ms] May 19 13:58:47.680: INFO: Latencies: [61.132933ms 101.216997ms 137.677324ms 219.533462ms 233.910557ms 298.000704ms 366.828134ms 402.086079ms 445.228872ms 504.370444ms 601.114687ms 663.120157ms 681.562212ms 686.126866ms 688.322521ms 693.161188ms 699.556284ms 704.131786ms 710.261267ms 716.718357ms 718.205071ms 722.725942ms 723.887154ms 724.15197ms 729.838136ms 730.215747ms 730.43291ms 735.815283ms 737.345344ms 742.321849ms 743.479245ms 748.888956ms 750.491318ms 752.65231ms 753.699987ms 755.957314ms 760.102493ms 760.320769ms 762.000119ms 766.162158ms 768.693081ms 769.98313ms 775.063759ms 781.881987ms 785.87075ms 790.117111ms 790.375347ms 802.270765ms 805.661914ms 807.182011ms 808.578616ms 813.956504ms 814.703684ms 815.022181ms 815.379977ms 816.531134ms 818.232265ms 821.307884ms 821.894464ms 825.581019ms 825.671455ms 825.763973ms 825.845916ms 827.808571ms 828.471692ms 830.004404ms 831.009835ms 831.463708ms 832.075403ms 832.089755ms 833.321031ms 833.573367ms 836.206279ms 837.112259ms 837.391305ms 837.840523ms 839.804138ms 841.313671ms 842.708904ms 843.143916ms 843.21373ms 844.171764ms 844.233426ms 847.191531ms 849.534322ms 849.774967ms 851.038572ms 851.088206ms 854.81556ms 857.447348ms 858.676827ms 858.80281ms 860.245735ms 861.156848ms 861.352238ms 861.545756ms 861.641604ms 861.80363ms 862.039484ms 862.184069ms 864.43488ms 870.620837ms 871.165432ms 872.290314ms 875.788904ms 877.0403ms 877.574958ms 880.100787ms 880.24865ms 880.530598ms 880.600623ms 882.202397ms 882.966968ms 884.598331ms 885.229131ms 885.935242ms 886.775807ms 887.320093ms 888.419735ms 888.620833ms 889.216202ms 889.638452ms 889.734462ms 891.601717ms 891.943129ms 891.998334ms 892.338311ms 892.611559ms 900.225488ms 900.995768ms 902.648367ms 903.666695ms 903.927292ms 
904.123731ms 907.774992ms 908.565911ms 910.047742ms 910.949502ms 911.098413ms 911.762923ms 912.863271ms 913.540767ms 915.237997ms 916.16098ms 916.7391ms 917.821962ms 917.940719ms 918.272977ms 918.678374ms 921.830533ms 922.612945ms 923.123779ms 926.252793ms 926.619319ms 927.141923ms 928.71837ms 929.640819ms 930.787046ms 932.497012ms 934.337842ms 936.988343ms 938.398043ms 938.815719ms 940.284763ms 944.244998ms 945.957065ms 948.361459ms 949.603628ms 951.00226ms 951.888883ms 954.320973ms 955.790153ms 956.33011ms 957.652609ms 958.331624ms 960.806219ms 962.105432ms 964.458ms 965.399926ms 970.142474ms 970.788549ms 972.321266ms 972.378536ms 972.606643ms 973.106078ms 978.655032ms 979.138235ms 979.903987ms 981.671334ms 984.118973ms 984.491578ms 992.470119ms 992.603242ms 998.471974ms 1.00146768s 1.002166058s 1.007089133s 1.014743s 1.024725071s 1.031818031s] May 19 13:58:47.680: INFO: 50 %ile: 864.43488ms May 19 13:58:47.680: INFO: 90 %ile: 970.788549ms May 19 13:58:47.680: INFO: 99 %ile: 1.024725071s May 19 13:58:47.680: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:58:47.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8762" for this suite. 
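Each of the 200 samples above measures the gap between creating a Service and observing its endpoints populate. A minimal sketch of the kind of backend/Service pair involved, purely for illustration — the names, image, and port here are assumptions, not values taken from this run (the test generates names like `latency-svc-782rs`):

```yaml
# Illustrative only: a backend pod plus a Service selecting it.
# The latency test repeatedly creates pairs like this and times
# Service creation -> endpoints appearing.
apiVersion: v1
kind: Pod
metadata:
  name: latency-backend          # hypothetical name
  labels:
    app: latency-backend
spec:
  containers:
  - name: server
    image: k8s.gcr.io/pause:3.1  # assumed placeholder image
---
apiVersion: v1
kind: Service
metadata:
  name: latency-svc-example      # real run uses generated suffixes
spec:
  selector:
    app: latency-backend
  ports:
  - port: 80
    targetPort: 80
```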
May 19 13:59:09.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:59:09.792: INFO: namespace svc-latency-8762 deletion completed in 22.099890966s • [SLOW TEST:37.949 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:59:09.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 19 13:59:09.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2703' May 19 13:59:13.037: INFO: stderr: "" May 19 13:59:13.038: INFO: stdout: "replicationcontroller/redis-master created\n" May 19 13:59:13.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2703' May 19 13:59:13.294: INFO: stderr: "" May 19 13:59:13.294: INFO: stdout: "service/redis-master created\n" 
STEP: Waiting for Redis master to start. May 19 13:59:14.298: INFO: Selector matched 1 pods for map[app:redis] May 19 13:59:14.298: INFO: Found 0 / 1 May 19 13:59:15.425: INFO: Selector matched 1 pods for map[app:redis] May 19 13:59:15.425: INFO: Found 0 / 1 May 19 13:59:16.316: INFO: Selector matched 1 pods for map[app:redis] May 19 13:59:16.316: INFO: Found 0 / 1 May 19 13:59:17.298: INFO: Selector matched 1 pods for map[app:redis] May 19 13:59:17.298: INFO: Found 0 / 1 May 19 13:59:18.299: INFO: Selector matched 1 pods for map[app:redis] May 19 13:59:18.299: INFO: Found 1 / 1 May 19 13:59:18.299: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 19 13:59:18.303: INFO: Selector matched 1 pods for map[app:redis] May 19 13:59:18.303: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 19 13:59:18.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-j855f --namespace=kubectl-2703' May 19 13:59:18.426: INFO: stderr: "" May 19 13:59:18.426: INFO: stdout: "Name: redis-master-j855f\nNamespace: kubectl-2703\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Tue, 19 May 2020 13:59:13 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.230\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://824cd20f53ead4025070c892673269ed308f9bba67218fc1a00a0c49b8dd2b56\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 19 May 2020 13:59:16 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-87j9h (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n 
default-token-87j9h:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-87j9h\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-2703/redis-master-j855f to iruya-worker2\n Normal Pulled 4s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker2 Created container redis-master\n Normal Started 2s kubelet, iruya-worker2 Started container redis-master\n" May 19 13:59:18.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-2703' May 19 13:59:18.546: INFO: stderr: "" May 19 13:59:18.546: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2703\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-j855f\n" May 19 13:59:18.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2703' May 19 13:59:18.650: INFO: stderr: "" May 19 13:59:18.651: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2703\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.105.102.251\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 
10.244.1.230:6379\nSession Affinity: None\nEvents: \n" May 19 13:59:18.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' May 19 13:59:18.792: INFO: stderr: "" May 19 13:59:18.792: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 19 May 2020 13:58:25 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 19 May 2020 13:58:25 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 19 May 2020 13:58:25 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 19 May 2020 13:58:25 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n 
Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 64d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 64d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 64d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 64d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 64d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 64d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 64d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 19 13:59:18.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2703' May 19 13:59:18.899: INFO: stderr: "" May 19 13:59:18.899: INFO: stdout: "Name: kubectl-2703\nLabels: e2e-framework=kubectl\n e2e-run=69bc9db8-c6b7-4504-81b1-cd1210366734\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:59:18.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2703" for this suite. 
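The `kubectl describe rc redis-master` output above implies a ReplicationController along these lines. This is a reconstruction from the fields the describe output shows (labels, selector, image, container port), not the exact manifest the test piped to `kubectl create -f -`:

```yaml
# Reconstructed from the "describe rc" fields in the log; details not
# shown there (e.g. resource requests) are intentionally left out.
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379   # matches "Port: 6379/TCP" in the log
```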
May 19 13:59:40.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:59:41.001: INFO: namespace kubectl-2703 deletion completed in 22.098071498s • [SLOW TEST:31.209 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:59:41.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 19 13:59:41.089: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28353081-ef49-45a6-825d-4f15ebfd76cf" in namespace "projected-8127" to be "success or failure" May 19 
13:59:41.101: INFO: Pod "downwardapi-volume-28353081-ef49-45a6-825d-4f15ebfd76cf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.746405ms May 19 13:59:43.135: INFO: Pod "downwardapi-volume-28353081-ef49-45a6-825d-4f15ebfd76cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046247596s May 19 13:59:45.139: INFO: Pod "downwardapi-volume-28353081-ef49-45a6-825d-4f15ebfd76cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050762777s STEP: Saw pod success May 19 13:59:45.139: INFO: Pod "downwardapi-volume-28353081-ef49-45a6-825d-4f15ebfd76cf" satisfied condition "success or failure" May 19 13:59:45.142: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-28353081-ef49-45a6-825d-4f15ebfd76cf container client-container: STEP: delete the pod May 19 13:59:45.199: INFO: Waiting for pod downwardapi-volume-28353081-ef49-45a6-825d-4f15ebfd76cf to disappear May 19 13:59:45.208: INFO: Pod downwardapi-volume-28353081-ef49-45a6-825d-4f15ebfd76cf no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:59:45.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8127" for this suite. 
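The pod in this test exposes the downward API through a `projected` volume so the container can read its CPU limit (defaulted to node allocatable, since no limit is set) from a file. A hedged sketch of such a pod spec — the image, command, and file path are assumptions; only the container name `client-container` comes from the log:

```yaml
# Sketch, not the test's actual manifest: read the defaulted CPU limit
# via a projected downward API volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # test generates UUID-suffixed names
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name seen in the log
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m            # report the value in millicores
```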
May 19 13:59:51.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 13:59:51.298: INFO: namespace projected-8127 deletion completed in 6.086299146s • [SLOW TEST:10.296 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 13:59:51.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 19 13:59:51.323: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 13:59:56.672: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "init-container-3686" for this suite. May 19 14:00:02.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:00:02.866: INFO: namespace init-container-3686 deletion completed in 6.122059341s • [SLOW TEST:11.567 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:00:02.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 19 14:00:02.982: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fbafb7f5-b7f8-4dcf-8bd8-e37493a8dfa0" in namespace "downward-api-2619" to be "success or failure" May 19 14:00:02.985: INFO: Pod 
"downwardapi-volume-fbafb7f5-b7f8-4dcf-8bd8-e37493a8dfa0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.28905ms May 19 14:00:04.989: INFO: Pod "downwardapi-volume-fbafb7f5-b7f8-4dcf-8bd8-e37493a8dfa0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00695575s May 19 14:00:06.992: INFO: Pod "downwardapi-volume-fbafb7f5-b7f8-4dcf-8bd8-e37493a8dfa0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009787822s STEP: Saw pod success May 19 14:00:06.992: INFO: Pod "downwardapi-volume-fbafb7f5-b7f8-4dcf-8bd8-e37493a8dfa0" satisfied condition "success or failure" May 19 14:00:06.994: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-fbafb7f5-b7f8-4dcf-8bd8-e37493a8dfa0 container client-container: STEP: delete the pod May 19 14:00:07.028: INFO: Waiting for pod downwardapi-volume-fbafb7f5-b7f8-4dcf-8bd8-e37493a8dfa0 to disappear May 19 14:00:07.066: INFO: Pod downwardapi-volume-fbafb7f5-b7f8-4dcf-8bd8-e37493a8dfa0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:00:07.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2619" for this suite. 
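This test exercises the same `resourceFieldRef` behaviour through a plain `downwardAPI` volume rather than a `projected` one; only the volume stanza differs. A hedged, self-contained sketch — image, command, and path are illustrative:

```yaml
# Sketch: plain downwardAPI volume (no "projected" wrapper) exposing
# the container's defaulted CPU limit as a file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-plain-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
```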
May 19 14:00:13.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:00:13.204: INFO: namespace downward-api-2619 deletion completed in 6.134025046s • [SLOW TEST:10.338 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:00:13.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 19 14:00:13.908: INFO: Pod name wrapped-volume-race-b6ffafab-7a2a-4536-b91e-36c53e7544bc: Found 0 pods out of 5 May 19 14:00:18.914: INFO: Pod name wrapped-volume-race-b6ffafab-7a2a-4536-b91e-36c53e7544bc: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b6ffafab-7a2a-4536-b91e-36c53e7544bc in namespace emptydir-wrapper-3886, will wait for the garbage collector to delete the pods May 
19 14:00:33.004: INFO: Deleting ReplicationController wrapped-volume-race-b6ffafab-7a2a-4536-b91e-36c53e7544bc took: 7.844866ms May 19 14:00:33.405: INFO: Terminating ReplicationController wrapped-volume-race-b6ffafab-7a2a-4536-b91e-36c53e7544bc pods took: 400.518118ms STEP: Creating RC which spawns configmap-volume pods May 19 14:01:13.366: INFO: Pod name wrapped-volume-race-d2b5258a-9905-4f8d-af89-fcf1741008d2: Found 0 pods out of 5 May 19 14:01:18.372: INFO: Pod name wrapped-volume-race-d2b5258a-9905-4f8d-af89-fcf1741008d2: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d2b5258a-9905-4f8d-af89-fcf1741008d2 in namespace emptydir-wrapper-3886, will wait for the garbage collector to delete the pods May 19 14:01:32.466: INFO: Deleting ReplicationController wrapped-volume-race-d2b5258a-9905-4f8d-af89-fcf1741008d2 took: 18.39525ms May 19 14:01:32.766: INFO: Terminating ReplicationController wrapped-volume-race-d2b5258a-9905-4f8d-af89-fcf1741008d2 pods took: 300.260335ms STEP: Creating RC which spawns configmap-volume pods May 19 14:02:13.295: INFO: Pod name wrapped-volume-race-0438094d-e71e-4718-a53c-da2f57870258: Found 0 pods out of 5 May 19 14:02:18.303: INFO: Pod name wrapped-volume-race-0438094d-e71e-4718-a53c-da2f57870258: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-0438094d-e71e-4718-a53c-da2f57870258 in namespace emptydir-wrapper-3886, will wait for the garbage collector to delete the pods May 19 14:02:32.598: INFO: Deleting ReplicationController wrapped-volume-race-0438094d-e71e-4718-a53c-da2f57870258 took: 10.570107ms May 19 14:02:32.898: INFO: Terminating ReplicationController wrapped-volume-race-0438094d-e71e-4718-a53c-da2f57870258 pods took: 300.312457ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:03:13.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3886" for this suite. May 19 14:03:21.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:03:21.098: INFO: namespace emptydir-wrapper-3886 deletion completed in 8.087910016s • [SLOW TEST:187.893 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:03:21.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-mjhn STEP: Creating a pod to test atomic-volume-subpath May 19 14:03:21.191: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mjhn" in namespace 
"subpath-1514" to be "success or failure" May 19 14:03:21.214: INFO: Pod "pod-subpath-test-downwardapi-mjhn": Phase="Pending", Reason="", readiness=false. Elapsed: 22.379866ms May 19 14:03:23.248: INFO: Pod "pod-subpath-test-downwardapi-mjhn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056495526s May 19 14:03:25.252: INFO: Pod "pod-subpath-test-downwardapi-mjhn": Phase="Running", Reason="", readiness=true. Elapsed: 4.060414655s May 19 14:03:27.256: INFO: Pod "pod-subpath-test-downwardapi-mjhn": Phase="Running", Reason="", readiness=true. Elapsed: 6.064277195s May 19 14:03:29.259: INFO: Pod "pod-subpath-test-downwardapi-mjhn": Phase="Running", Reason="", readiness=true. Elapsed: 8.068048911s May 19 14:03:31.263: INFO: Pod "pod-subpath-test-downwardapi-mjhn": Phase="Running", Reason="", readiness=true. Elapsed: 10.07184524s May 19 14:03:33.267: INFO: Pod "pod-subpath-test-downwardapi-mjhn": Phase="Running", Reason="", readiness=true. Elapsed: 12.075504522s May 19 14:03:35.271: INFO: Pod "pod-subpath-test-downwardapi-mjhn": Phase="Running", Reason="", readiness=true. Elapsed: 14.079894289s May 19 14:03:37.276: INFO: Pod "pod-subpath-test-downwardapi-mjhn": Phase="Running", Reason="", readiness=true. Elapsed: 16.084487071s May 19 14:03:39.284: INFO: Pod "pod-subpath-test-downwardapi-mjhn": Phase="Running", Reason="", readiness=true. Elapsed: 18.09234696s May 19 14:03:41.288: INFO: Pod "pod-subpath-test-downwardapi-mjhn": Phase="Running", Reason="", readiness=true. Elapsed: 20.096758167s May 19 14:03:43.292: INFO: Pod "pod-subpath-test-downwardapi-mjhn": Phase="Running", Reason="", readiness=true. Elapsed: 22.100522931s May 19 14:03:45.314: INFO: Pod "pod-subpath-test-downwardapi-mjhn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.122928047s STEP: Saw pod success May 19 14:03:45.314: INFO: Pod "pod-subpath-test-downwardapi-mjhn" satisfied condition "success or failure" May 19 14:03:45.316: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-mjhn container test-container-subpath-downwardapi-mjhn: STEP: delete the pod May 19 14:03:45.350: INFO: Waiting for pod pod-subpath-test-downwardapi-mjhn to disappear May 19 14:03:45.359: INFO: Pod pod-subpath-test-downwardapi-mjhn no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-mjhn May 19 14:03:45.359: INFO: Deleting pod "pod-subpath-test-downwardapi-mjhn" in namespace "subpath-1514" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:03:45.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1514" for this suite. May 19 14:03:51.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:03:51.449: INFO: namespace subpath-1514 deletion completed in 6.085409586s • [SLOW TEST:30.351 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:03:51.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-ac36934b-0e77-41d6-85f4-665899a2058c STEP: Creating a pod to test consume configMaps May 19 14:03:51.516: INFO: Waiting up to 5m0s for pod "pod-configmaps-ae2b040f-055f-46a9-a810-5b1fefae7f2d" in namespace "configmap-4978" to be "success or failure" May 19 14:03:51.521: INFO: Pod "pod-configmaps-ae2b040f-055f-46a9-a810-5b1fefae7f2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345107ms May 19 14:03:53.557: INFO: Pod "pod-configmaps-ae2b040f-055f-46a9-a810-5b1fefae7f2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041008864s May 19 14:03:55.561: INFO: Pod "pod-configmaps-ae2b040f-055f-46a9-a810-5b1fefae7f2d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044938709s STEP: Saw pod success May 19 14:03:55.561: INFO: Pod "pod-configmaps-ae2b040f-055f-46a9-a810-5b1fefae7f2d" satisfied condition "success or failure" May 19 14:03:55.564: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-ae2b040f-055f-46a9-a810-5b1fefae7f2d container configmap-volume-test: STEP: delete the pod May 19 14:03:55.716: INFO: Waiting for pod pod-configmaps-ae2b040f-055f-46a9-a810-5b1fefae7f2d to disappear May 19 14:03:55.724: INFO: Pod pod-configmaps-ae2b040f-055f-46a9-a810-5b1fefae7f2d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:03:55.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4978" for this suite. May 19 14:04:01.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:04:01.814: INFO: namespace configmap-4978 deletion completed in 6.086548019s • [SLOW TEST:10.363 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:04:01.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service 
account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-bb84222f-5ff3-4289-8db2-39e204fb92df STEP: Creating a pod to test consume secrets May 19 14:04:01.912: INFO: Waiting up to 5m0s for pod "pod-secrets-b43d1436-2c05-49d0-a197-533b01f3f8a6" in namespace "secrets-3203" to be "success or failure" May 19 14:04:01.961: INFO: Pod "pod-secrets-b43d1436-2c05-49d0-a197-533b01f3f8a6": Phase="Pending", Reason="", readiness=false. Elapsed: 48.89827ms May 19 14:04:03.965: INFO: Pod "pod-secrets-b43d1436-2c05-49d0-a197-533b01f3f8a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052707626s May 19 14:04:05.969: INFO: Pod "pod-secrets-b43d1436-2c05-49d0-a197-533b01f3f8a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05700859s STEP: Saw pod success May 19 14:04:05.969: INFO: Pod "pod-secrets-b43d1436-2c05-49d0-a197-533b01f3f8a6" satisfied condition "success or failure" May 19 14:04:05.972: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-b43d1436-2c05-49d0-a197-533b01f3f8a6 container secret-env-test: STEP: delete the pod May 19 14:04:06.011: INFO: Waiting for pod pod-secrets-b43d1436-2c05-49d0-a197-533b01f3f8a6 to disappear May 19 14:04:06.022: INFO: Pod pod-secrets-b43d1436-2c05-49d0-a197-533b01f3f8a6 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:04:06.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3203" for this suite. 
May 19 14:04:12.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:04:12.122: INFO: namespace secrets-3203 deletion completed in 6.095786591s • [SLOW TEST:10.309 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:04:12.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 19 14:04:12.175: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 19 14:04:12.184: INFO: Pod name sample-pod: Found 0 pods out of 1 May 19 14:04:17.213: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 19 14:04:17.213: INFO: Creating deployment "test-rolling-update-deployment" May 19 14:04:17.219: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has 
May 19 14:04:17.228: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 19 14:04:19.271: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 19 14:04:19.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725493857, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725493857, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725493857, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725493857, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 14:04:21.348: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 19 14:04:21.359: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-5245,SelfLink:/apis/apps/v1/namespaces/deployment-5245/deployments/test-rolling-update-deployment,UID:b1aa45da-82ae-42d6-81a0-697644b1e221,ResourceVersion:11765806,Generation:1,CreationTimestamp:2020-05-19 14:04:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-19 14:04:17 +0000 UTC 2020-05-19 14:04:17 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-19 14:04:21 +0000 UTC 2020-05-19 14:04:17 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 19 14:04:21.362: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-5245,SelfLink:/apis/apps/v1/namespaces/deployment-5245/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:d3f35d78-aff3-4fb2-820a-7c3f7dfe2c1d,ResourceVersion:11765795,Generation:1,CreationTimestamp:2020-05-19 14:04:17 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b1aa45da-82ae-42d6-81a0-697644b1e221 0xc003060f37 0xc003060f38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 19 14:04:21.362: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 19 14:04:21.362: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-5245,SelfLink:/apis/apps/v1/namespaces/deployment-5245/replicasets/test-rolling-update-controller,UID:7ab58753-ff28-4525-8524-ca3c0c20193c,ResourceVersion:11765805,Generation:2,CreationTimestamp:2020-05-19 14:04:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b1aa45da-82ae-42d6-81a0-697644b1e221 0xc003060de7 0xc003060de8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 19 14:04:21.365: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-mwhtb" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-mwhtb,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-5245,SelfLink:/api/v1/namespaces/deployment-5245/pods/test-rolling-update-deployment-79f6b9d75c-mwhtb,UID:c71a67f1-1e31-497a-9e96-36f810ff2ece,ResourceVersion:11765794,Generation:0,CreationTimestamp:2020-05-19 14:04:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c d3f35d78-aff3-4fb2-820a-7c3f7dfe2c1d 0xc002eaf437 0xc002eaf438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-79hdn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-79hdn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-79hdn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eaf4b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eaf4d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 14:04:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 14:04:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 14:04:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 14:04:17 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.233,StartTime:2020-05-19 14:04:17 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-19 14:04:20 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://99fdd7ebd58bda4f510169b1e5c1b2c04ae09e138252a18d0d63648688df1170}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:04:21.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-5245" for this suite. May 19 14:04:27.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:04:27.593: INFO: namespace deployment-5245 deletion completed in 6.225074838s • [SLOW TEST:15.470 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:04:27.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 19 14:04:27.786: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:04:42.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "pods-2571" for this suite. May 19 14:04:48.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:04:48.261: INFO: namespace pods-2571 deletion completed in 6.091163585s • [SLOW TEST:20.668 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:04:48.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 19 14:04:48.323: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 19 14:04:48.330: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 14:04:48.368: INFO: Number of nodes with available pods: 0 May 19 14:04:48.368: INFO: Node iruya-worker is running more than one daemon pod May 19 14:04:49.373: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 14:04:49.376: INFO: Number of nodes with available pods: 0 May 19 14:04:49.376: INFO: Node iruya-worker is running more than one daemon pod May 19 14:04:50.374: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 14:04:50.378: INFO: Number of nodes with available pods: 0 May 19 14:04:50.378: INFO: Node iruya-worker is running more than one daemon pod May 19 14:04:51.390: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 14:04:51.393: INFO: Number of nodes with available pods: 0 May 19 14:04:51.393: INFO: Node iruya-worker is running more than one daemon pod May 19 14:04:52.389: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 14:04:52.392: INFO: Number of nodes with available pods: 0 May 19 14:04:52.392: INFO: Node iruya-worker is running more than one daemon pod May 19 14:04:53.378: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 14:04:53.383: INFO: Number of nodes with available pods: 2 May 19 14:04:53.383: INFO: Number 
of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 19 14:04:53.451: INFO: Wrong image for pod: daemon-set-52976. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 14:04:53.451: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 14:04:53.456: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 14:04:54.460: INFO: Wrong image for pod: daemon-set-52976. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 14:04:54.460: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 14:04:54.463: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 14:04:55.471: INFO: Wrong image for pod: daemon-set-52976. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 14:04:55.471: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 14:04:55.474: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 14:04:56.461: INFO: Wrong image for pod: daemon-set-52976. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 19 14:04:56.461: INFO: Wrong image for pod: daemon-set-5tgnw. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:04:56.465: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:04:57.463: INFO: Wrong image for pod: daemon-set-52976. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:04:57.463: INFO: Pod daemon-set-52976 is not available
May 19 14:04:57.463: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:04:57.466: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:04:58.461: INFO: Pod daemon-set-4fn74 is not available
May 19 14:04:58.461: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:04:58.466: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:04:59.459: INFO: Pod daemon-set-4fn74 is not available
May 19 14:04:59.459: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:04:59.468: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:00.477: INFO: Pod daemon-set-4fn74 is not available
May 19 14:05:00.477: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:05:00.481: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:01.459: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:05:01.463: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:02.460: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:05:02.460: INFO: Pod daemon-set-5tgnw is not available
May 19 14:05:02.464: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:03.460: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:05:03.460: INFO: Pod daemon-set-5tgnw is not available
May 19 14:05:03.463: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:04.461: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:05:04.461: INFO: Pod daemon-set-5tgnw is not available
May 19 14:05:04.465: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:05.477: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:05:05.477: INFO: Pod daemon-set-5tgnw is not available
May 19 14:05:05.480: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:06.461: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:05:06.461: INFO: Pod daemon-set-5tgnw is not available
May 19 14:05:06.465: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:07.462: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:05:07.462: INFO: Pod daemon-set-5tgnw is not available
May 19 14:05:07.465: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:08.459: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:05:08.459: INFO: Pod daemon-set-5tgnw is not available
May 19 14:05:08.463: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:09.463: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:05:09.463: INFO: Pod daemon-set-5tgnw is not available
May 19 14:05:09.466: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:10.460: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:05:10.460: INFO: Pod daemon-set-5tgnw is not available
May 19 14:05:10.464: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:11.460: INFO: Wrong image for pod: daemon-set-5tgnw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 19 14:05:11.460: INFO: Pod daemon-set-5tgnw is not available
May 19 14:05:11.465: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:12.468: INFO: Pod daemon-set-h52qn is not available
May 19 14:05:12.474: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
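The polling above shows a RollingUpdate in progress: each daemon pod still running the old image is reported, then killed and recreated with the new one, one node at a time. A minimal sketch of a DaemonSet using this update strategy (the names and labels here are illustrative; the actual manifest is generated by the e2e framework):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # name used by the test above
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate       # old pods are replaced node by node
    rollingUpdate:
      maxUnavailable: 1       # at most one node without a ready pod at a time
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0   # target image from the log
```

Patching `spec.template.spec.containers[0].image` (here, from docker.io/library/nginx:1.14-alpine to the redis test image) is what triggers the per-pod replacement logged in the "Wrong image for pod" polling.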
May 19 14:05:12.476: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:12.478: INFO: Number of nodes with available pods: 1
May 19 14:05:12.478: INFO: Node iruya-worker is running more than one daemon pod
May 19 14:05:13.482: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:13.485: INFO: Number of nodes with available pods: 1
May 19 14:05:13.485: INFO: Node iruya-worker is running more than one daemon pod
May 19 14:05:14.482: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:14.486: INFO: Number of nodes with available pods: 1
May 19 14:05:14.486: INFO: Node iruya-worker is running more than one daemon pod
May 19 14:05:15.487: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 19 14:05:15.491: INFO: Number of nodes with available pods: 2
May 19 14:05:15.491: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9878, will wait for the garbage collector to delete the pods
May 19 14:05:15.564: INFO: Deleting DaemonSet.extensions daemon-set took: 6.908433ms
May 19 14:05:15.864: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.212594ms
May 19 14:05:22.268: INFO: Number of nodes with available pods: 0
May 19 14:05:22.268: INFO: Number of running nodes: 0, number of available pods: 0
May 19 14:05:22.271: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9878/daemonsets","resourceVersion":"11766056"},"items":null}
May 19 14:05:22.273: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9878/pods","resourceVersion":"11766056"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:05:22.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9878" for this suite.
May 19 14:05:28.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:05:28.383: INFO: namespace daemonsets-9878 deletion completed in 6.096672349s
• [SLOW TEST:40.120 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:05:28.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7120
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 19 14:05:28.451: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 19 14:05:52.595: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.236:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7120 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 14:05:52.595: INFO: >>> kubeConfig: /root/.kube/config
I0519 14:05:52.623657 6 log.go:172] (0xc0030fa8f0) (0xc001f34960) Create stream
I0519 14:05:52.623691 6 log.go:172] (0xc0030fa8f0) (0xc001f34960) Stream added, broadcasting: 1
I0519 14:05:52.625667 6 log.go:172] (0xc0030fa8f0) Reply frame received for 1
I0519 14:05:52.625693 6 log.go:172] (0xc0030fa8f0) (0xc001f34a00) Create stream
I0519 14:05:52.625705 6 log.go:172] (0xc0030fa8f0) (0xc001f34a00) Stream added, broadcasting: 3
I0519 14:05:52.626670 6 log.go:172] (0xc0030fa8f0) Reply frame received for 3
I0519 14:05:52.626697 6 log.go:172] (0xc0030fa8f0) (0xc001f34aa0) Create stream
I0519 14:05:52.626707 6 log.go:172] (0xc0030fa8f0) (0xc001f34aa0) Stream added, broadcasting: 5
I0519 14:05:52.627467 6 log.go:172] (0xc0030fa8f0) Reply frame received for 5
I0519 14:05:52.733820 6 log.go:172] (0xc0030fa8f0) Data frame received for 3
I0519 14:05:52.733846 6 log.go:172] (0xc001f34a00) (3) Data frame handling
I0519 14:05:52.733862 6 log.go:172] (0xc001f34a00) (3) Data frame sent
I0519 14:05:52.733910 6 log.go:172] (0xc0030fa8f0) Data frame received for 3
I0519 14:05:52.733929 6 log.go:172] (0xc001f34a00) (3) Data frame handling
I0519 14:05:52.734129 6 log.go:172] (0xc0030fa8f0) Data frame received for 5
I0519 14:05:52.734160 6 log.go:172] (0xc001f34aa0) (5) Data frame handling
I0519 14:05:52.736387 6 log.go:172] (0xc0030fa8f0) Data frame received for 1
I0519 14:05:52.736408 6 log.go:172] (0xc001f34960) (1) Data frame handling
I0519 14:05:52.736469 6 log.go:172] (0xc001f34960) (1) Data frame sent
I0519 14:05:52.736491 6 log.go:172] (0xc0030fa8f0) (0xc001f34960) Stream removed, broadcasting: 1
I0519 14:05:52.736511 6 log.go:172] (0xc0030fa8f0) Go away received
I0519 14:05:52.736599 6 log.go:172] (0xc0030fa8f0) (0xc001f34960) Stream removed, broadcasting: 1
I0519 14:05:52.736615 6 log.go:172] (0xc0030fa8f0) (0xc001f34a00) Stream removed, broadcasting: 3
I0519 14:05:52.736623 6 log.go:172] (0xc0030fa8f0) (0xc001f34aa0) Stream removed, broadcasting: 5
May 19 14:05:52.736: INFO: Found all expected endpoints: [netserver-0]
May 19 14:05:52.740: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.134:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7120 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 14:05:52.740: INFO: >>> kubeConfig: /root/.kube/config
I0519 14:05:52.767065 6 log.go:172] (0xc003152580) (0xc002a40320) Create stream
I0519 14:05:52.767091 6 log.go:172] (0xc003152580) (0xc002a40320) Stream added, broadcasting: 1
I0519 14:05:52.769508 6 log.go:172] (0xc003152580) Reply frame received for 1
I0519 14:05:52.769544 6 log.go:172] (0xc003152580) (0xc001fed5e0) Create stream
I0519 14:05:52.769561 6 log.go:172] (0xc003152580) (0xc001fed5e0) Stream added, broadcasting: 3
I0519 14:05:52.770481 6 log.go:172] (0xc003152580) Reply frame received for 3
I0519 14:05:52.770506 6 log.go:172] (0xc003152580) (0xc002a403c0) Create stream
I0519 14:05:52.770514 6 log.go:172] (0xc003152580) (0xc002a403c0) Stream added, broadcasting: 5
I0519 14:05:52.771393 6 log.go:172] (0xc003152580) Reply frame received for 5
I0519 14:05:52.847592 6 log.go:172] (0xc003152580) Data frame received for 5
I0519 14:05:52.847620 6 log.go:172] (0xc002a403c0) (5) Data frame handling
I0519 14:05:52.847644 6 log.go:172] (0xc003152580) Data frame received for 3
I0519 14:05:52.847666 6 log.go:172] (0xc001fed5e0) (3) Data frame handling
I0519 14:05:52.847692 6 log.go:172] (0xc001fed5e0) (3) Data frame sent
I0519 14:05:52.847719 6 log.go:172] (0xc003152580) Data frame received for 3
I0519 14:05:52.847740 6 log.go:172] (0xc001fed5e0) (3) Data frame handling
I0519 14:05:52.849289 6 log.go:172] (0xc003152580) Data frame received for 1
I0519 14:05:52.849310 6 log.go:172] (0xc002a40320) (1) Data frame handling
I0519 14:05:52.849338 6 log.go:172] (0xc002a40320) (1) Data frame sent
I0519 14:05:52.849353 6 log.go:172] (0xc003152580) (0xc002a40320) Stream removed, broadcasting: 1
I0519 14:05:52.849404 6 log.go:172] (0xc003152580) Go away received
I0519 14:05:52.849436 6 log.go:172] (0xc003152580) (0xc002a40320) Stream removed, broadcasting: 1
I0519 14:05:52.849449 6 log.go:172] (0xc003152580) (0xc001fed5e0) Stream removed, broadcasting: 3
I0519 14:05:52.849458 6 log.go:172] (0xc003152580) (0xc002a403c0) Stream removed, broadcasting: 5
May 19 14:05:52.849: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:05:52.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7120" for this suite.
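The ExecWithOptions entries show how connectivity is verified: from host-test-container-pod, curl is run against each netserver pod's /hostName endpoint on port 8080, and the test passes once every expected hostname is returned ("Found all expected endpoints"). A rough sketch of one such netserver pod; the image, tag, and label key below are assumptions for illustration, not taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netserver-0                # one netserver pod is created per schedulable node
  labels:
    selector: netserver            # hypothetical label matched by the test's selector
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/netexec:1.1   # assumed image; serves /hostName on 8080
    ports:
    - containerPort: 8080
```

The probe itself is the command visible in the log: `curl -g -q -s --max-time 15 --connect-timeout 1 http://<pod IP>:8080/hostName`, executed inside the host-network helper pod.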
May 19 14:06:14.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:06:14.973: INFO: namespace pod-network-test-7120 deletion completed in 22.120301598s
• [SLOW TEST:46.590 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:06:14.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 19 14:06:22.440: INFO: 0 pods remaining
May 19 14:06:22.440: INFO: 0 pods has nil DeletionTimestamp
May 19 14:06:22.440: INFO:
STEP: Gathering metrics
W0519 14:06:24.009775 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 19 14:06:24.009: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:06:24.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8833" for this suite.
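"Keep the rc around until all its pods are deleted" is the behavior of foreground cascading deletion: the ReplicationController receives a deletionTimestamp but is only removed after the garbage collector has deleted its dependent pods. A sketch of the delete options the test name refers to (the wire shape shown here is illustrative; the e2e test sets this through the client library):

```yaml
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground   # owner outlives its dependents until they are gone
```

For contrast, `Background` removes the owner immediately and collects the pods afterwards, and `Orphan` deletes the owner while leaving the pods running.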
May 19 14:06:30.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:06:30.180: INFO: namespace gc-8833 deletion completed in 6.134976742s
• [SLOW TEST:15.207 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:06:30.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:06:30.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8786" for this suite.
May 19 14:06:36.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:06:36.378: INFO: namespace services-8786 deletion completed in 6.092839259s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:6.198 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:06:36.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-8223c81b-97ed-4ba3-86a2-972778971c49
STEP: Creating a pod to test consume configMaps
May 19 14:06:36.489: INFO: Waiting up to 5m0s for pod "pod-configmaps-121cd579-c61f-4159-af0c-2ecb2c516b27" in namespace "configmap-1680" to be "success or failure"
May 19 14:06:36.504: INFO: Pod "pod-configmaps-121cd579-c61f-4159-af0c-2ecb2c516b27": Phase="Pending", Reason="", readiness=false. Elapsed: 15.174372ms
May 19 14:06:38.509: INFO: Pod "pod-configmaps-121cd579-c61f-4159-af0c-2ecb2c516b27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01956528s
May 19 14:06:40.513: INFO: Pod "pod-configmaps-121cd579-c61f-4159-af0c-2ecb2c516b27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02411542s
STEP: Saw pod success
May 19 14:06:40.513: INFO: Pod "pod-configmaps-121cd579-c61f-4159-af0c-2ecb2c516b27" satisfied condition "success or failure"
May 19 14:06:40.516: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-121cd579-c61f-4159-af0c-2ecb2c516b27 container configmap-volume-test:
STEP: delete the pod
May 19 14:06:40.538: INFO: Waiting for pod pod-configmaps-121cd579-c61f-4159-af0c-2ecb2c516b27 to disappear
May 19 14:06:40.547: INFO: Pod pod-configmaps-121cd579-c61f-4159-af0c-2ecb2c516b27 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:06:40.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1680" for this suite.
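"Consumable from pods in volume with mappings" means projecting selected ConfigMap keys to explicit paths via `items`, rather than mounting every key at its own name. A minimal sketch with illustrative names (the test generates unique ones, as seen in the log):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                                      # illustrative image
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1            # only keys listed here are projected…
        path: path/to/data-1   # …and at this relative path, not at /data-1
```

The container prints the mapped value and exits, which is the "success or failure" pattern the log polls for.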
May 19 14:06:46.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:06:46.651: INFO: namespace configmap-1680 deletion completed in 6.100816688s
• [SLOW TEST:10.272 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:06:46.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
May 19 14:06:47.271: INFO: created pod pod-service-account-defaultsa
May 19 14:06:47.271: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 19 14:06:47.279: INFO: created pod pod-service-account-mountsa
May 19 14:06:47.279: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 19 14:06:47.306: INFO: created pod pod-service-account-nomountsa
May 19 14:06:47.306: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 19 14:06:47.321: INFO: created pod pod-service-account-defaultsa-mountspec
May 19 14:06:47.321: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 19 14:06:47.343: INFO: created pod pod-service-account-mountsa-mountspec
May 19 14:06:47.344: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 19 14:06:47.389: INFO: created pod pod-service-account-nomountsa-mountspec
May 19 14:06:47.389: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 19 14:06:47.398: INFO: created pod pod-service-account-defaultsa-nomountspec
May 19 14:06:47.398: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 19 14:06:47.439: INFO: created pod pod-service-account-mountsa-nomountspec
May 19 14:06:47.439: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 19 14:06:47.462: INFO: created pod pod-service-account-nomountsa-nomountspec
May 19 14:06:47.462: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:06:47.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6138" for this suite.
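The mount-true/mount-false matrix above exercises `automountServiceAccountToken` at both levels: it can be set on the ServiceAccount or on the pod spec, and the pod-level field wins when both are set (which is why nomountsa-mountspec mounts the token while defaultsa-nomountspec does not). A sketch with illustrative names:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false   # opt out at the service-account level
---
apiVersion: v1
kind: Pod
metadata:
  name: nomount-pod
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false # pod-level setting overrides the SA's when present
  containers:
  - name: main
    image: busybox                    # illustrative image
    command: ["sleep", "3600"]
```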
May 19 14:07:17.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:07:17.720: INFO: namespace svcaccounts-6138 deletion completed in 30.184396587s
• [SLOW TEST:31.069 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:07:17.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:07:17.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6303" for this suite.
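The QOS class verified in the "Pods Set QOS Class" test is derived by the API server from the pod's resource requests and limits: Guaranteed when every container has limits equal to requests for cpu and memory, Burstable when at least one request or limit is set but the Guaranteed condition fails, otherwise BestEffort. For example, a pod that would land in the Guaranteed class (values illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-pod          # illustrative name
spec:
  containers:
  - name: main
    image: nginx         # illustrative image
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m        # limits == requests for every resource in every container
        memory: 100Mi    # → status.qosClass: Guaranteed
```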
May 19 14:07:39.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:07:39.956: INFO: namespace pods-6303 deletion completed in 22.137837518s
• [SLOW TEST:22.235 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:07:39.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-1c1284c5-c9b2-4356-9fd9-b8b98d7fa102
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:07:40.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5740" for this suite.
May 19 14:07:46.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:07:46.139: INFO: namespace configmap-5740 deletion completed in 6.104869895s
• [SLOW TEST:6.182 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:07:46.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 19 14:07:51.379: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:07:51.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4044" for this suite.
May 19 14:07:57.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:07:57.490: INFO: namespace container-runtime-4044 deletion completed in 6.069273625s
• [SLOW TEST:11.350 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:07:57.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 19 14:07:57.527: INFO: Waiting up to 5m0s for pod "downward-api-e8997a2f-6fae-4f17-a179-0403bdc9b946" in namespace "downward-api-6163" to be "success or failure"
May 19 14:07:57.547: INFO: Pod "downward-api-e8997a2f-6fae-4f17-a179-0403bdc9b946": Phase="Pending", Reason="", readiness=false. Elapsed: 20.324395ms
May 19 14:07:59.623: INFO: Pod "downward-api-e8997a2f-6fae-4f17-a179-0403bdc9b946": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096548398s
May 19 14:08:01.628: INFO: Pod "downward-api-e8997a2f-6fae-4f17-a179-0403bdc9b946": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101307213s
STEP: Saw pod success
May 19 14:08:01.628: INFO: Pod "downward-api-e8997a2f-6fae-4f17-a179-0403bdc9b946" satisfied condition "success or failure"
May 19 14:08:01.631: INFO: Trying to get logs from node iruya-worker2 pod downward-api-e8997a2f-6fae-4f17-a179-0403bdc9b946 container dapi-container:
STEP: delete the pod
May 19 14:08:01.658: INFO: Waiting for pod downward-api-e8997a2f-6fae-4f17-a179-0403bdc9b946 to disappear
May 19 14:08:01.693: INFO: Pod downward-api-e8997a2f-6fae-4f17-a179-0403bdc9b946 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:08:01.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6163" for this suite.
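The downward API fields checked here are exposed to the container as environment variables via `fieldRef`. A minimal sketch (pod, image, and env var names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-pod   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox         # illustrative image
    command: ["sh", "-c", "env"]   # print the injected values and exit
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name       # the pod's own name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace  # the namespace it runs in
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP        # assigned at runtime
```

The test then inspects the container's output (the `Trying to get logs` line above) to confirm each value matches the pod's actual metadata.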
May 19 14:08:07.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:08:07.781: INFO: namespace downward-api-6163 deletion completed in 6.085288079s

• [SLOW TEST:10.291 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:08:07.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
May 19 14:08:07.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
May 19 14:08:08.032: INFO: stderr: ""
May 19 14:08:08.032: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:08:08.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6409" for this suite.
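The pass condition behind that `kubectl api-versions` call is simple: the literal group/version `v1` must appear as its own line of stdout. A minimal sketch of the check, replayed against an abridged copy of the output captured above (the full list is in the log):

```python
# Replay the "is v1 an available api version" check against an abridged
# copy of the stdout logged above. Membership is per full line, so
# entries like "apps/v1" do not satisfy it.
stdout = (
    "admissionregistration.k8s.io/v1beta1\n"
    "apps/v1\n"
    "batch/v1\n"
    "networking.k8s.io/v1\n"
    "storage.k8s.io/v1\n"
    "v1\n"
)
versions = stdout.splitlines()
print("v1" in versions)  # prints: True
```

Exact-line membership matters: a substring test would pass spuriously on `apps/v1` even in a cluster where the core `v1` group were missing.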
May 19 14:08:14.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:08:14.143: INFO: namespace kubectl-6409 deletion completed in 6.100051301s

• [SLOW TEST:6.361 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl api-versions
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:08:14.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 19 14:08:14.193: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d19dc054-71af-4133-905f-04a7b2d7124e" in namespace "downward-api-5006" to be "success or failure"
May 19 14:08:14.208: INFO: Pod "downwardapi-volume-d19dc054-71af-4133-905f-04a7b2d7124e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.126236ms
May 19 14:08:16.348: INFO: Pod "downwardapi-volume-d19dc054-71af-4133-905f-04a7b2d7124e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154674602s
May 19 14:08:18.352: INFO: Pod "downwardapi-volume-d19dc054-71af-4133-905f-04a7b2d7124e": Phase="Running", Reason="", readiness=true. Elapsed: 4.1588494s
May 19 14:08:20.356: INFO: Pod "downwardapi-volume-d19dc054-71af-4133-905f-04a7b2d7124e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.1627963s
STEP: Saw pod success
May 19 14:08:20.356: INFO: Pod "downwardapi-volume-d19dc054-71af-4133-905f-04a7b2d7124e" satisfied condition "success or failure"
May 19 14:08:20.359: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d19dc054-71af-4133-905f-04a7b2d7124e container client-container:
STEP: delete the pod
May 19 14:08:20.397: INFO: Waiting for pod downwardapi-volume-d19dc054-71af-4133-905f-04a7b2d7124e to disappear
May 19 14:08:20.412: INFO: Pod downwardapi-volume-d19dc054-71af-4133-905f-04a7b2d7124e no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:08:20.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5006" for this suite.
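What "node allocatable as default memory limit" means here: the pod mounts a downward-API volume exposing `limits.memory` via `resourceFieldRef`, but sets no memory limit on the container, so the kubelet substitutes the node's allocatable memory. A rough sketch of such a pod (names and image are illustrative assumptions; the `resourceFieldRef` mechanism is the standard core/v1 one):

```yaml
# Illustrative sketch only; pod/volume names and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # Note: no resources.limits.memory is set, which is the point of the test.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "memory_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory   # defaults to node allocatable when no limit is set
```

The test asserts that the file's contents equal the node's allocatable memory rather than zero or an error.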
May 19 14:08:26.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:08:26.503: INFO: namespace downward-api-5006 deletion completed in 6.086555943s

• [SLOW TEST:12.360 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:08:26.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:08:26.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8143" for this suite.
May 19 14:08:32.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:08:32.788: INFO: namespace kubelet-test-8143 deletion completed in 6.111636644s

• [SLOW TEST:6.284 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:08:32.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
May 19 14:08:32.821: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 19 14:08:32.829: INFO: Waiting for terminating namespaces to be deleted...
May 19 14:08:32.845: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
May 19 14:08:32.850: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 19 14:08:32.850: INFO: Container kube-proxy ready: true, restart count 0
May 19 14:08:32.850: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 19 14:08:32.850: INFO: Container kindnet-cni ready: true, restart count 0
May 19 14:08:32.850: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
May 19 14:08:32.855: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
May 19 14:08:32.855: INFO: Container kube-proxy ready: true, restart count 0
May 19 14:08:32.855: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
May 19 14:08:32.855: INFO: Container kindnet-cni ready: true, restart count 0
May 19 14:08:32.855: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
May 19 14:08:32.855: INFO: Container coredns ready: true, restart count 0
May 19 14:08:32.855: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
May 19 14:08:32.855: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c8d610b9-96b9-41aa-beb4-9c5ece87cebc 42
STEP: Trying to relaunch the pod, now with labels.
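The steps above label one node with a random key/value and then relaunch the pod with a matching `nodeSelector`, so the scheduler has exactly one feasible node. A sketch of the relaunched pod, using the label key and value (`42`) that appear in the log (pod name and image are illustrative assumptions):

```yaml
# Illustrative sketch only; pod name and image are assumptions.
# The nodeSelector key/value are the ones logged above.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    kubernetes.io/e2e-c8d610b9-96b9-41aa-beb4-9c5ece87cebc: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1
```

The test then verifies the pod was scheduled onto the labeled node (iruya-worker), and finally removes the label again, as the next log entries show.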
STEP: removing the label kubernetes.io/e2e-c8d610b9-96b9-41aa-beb4-9c5ece87cebc off the node iruya-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c8d610b9-96b9-41aa-beb4-9c5ece87cebc
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:08:41.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7355" for this suite.
May 19 14:08:55.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:08:55.191: INFO: namespace sched-pred-7355 deletion completed in 14.090031837s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:22.401 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:08:55.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0519 14:09:07.119743 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 19 14:09:07.119: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:09:07.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3914" for this suite.
May 19 14:09:13.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:09:13.218: INFO: namespace gc-3914 deletion completed in 6.095707641s

• [SLOW TEST:18.027 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:09:13.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 19 14:09:13.350: INFO: Waiting up to 5m0s for pod "downwardapi-volume-953a84f6-91df-4946-9c85-e95e62a9c7bc" in namespace "projected-9326" to be "success or failure"
May 19 14:09:13.366: INFO: Pod "downwardapi-volume-953a84f6-91df-4946-9c85-e95e62a9c7bc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.635442ms
May 19 14:09:15.403: INFO: Pod "downwardapi-volume-953a84f6-91df-4946-9c85-e95e62a9c7bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053313851s
May 19 14:09:17.407: INFO: Pod "downwardapi-volume-953a84f6-91df-4946-9c85-e95e62a9c7bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056918731s
May 19 14:09:19.412: INFO: Pod "downwardapi-volume-953a84f6-91df-4946-9c85-e95e62a9c7bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061710246s
STEP: Saw pod success
May 19 14:09:19.412: INFO: Pod "downwardapi-volume-953a84f6-91df-4946-9c85-e95e62a9c7bc" satisfied condition "success or failure"
May 19 14:09:19.416: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-953a84f6-91df-4946-9c85-e95e62a9c7bc container client-container:
STEP: delete the pod
May 19 14:09:19.447: INFO: Waiting for pod downwardapi-volume-953a84f6-91df-4946-9c85-e95e62a9c7bc to disappear
May 19 14:09:19.467: INFO: Pod downwardapi-volume-953a84f6-91df-4946-9c85-e95e62a9c7bc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:09:19.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9326" for this suite.
May 19 14:09:25.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:09:25.566: INFO: namespace projected-9326 deletion completed in 6.095486327s

• [SLOW TEST:12.347 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:09:25.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:09:29.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3439" for this suite.
May 19 14:09:35.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:09:35.869: INFO: namespace emptydir-wrapper-3439 deletion completed in 6.082633063s

• [SLOW TEST:10.303 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:09:35.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
May 19 14:09:35.933: INFO: Waiting up to 5m0s for pod "pod-1e81283f-aebc-440c-ab58-3036af55148e" in namespace "emptydir-5029" to be "success or failure"
May 19 14:09:35.947: INFO: Pod "pod-1e81283f-aebc-440c-ab58-3036af55148e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.689324ms
May 19 14:09:37.951: INFO: Pod "pod-1e81283f-aebc-440c-ab58-3036af55148e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018306086s
May 19 14:09:39.956: INFO: Pod "pod-1e81283f-aebc-440c-ab58-3036af55148e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022709631s
STEP: Saw pod success
May 19 14:09:39.956: INFO: Pod "pod-1e81283f-aebc-440c-ab58-3036af55148e" satisfied condition "success or failure"
May 19 14:09:39.959: INFO: Trying to get logs from node iruya-worker pod pod-1e81283f-aebc-440c-ab58-3036af55148e container test-container:
STEP: delete the pod
May 19 14:09:40.008: INFO: Waiting for pod pod-1e81283f-aebc-440c-ab58-3036af55148e to disappear
May 19 14:09:40.012: INFO: Pod pod-1e81283f-aebc-440c-ab58-3036af55148e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:09:40.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5029" for this suite.
May 19 14:09:46.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:09:46.212: INFO: namespace emptydir-5029 deletion completed in 6.19573798s

• [SLOW TEST:10.342 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:09:46.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
May 19 14:09:46.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6500'
May 19 14:09:49.064: INFO: stderr: ""
May 19 14:09:49.064: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
May 19 14:09:50.068: INFO: Selector matched 1 pods for map[app:redis]
May 19 14:09:50.068: INFO: Found 0 / 1
May 19 14:09:51.069: INFO: Selector matched 1 pods for map[app:redis]
May 19 14:09:51.069: INFO: Found 0 / 1
May 19 14:09:52.068: INFO: Selector matched 1 pods for map[app:redis]
May 19 14:09:52.068: INFO: Found 0 / 1
May 19 14:09:53.068: INFO: Selector matched 1 pods for map[app:redis]
May 19 14:09:53.068: INFO: Found 1 / 1
May 19 14:09:53.068: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
May 19 14:09:53.070: INFO: Selector matched 1 pods for map[app:redis]
May 19 14:09:53.070: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 19 14:09:53.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-6ltw9 --namespace=kubectl-6500 -p {"metadata":{"annotations":{"x":"y"}}}'
May 19 14:09:53.178: INFO: stderr: ""
May 19 14:09:53.178: INFO: stdout: "pod/redis-master-6ltw9 patched\n"
STEP: checking annotations
May 19 14:09:53.180: INFO: Selector matched 1 pods for map[app:redis]
May 19 14:09:53.181: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
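The `kubectl patch ... -p {"metadata":{"annotations":{"x":"y"}}}` call above merges nested maps into the live object rather than replacing it, which is why the pod's existing metadata survives and only the `x: y` annotation is added. A simplified sketch of that merge behavior (for plain map fields like annotations, kubectl's default strategic merge patch and RFC 7386 JSON Merge Patch both act this way; the null-deletes-a-key rule of real merge patch is omitted, and the pre-existing annotation below is an illustrative assumption):

```python
# Simplified merge-patch: recursively merge nested dicts, overwrite scalars.
# (Real JSON Merge Patch additionally deletes keys whose patch value is null.)
def merge_patch(obj, patch):
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(obj.get(key), dict):
            merge_patch(obj[key], value)  # descend instead of replacing the map
        else:
            obj[key] = value
    return obj

# Pod name taken from the log; the pre-existing annotation is illustrative.
pod = {"metadata": {"name": "redis-master-6ltw9",
                    "annotations": {"app": "redis"}}}
merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
print(pod["metadata"]["annotations"])  # prints: {'app': 'redis', 'x': 'y'}
```

The test's "checking annotations" step then reads each matched pod back and asserts the `x: y` annotation is present.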
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:09:53.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6500" for this suite.
May 19 14:10:15.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:10:15.293: INFO: namespace kubectl-6500 deletion completed in 22.110042304s

• [SLOW TEST:29.081 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:10:15.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-40d3a5ec-567e-4865-a0c7-e489fbba63d2
STEP: Creating a pod to test consume configMaps
May 19 14:10:15.410: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d84c124a-8876-4af7-87da-cc00f47f8a42" in namespace "projected-3658" to be "success or failure"
May 19 14:10:15.426: INFO: Pod "pod-projected-configmaps-d84c124a-8876-4af7-87da-cc00f47f8a42": Phase="Pending", Reason="", readiness=false. Elapsed: 16.056541ms
May 19 14:10:17.430: INFO: Pod "pod-projected-configmaps-d84c124a-8876-4af7-87da-cc00f47f8a42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020042863s
May 19 14:10:19.434: INFO: Pod "pod-projected-configmaps-d84c124a-8876-4af7-87da-cc00f47f8a42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023922659s
STEP: Saw pod success
May 19 14:10:19.434: INFO: Pod "pod-projected-configmaps-d84c124a-8876-4af7-87da-cc00f47f8a42" satisfied condition "success or failure"
May 19 14:10:19.436: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-d84c124a-8876-4af7-87da-cc00f47f8a42 container projected-configmap-volume-test:
STEP: delete the pod
May 19 14:10:19.480: INFO: Waiting for pod pod-projected-configmaps-d84c124a-8876-4af7-87da-cc00f47f8a42 to disappear
May 19 14:10:19.486: INFO: Pod pod-projected-configmaps-d84c124a-8876-4af7-87da-cc00f47f8a42 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:10:19.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3658" for this suite.
May 19 14:10:25.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:10:25.580: INFO: namespace projected-3658 deletion completed in 6.08812846s

• [SLOW TEST:10.287 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:10:25.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0519 14:10:26.716609 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 19 14:10:26.716: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:10:26.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9894" for this suite.
May 19 14:10:32.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:10:32.860: INFO: namespace gc-9894 deletion completed in 6.140493818s • [SLOW TEST:7.279 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:10:32.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2947, will wait for the garbage collector to delete the pods May 19 14:10:39.007: INFO: Deleting Job.batch foo took: 6.207408ms May 19 14:10:39.307: INFO: Terminating Job.batch foo pods took: 300.268834ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:11:22.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2947" for this suite. 
May 19 14:11:28.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:11:28.320: INFO: namespace job-2947 deletion completed in 6.10326718s • [SLOW TEST:55.459 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:11:28.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4436 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4436 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4436 May 19 14:11:28.398: 
INFO: Found 0 stateful pods, waiting for 1 May 19 14:11:38.403: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 19 14:11:38.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4436 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 14:11:38.672: INFO: stderr: "I0519 14:11:38.554126 2212 log.go:172] (0xc000a60420) (0xc0009d46e0) Create stream\nI0519 14:11:38.554207 2212 log.go:172] (0xc000a60420) (0xc0009d46e0) Stream added, broadcasting: 1\nI0519 14:11:38.556167 2212 log.go:172] (0xc000a60420) Reply frame received for 1\nI0519 14:11:38.556205 2212 log.go:172] (0xc000a60420) (0xc0001e2320) Create stream\nI0519 14:11:38.556216 2212 log.go:172] (0xc000a60420) (0xc0001e2320) Stream added, broadcasting: 3\nI0519 14:11:38.557076 2212 log.go:172] (0xc000a60420) Reply frame received for 3\nI0519 14:11:38.557314 2212 log.go:172] (0xc000a60420) (0xc0001e23c0) Create stream\nI0519 14:11:38.557343 2212 log.go:172] (0xc000a60420) (0xc0001e23c0) Stream added, broadcasting: 5\nI0519 14:11:38.558295 2212 log.go:172] (0xc000a60420) Reply frame received for 5\nI0519 14:11:38.630611 2212 log.go:172] (0xc000a60420) Data frame received for 5\nI0519 14:11:38.630667 2212 log.go:172] (0xc0001e23c0) (5) Data frame handling\nI0519 14:11:38.630694 2212 log.go:172] (0xc0001e23c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0519 14:11:38.664359 2212 log.go:172] (0xc000a60420) Data frame received for 3\nI0519 14:11:38.664390 2212 log.go:172] (0xc0001e2320) (3) Data frame handling\nI0519 14:11:38.664409 2212 log.go:172] (0xc0001e2320) (3) Data frame sent\nI0519 14:11:38.664471 2212 log.go:172] (0xc000a60420) Data frame received for 3\nI0519 14:11:38.664484 2212 log.go:172] (0xc0001e2320) (3) Data frame handling\nI0519 14:11:38.664636 2212 log.go:172] 
(0xc000a60420) Data frame received for 5\nI0519 14:11:38.664647 2212 log.go:172] (0xc0001e23c0) (5) Data frame handling\nI0519 14:11:38.667531 2212 log.go:172] (0xc000a60420) Data frame received for 1\nI0519 14:11:38.667543 2212 log.go:172] (0xc0009d46e0) (1) Data frame handling\nI0519 14:11:38.667549 2212 log.go:172] (0xc0009d46e0) (1) Data frame sent\nI0519 14:11:38.667556 2212 log.go:172] (0xc000a60420) (0xc0009d46e0) Stream removed, broadcasting: 1\nI0519 14:11:38.667620 2212 log.go:172] (0xc000a60420) Go away received\nI0519 14:11:38.667783 2212 log.go:172] (0xc000a60420) (0xc0009d46e0) Stream removed, broadcasting: 1\nI0519 14:11:38.667795 2212 log.go:172] (0xc000a60420) (0xc0001e2320) Stream removed, broadcasting: 3\nI0519 14:11:38.667802 2212 log.go:172] (0xc000a60420) (0xc0001e23c0) Stream removed, broadcasting: 5\n" May 19 14:11:38.672: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 14:11:38.672: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 14:11:38.680: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 19 14:11:48.685: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 14:11:48.685: INFO: Waiting for statefulset status.replicas updated to 0 May 19 14:11:48.699: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999376s May 19 14:11:49.702: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99567764s May 19 14:11:50.707: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991887379s May 19 14:11:51.711: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.987282905s May 19 14:11:52.715: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.983270505s May 19 14:11:53.719: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.979081667s May 19 14:11:54.723: 
INFO: Verifying statefulset ss doesn't scale past 1 for another 3.975551854s May 19 14:11:55.728: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.971165385s May 19 14:11:56.733: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.966569341s May 19 14:11:57.737: INFO: Verifying statefulset ss doesn't scale past 1 for another 961.246306ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4436 May 19 14:11:58.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4436 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 14:11:58.959: INFO: stderr: "I0519 14:11:58.873587 2233 log.go:172] (0xc000118fd0) (0xc0003b2aa0) Create stream\nI0519 14:11:58.873675 2233 log.go:172] (0xc000118fd0) (0xc0003b2aa0) Stream added, broadcasting: 1\nI0519 14:11:58.875647 2233 log.go:172] (0xc000118fd0) Reply frame received for 1\nI0519 14:11:58.875716 2233 log.go:172] (0xc000118fd0) (0xc0008b4000) Create stream\nI0519 14:11:58.875735 2233 log.go:172] (0xc000118fd0) (0xc0008b4000) Stream added, broadcasting: 3\nI0519 14:11:58.876779 2233 log.go:172] (0xc000118fd0) Reply frame received for 3\nI0519 14:11:58.876819 2233 log.go:172] (0xc000118fd0) (0xc0003b2b40) Create stream\nI0519 14:11:58.876834 2233 log.go:172] (0xc000118fd0) (0xc0003b2b40) Stream added, broadcasting: 5\nI0519 14:11:58.878148 2233 log.go:172] (0xc000118fd0) Reply frame received for 5\nI0519 14:11:58.951766 2233 log.go:172] (0xc000118fd0) Data frame received for 3\nI0519 14:11:58.951801 2233 log.go:172] (0xc0008b4000) (3) Data frame handling\nI0519 14:11:58.951814 2233 log.go:172] (0xc0008b4000) (3) Data frame sent\nI0519 14:11:58.951830 2233 log.go:172] (0xc000118fd0) Data frame received for 5\nI0519 14:11:58.951845 2233 log.go:172] (0xc0003b2b40) (5) Data frame handling\nI0519 14:11:58.951853 2233 log.go:172] (0xc0003b2b40) (5) Data frame 
sent\nI0519 14:11:58.951870 2233 log.go:172] (0xc000118fd0) Data frame received for 5\nI0519 14:11:58.951876 2233 log.go:172] (0xc0003b2b40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0519 14:11:58.951885 2233 log.go:172] (0xc000118fd0) Data frame received for 3\nI0519 14:11:58.951907 2233 log.go:172] (0xc0008b4000) (3) Data frame handling\nI0519 14:11:58.953326 2233 log.go:172] (0xc000118fd0) Data frame received for 1\nI0519 14:11:58.953352 2233 log.go:172] (0xc0003b2aa0) (1) Data frame handling\nI0519 14:11:58.953366 2233 log.go:172] (0xc0003b2aa0) (1) Data frame sent\nI0519 14:11:58.953412 2233 log.go:172] (0xc000118fd0) (0xc0003b2aa0) Stream removed, broadcasting: 1\nI0519 14:11:58.953453 2233 log.go:172] (0xc000118fd0) Go away received\nI0519 14:11:58.953731 2233 log.go:172] (0xc000118fd0) (0xc0003b2aa0) Stream removed, broadcasting: 1\nI0519 14:11:58.953745 2233 log.go:172] (0xc000118fd0) (0xc0008b4000) Stream removed, broadcasting: 3\nI0519 14:11:58.953753 2233 log.go:172] (0xc000118fd0) (0xc0003b2b40) Stream removed, broadcasting: 5\n" May 19 14:11:58.959: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 14:11:58.959: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 14:11:58.962: INFO: Found 1 stateful pods, waiting for 3 May 19 14:12:08.968: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 19 14:12:08.968: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 19 14:12:08.968: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 19 14:12:08.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4436 ss-0 -- /bin/sh -x -c mv 
-v /usr/share/nginx/html/index.html /tmp/ || true' May 19 14:12:09.223: INFO: stderr: "I0519 14:12:09.121304 2254 log.go:172] (0xc000a0a420) (0xc0006f6960) Create stream\nI0519 14:12:09.121362 2254 log.go:172] (0xc000a0a420) (0xc0006f6960) Stream added, broadcasting: 1\nI0519 14:12:09.124026 2254 log.go:172] (0xc000a0a420) Reply frame received for 1\nI0519 14:12:09.124072 2254 log.go:172] (0xc000a0a420) (0xc000928000) Create stream\nI0519 14:12:09.124091 2254 log.go:172] (0xc000a0a420) (0xc000928000) Stream added, broadcasting: 3\nI0519 14:12:09.125437 2254 log.go:172] (0xc000a0a420) Reply frame received for 3\nI0519 14:12:09.125491 2254 log.go:172] (0xc000a0a420) (0xc0009280a0) Create stream\nI0519 14:12:09.125512 2254 log.go:172] (0xc000a0a420) (0xc0009280a0) Stream added, broadcasting: 5\nI0519 14:12:09.126448 2254 log.go:172] (0xc000a0a420) Reply frame received for 5\nI0519 14:12:09.218165 2254 log.go:172] (0xc000a0a420) Data frame received for 5\nI0519 14:12:09.218188 2254 log.go:172] (0xc0009280a0) (5) Data frame handling\nI0519 14:12:09.218197 2254 log.go:172] (0xc0009280a0) (5) Data frame sent\nI0519 14:12:09.218203 2254 log.go:172] (0xc000a0a420) Data frame received for 5\nI0519 14:12:09.218207 2254 log.go:172] (0xc0009280a0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0519 14:12:09.218229 2254 log.go:172] (0xc000a0a420) Data frame received for 3\nI0519 14:12:09.218266 2254 log.go:172] (0xc000928000) (3) Data frame handling\nI0519 14:12:09.218282 2254 log.go:172] (0xc000928000) (3) Data frame sent\nI0519 14:12:09.218291 2254 log.go:172] (0xc000a0a420) Data frame received for 3\nI0519 14:12:09.218311 2254 log.go:172] (0xc000928000) (3) Data frame handling\nI0519 14:12:09.219130 2254 log.go:172] (0xc000a0a420) Data frame received for 1\nI0519 14:12:09.219151 2254 log.go:172] (0xc0006f6960) (1) Data frame handling\nI0519 14:12:09.219160 2254 log.go:172] (0xc0006f6960) (1) Data frame sent\nI0519 14:12:09.219177 2254 log.go:172] 
(0xc000a0a420) (0xc0006f6960) Stream removed, broadcasting: 1\nI0519 14:12:09.219193 2254 log.go:172] (0xc000a0a420) Go away received\nI0519 14:12:09.219596 2254 log.go:172] (0xc000a0a420) (0xc0006f6960) Stream removed, broadcasting: 1\nI0519 14:12:09.219608 2254 log.go:172] (0xc000a0a420) (0xc000928000) Stream removed, broadcasting: 3\nI0519 14:12:09.219614 2254 log.go:172] (0xc000a0a420) (0xc0009280a0) Stream removed, broadcasting: 5\n" May 19 14:12:09.223: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 14:12:09.223: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 14:12:09.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4436 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 14:12:09.515: INFO: stderr: "I0519 14:12:09.348788 2274 log.go:172] (0xc000118dc0) (0xc0003bc6e0) Create stream\nI0519 14:12:09.348863 2274 log.go:172] (0xc000118dc0) (0xc0003bc6e0) Stream added, broadcasting: 1\nI0519 14:12:09.351538 2274 log.go:172] (0xc000118dc0) Reply frame received for 1\nI0519 14:12:09.351604 2274 log.go:172] (0xc000118dc0) (0xc00066a320) Create stream\nI0519 14:12:09.351651 2274 log.go:172] (0xc000118dc0) (0xc00066a320) Stream added, broadcasting: 3\nI0519 14:12:09.353504 2274 log.go:172] (0xc000118dc0) Reply frame received for 3\nI0519 14:12:09.353545 2274 log.go:172] (0xc000118dc0) (0xc0003bc000) Create stream\nI0519 14:12:09.353558 2274 log.go:172] (0xc000118dc0) (0xc0003bc000) Stream added, broadcasting: 5\nI0519 14:12:09.354576 2274 log.go:172] (0xc000118dc0) Reply frame received for 5\nI0519 14:12:09.420079 2274 log.go:172] (0xc000118dc0) Data frame received for 5\nI0519 14:12:09.420116 2274 log.go:172] (0xc0003bc000) (5) Data frame handling\nI0519 14:12:09.420143 2274 log.go:172] (0xc0003bc000) (5) Data frame sent\n+ mv -v 
/usr/share/nginx/html/index.html /tmp/\nI0519 14:12:09.506959 2274 log.go:172] (0xc000118dc0) Data frame received for 3\nI0519 14:12:09.507004 2274 log.go:172] (0xc00066a320) (3) Data frame handling\nI0519 14:12:09.507016 2274 log.go:172] (0xc00066a320) (3) Data frame sent\nI0519 14:12:09.507056 2274 log.go:172] (0xc000118dc0) Data frame received for 5\nI0519 14:12:09.507091 2274 log.go:172] (0xc0003bc000) (5) Data frame handling\nI0519 14:12:09.507132 2274 log.go:172] (0xc000118dc0) Data frame received for 3\nI0519 14:12:09.507150 2274 log.go:172] (0xc00066a320) (3) Data frame handling\nI0519 14:12:09.509540 2274 log.go:172] (0xc000118dc0) Data frame received for 1\nI0519 14:12:09.509584 2274 log.go:172] (0xc0003bc6e0) (1) Data frame handling\nI0519 14:12:09.509607 2274 log.go:172] (0xc0003bc6e0) (1) Data frame sent\nI0519 14:12:09.509653 2274 log.go:172] (0xc000118dc0) (0xc0003bc6e0) Stream removed, broadcasting: 1\nI0519 14:12:09.509713 2274 log.go:172] (0xc000118dc0) Go away received\nI0519 14:12:09.509972 2274 log.go:172] (0xc000118dc0) (0xc0003bc6e0) Stream removed, broadcasting: 1\nI0519 14:12:09.509991 2274 log.go:172] (0xc000118dc0) (0xc00066a320) Stream removed, broadcasting: 3\nI0519 14:12:09.509997 2274 log.go:172] (0xc000118dc0) (0xc0003bc000) Stream removed, broadcasting: 5\n" May 19 14:12:09.515: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 14:12:09.515: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 14:12:09.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4436 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 19 14:12:09.774: INFO: stderr: "I0519 14:12:09.647459 2295 log.go:172] (0xc000a1c420) (0xc0004b8820) Create stream\nI0519 14:12:09.647519 2295 log.go:172] (0xc000a1c420) (0xc0004b8820) Stream added, broadcasting: 1\nI0519 
14:12:09.649866 2295 log.go:172] (0xc000a1c420) Reply frame received for 1\nI0519 14:12:09.649908 2295 log.go:172] (0xc000a1c420) (0xc000954000) Create stream\nI0519 14:12:09.649922 2295 log.go:172] (0xc000a1c420) (0xc000954000) Stream added, broadcasting: 3\nI0519 14:12:09.650910 2295 log.go:172] (0xc000a1c420) Reply frame received for 3\nI0519 14:12:09.650940 2295 log.go:172] (0xc000a1c420) (0xc0004b88c0) Create stream\nI0519 14:12:09.650952 2295 log.go:172] (0xc000a1c420) (0xc0004b88c0) Stream added, broadcasting: 5\nI0519 14:12:09.651866 2295 log.go:172] (0xc000a1c420) Reply frame received for 5\nI0519 14:12:09.722155 2295 log.go:172] (0xc000a1c420) Data frame received for 5\nI0519 14:12:09.722186 2295 log.go:172] (0xc0004b88c0) (5) Data frame handling\nI0519 14:12:09.722203 2295 log.go:172] (0xc0004b88c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0519 14:12:09.765418 2295 log.go:172] (0xc000a1c420) Data frame received for 3\nI0519 14:12:09.765466 2295 log.go:172] (0xc000954000) (3) Data frame handling\nI0519 14:12:09.765490 2295 log.go:172] (0xc000954000) (3) Data frame sent\nI0519 14:12:09.765551 2295 log.go:172] (0xc000a1c420) Data frame received for 3\nI0519 14:12:09.765572 2295 log.go:172] (0xc000954000) (3) Data frame handling\nI0519 14:12:09.765811 2295 log.go:172] (0xc000a1c420) Data frame received for 5\nI0519 14:12:09.765822 2295 log.go:172] (0xc0004b88c0) (5) Data frame handling\nI0519 14:12:09.767855 2295 log.go:172] (0xc000a1c420) Data frame received for 1\nI0519 14:12:09.767898 2295 log.go:172] (0xc0004b8820) (1) Data frame handling\nI0519 14:12:09.767953 2295 log.go:172] (0xc0004b8820) (1) Data frame sent\nI0519 14:12:09.767982 2295 log.go:172] (0xc000a1c420) (0xc0004b8820) Stream removed, broadcasting: 1\nI0519 14:12:09.768066 2295 log.go:172] (0xc000a1c420) Go away received\nI0519 14:12:09.768449 2295 log.go:172] (0xc000a1c420) (0xc0004b8820) Stream removed, broadcasting: 1\nI0519 14:12:09.768475 2295 log.go:172] 
(0xc000a1c420) (0xc000954000) Stream removed, broadcasting: 3\nI0519 14:12:09.768488 2295 log.go:172] (0xc000a1c420) (0xc0004b88c0) Stream removed, broadcasting: 5\n" May 19 14:12:09.775: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 19 14:12:09.775: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 19 14:12:09.775: INFO: Waiting for statefulset status.replicas updated to 0 May 19 14:12:09.778: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 19 14:12:19.786: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 14:12:19.786: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 19 14:12:19.786: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 19 14:12:19.802: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999635s May 19 14:12:20.819: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991642264s May 19 14:12:21.825: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.97396064s May 19 14:12:22.837: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.968477729s May 19 14:12:23.843: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.956547667s May 19 14:12:24.848: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.950444964s May 19 14:12:25.855: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.945546485s May 19 14:12:26.861: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.937287294s May 19 14:12:27.866: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.932022367s May 19 14:12:28.880: INFO: Verifying statefulset ss doesn't scale past 3 for another 926.753827ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run 
in namespace statefulset-4436 May 19 14:12:29.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4436 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 14:12:30.110: INFO: stderr: "I0519 14:12:30.016255 2318 log.go:172] (0xc000a0e2c0) (0xc0008c8640) Create stream\nI0519 14:12:30.016314 2318 log.go:172] (0xc000a0e2c0) (0xc0008c8640) Stream added, broadcasting: 1\nI0519 14:12:30.018553 2318 log.go:172] (0xc000a0e2c0) Reply frame received for 1\nI0519 14:12:30.018603 2318 log.go:172] (0xc000a0e2c0) (0xc0008ea000) Create stream\nI0519 14:12:30.018623 2318 log.go:172] (0xc000a0e2c0) (0xc0008ea000) Stream added, broadcasting: 3\nI0519 14:12:30.019593 2318 log.go:172] (0xc000a0e2c0) Reply frame received for 3\nI0519 14:12:30.019625 2318 log.go:172] (0xc000a0e2c0) (0xc0008c86e0) Create stream\nI0519 14:12:30.019635 2318 log.go:172] (0xc000a0e2c0) (0xc0008c86e0) Stream added, broadcasting: 5\nI0519 14:12:30.020428 2318 log.go:172] (0xc000a0e2c0) Reply frame received for 5\nI0519 14:12:30.103264 2318 log.go:172] (0xc000a0e2c0) Data frame received for 5\nI0519 14:12:30.103299 2318 log.go:172] (0xc0008c86e0) (5) Data frame handling\nI0519 14:12:30.103309 2318 log.go:172] (0xc0008c86e0) (5) Data frame sent\nI0519 14:12:30.103317 2318 log.go:172] (0xc000a0e2c0) Data frame received for 5\nI0519 14:12:30.103324 2318 log.go:172] (0xc0008c86e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0519 14:12:30.103344 2318 log.go:172] (0xc000a0e2c0) Data frame received for 3\nI0519 14:12:30.103352 2318 log.go:172] (0xc0008ea000) (3) Data frame handling\nI0519 14:12:30.103360 2318 log.go:172] (0xc0008ea000) (3) Data frame sent\nI0519 14:12:30.103374 2318 log.go:172] (0xc000a0e2c0) Data frame received for 3\nI0519 14:12:30.103381 2318 log.go:172] (0xc0008ea000) (3) Data frame handling\nI0519 14:12:30.104618 2318 log.go:172] (0xc000a0e2c0) Data frame received for 1\nI0519 
14:12:30.104693 2318 log.go:172] (0xc0008c8640) (1) Data frame handling\nI0519 14:12:30.104717 2318 log.go:172] (0xc0008c8640) (1) Data frame sent\nI0519 14:12:30.104740 2318 log.go:172] (0xc000a0e2c0) (0xc0008c8640) Stream removed, broadcasting: 1\nI0519 14:12:30.104792 2318 log.go:172] (0xc000a0e2c0) Go away received\nI0519 14:12:30.105291 2318 log.go:172] (0xc000a0e2c0) (0xc0008c8640) Stream removed, broadcasting: 1\nI0519 14:12:30.105321 2318 log.go:172] (0xc000a0e2c0) (0xc0008ea000) Stream removed, broadcasting: 3\nI0519 14:12:30.105334 2318 log.go:172] (0xc000a0e2c0) (0xc0008c86e0) Stream removed, broadcasting: 5\n" May 19 14:12:30.110: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 14:12:30.111: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 14:12:30.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4436 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 14:12:30.347: INFO: stderr: "I0519 14:12:30.262752 2338 log.go:172] (0xc00093a420) (0xc0002c0820) Create stream\nI0519 14:12:30.262829 2338 log.go:172] (0xc00093a420) (0xc0002c0820) Stream added, broadcasting: 1\nI0519 14:12:30.265592 2338 log.go:172] (0xc00093a420) Reply frame received for 1\nI0519 14:12:30.265669 2338 log.go:172] (0xc00093a420) (0xc000784000) Create stream\nI0519 14:12:30.265709 2338 log.go:172] (0xc00093a420) (0xc000784000) Stream added, broadcasting: 3\nI0519 14:12:30.266986 2338 log.go:172] (0xc00093a420) Reply frame received for 3\nI0519 14:12:30.267037 2338 log.go:172] (0xc00093a420) (0xc0003c0320) Create stream\nI0519 14:12:30.267052 2338 log.go:172] (0xc00093a420) (0xc0003c0320) Stream added, broadcasting: 5\nI0519 14:12:30.268099 2338 log.go:172] (0xc00093a420) Reply frame received for 5\nI0519 14:12:30.336522 2338 log.go:172] (0xc00093a420) Data frame received for 
3\nI0519 14:12:30.336563 2338 log.go:172] (0xc000784000) (3) Data frame handling\nI0519 14:12:30.336582 2338 log.go:172] (0xc000784000) (3) Data frame sent\nI0519 14:12:30.336594 2338 log.go:172] (0xc00093a420) Data frame received for 3\nI0519 14:12:30.336604 2338 log.go:172] (0xc000784000) (3) Data frame handling\nI0519 14:12:30.336641 2338 log.go:172] (0xc00093a420) Data frame received for 5\nI0519 14:12:30.336652 2338 log.go:172] (0xc0003c0320) (5) Data frame handling\nI0519 14:12:30.336665 2338 log.go:172] (0xc0003c0320) (5) Data frame sent\nI0519 14:12:30.336676 2338 log.go:172] (0xc00093a420) Data frame received for 5\nI0519 14:12:30.336687 2338 log.go:172] (0xc0003c0320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0519 14:12:30.339519 2338 log.go:172] (0xc00093a420) Data frame received for 1\nI0519 14:12:30.339628 2338 log.go:172] (0xc0002c0820) (1) Data frame handling\nI0519 14:12:30.339672 2338 log.go:172] (0xc0002c0820) (1) Data frame sent\nI0519 14:12:30.339754 2338 log.go:172] (0xc00093a420) (0xc0002c0820) Stream removed, broadcasting: 1\nI0519 14:12:30.339823 2338 log.go:172] (0xc00093a420) Go away received\nI0519 14:12:30.340887 2338 log.go:172] (0xc00093a420) (0xc0002c0820) Stream removed, broadcasting: 1\nI0519 14:12:30.340908 2338 log.go:172] (0xc00093a420) (0xc000784000) Stream removed, broadcasting: 3\nI0519 14:12:30.340921 2338 log.go:172] (0xc00093a420) (0xc0003c0320) Stream removed, broadcasting: 5\n" May 19 14:12:30.347: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 14:12:30.347: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 14:12:30.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4436 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 19 14:12:30.552: INFO: stderr: "I0519 14:12:30.475336 2361 
log.go:172] (0xc0001166e0) (0xc0009d2640) Create stream\nI0519 14:12:30.475393 2361 log.go:172] (0xc0001166e0) (0xc0009d2640) Stream added, broadcasting: 1\nI0519 14:12:30.477966 2361 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0519 14:12:30.477999 2361 log.go:172] (0xc0001166e0) (0xc000940000) Create stream\nI0519 14:12:30.478016 2361 log.go:172] (0xc0001166e0) (0xc000940000) Stream added, broadcasting: 3\nI0519 14:12:30.479320 2361 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0519 14:12:30.479404 2361 log.go:172] (0xc0001166e0) (0xc000638280) Create stream\nI0519 14:12:30.479445 2361 log.go:172] (0xc0001166e0) (0xc000638280) Stream added, broadcasting: 5\nI0519 14:12:30.480524 2361 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0519 14:12:30.545673 2361 log.go:172] (0xc0001166e0) Data frame received for 3\nI0519 14:12:30.545703 2361 log.go:172] (0xc000940000) (3) Data frame handling\nI0519 14:12:30.545715 2361 log.go:172] (0xc000940000) (3) Data frame sent\nI0519 14:12:30.545737 2361 log.go:172] (0xc0001166e0) Data frame received for 3\nI0519 14:12:30.545762 2361 log.go:172] (0xc000940000) (3) Data frame handling\nI0519 14:12:30.545804 2361 log.go:172] (0xc0001166e0) Data frame received for 5\nI0519 14:12:30.545827 2361 log.go:172] (0xc000638280) (5) Data frame handling\nI0519 14:12:30.545851 2361 log.go:172] (0xc000638280) (5) Data frame sent\nI0519 14:12:30.545870 2361 log.go:172] (0xc0001166e0) Data frame received for 5\nI0519 14:12:30.545888 2361 log.go:172] (0xc000638280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0519 14:12:30.547465 2361 log.go:172] (0xc0001166e0) Data frame received for 1\nI0519 14:12:30.547483 2361 log.go:172] (0xc0009d2640) (1) Data frame handling\nI0519 14:12:30.547494 2361 log.go:172] (0xc0009d2640) (1) Data frame sent\nI0519 14:12:30.547526 2361 log.go:172] (0xc0001166e0) (0xc0009d2640) Stream removed, broadcasting: 1\nI0519 14:12:30.547602 2361 log.go:172] (0xc0001166e0) 
Go away received\nI0519 14:12:30.547806 2361 log.go:172] (0xc0001166e0) (0xc0009d2640) Stream removed, broadcasting: 1\nI0519 14:12:30.547822 2361 log.go:172] (0xc0001166e0) (0xc000940000) Stream removed, broadcasting: 3\nI0519 14:12:30.547833 2361 log.go:172] (0xc0001166e0) (0xc000638280) Stream removed, broadcasting: 5\n" May 19 14:12:30.553: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 19 14:12:30.553: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 19 14:12:30.553: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 19 14:13:10.568: INFO: Deleting all statefulset in ns statefulset-4436 May 19 14:13:10.571: INFO: Scaling statefulset ss to 0 May 19 14:13:10.580: INFO: Waiting for statefulset status.replicas updated to 0 May 19 14:13:10.583: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:13:10.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4436" for this suite. 
May 19 14:13:16.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:13:16.716: INFO: namespace statefulset-4436 deletion completed in 6.118380154s • [SLOW TEST:108.396 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:13:16.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 19 14:13:16.791: INFO: Waiting up to 5m0s for pod "pod-868118b9-66d7-4355-85e8-0116a08cbf52" in namespace "emptydir-1590" to be "success or failure" May 19 14:13:16.796: INFO: Pod "pod-868118b9-66d7-4355-85e8-0116a08cbf52": Phase="Pending", Reason="", readiness=false. Elapsed: 5.284798ms May 19 14:13:18.800: INFO: Pod "pod-868118b9-66d7-4355-85e8-0116a08cbf52": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008993902s May 19 14:13:20.804: INFO: Pod "pod-868118b9-66d7-4355-85e8-0116a08cbf52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013266716s STEP: Saw pod success May 19 14:13:20.804: INFO: Pod "pod-868118b9-66d7-4355-85e8-0116a08cbf52" satisfied condition "success or failure" May 19 14:13:20.807: INFO: Trying to get logs from node iruya-worker2 pod pod-868118b9-66d7-4355-85e8-0116a08cbf52 container test-container: STEP: delete the pod May 19 14:13:20.844: INFO: Waiting for pod pod-868118b9-66d7-4355-85e8-0116a08cbf52 to disappear May 19 14:13:20.856: INFO: Pod pod-868118b9-66d7-4355-85e8-0116a08cbf52 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:13:20.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1590" for this suite. May 19 14:13:26.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:13:26.962: INFO: namespace emptydir-1590 deletion completed in 6.102961036s • [SLOW TEST:10.246 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:13:26.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 19 14:13:27.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-226' May 19 14:13:27.171: INFO: stderr: "" May 19 14:13:27.171: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 May 19 14:13:27.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-226' May 19 14:13:41.868: INFO: stderr: "" May 19 14:13:41.868: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:13:41.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-226" for this suite. 
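The `kubectl run` invocation above can be run by hand. With `--restart=Never` and the `run-pod/v1` generator (current for the v1.15 client used in this run; the generator flag is deprecated in later releases), kubectl creates a bare Pod rather than a Deployment or Job. The namespace is the test's auto-generated one and is only illustrative here.

```shell
# Create a standalone Pod from an image, as the test does.
kubectl run e2e-test-nginx-pod \
  --restart=Never \
  --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=kubectl-226

# Verify that the created object is a Pod, then clean up.
kubectl get pod e2e-test-nginx-pod --namespace=kubectl-226
kubectl delete pod e2e-test-nginx-pod --namespace=kubectl-226
```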
May 19 14:13:47.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:13:47.955: INFO: namespace kubectl-226 deletion completed in 6.081658114s • [SLOW TEST:20.992 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:13:47.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 19 14:13:48.044: INFO: Waiting up to 5m0s for pod "downward-api-bac47a41-258f-4765-8759-976a060fb651" in namespace "downward-api-2235" to be "success or failure" May 19 14:13:48.048: INFO: Pod "downward-api-bac47a41-258f-4765-8759-976a060fb651": Phase="Pending", Reason="", readiness=false. Elapsed: 3.941498ms May 19 14:13:50.052: INFO: Pod "downward-api-bac47a41-258f-4765-8759-976a060fb651": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008080559s May 19 14:13:52.057: INFO: Pod "downward-api-bac47a41-258f-4765-8759-976a060fb651": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012684963s STEP: Saw pod success May 19 14:13:52.057: INFO: Pod "downward-api-bac47a41-258f-4765-8759-976a060fb651" satisfied condition "success or failure" May 19 14:13:52.060: INFO: Trying to get logs from node iruya-worker pod downward-api-bac47a41-258f-4765-8759-976a060fb651 container dapi-container: STEP: delete the pod May 19 14:13:52.080: INFO: Waiting for pod downward-api-bac47a41-258f-4765-8759-976a060fb651 to disappear May 19 14:13:52.113: INFO: Pod downward-api-bac47a41-258f-4765-8759-976a060fb651 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:13:52.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2235" for this suite. May 19 14:13:58.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:13:58.211: INFO: namespace downward-api-2235 deletion completed in 6.094547868s • [SLOW TEST:10.256 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:13:58.211: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components May 19 14:13:58.260: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 19 14:13:58.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-658' May 19 14:13:58.578: INFO: stderr: "" May 19 14:13:58.578: INFO: stdout: "service/redis-slave created\n" May 19 14:13:58.579: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 19 14:13:58.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-658' May 19 14:13:58.867: INFO: stderr: "" May 19 14:13:58.867: INFO: stdout: "service/redis-master created\n" May 19 14:13:58.867: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 19 14:13:58.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-658' May 19 14:13:59.165: INFO: stderr: "" May 19 14:13:59.165: INFO: stdout: "service/frontend created\n" May 19 14:13:59.165: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 19 14:13:59.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-658' May 19 14:13:59.435: INFO: stderr: "" May 19 14:13:59.435: INFO: stdout: "deployment.apps/frontend created\n" May 19 14:13:59.436: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 19 14:13:59.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-658' May 19 14:13:59.727: INFO: stderr: "" May 19 14:13:59.727: INFO: stdout: "deployment.apps/redis-master created\n" May 19 14:13:59.727: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: 
metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 19 14:13:59.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-658' May 19 14:14:00.003: INFO: stderr: "" May 19 14:14:00.003: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app May 19 14:14:00.003: INFO: Waiting for all frontend pods to be Running. May 19 14:14:10.053: INFO: Waiting for frontend to serve content. May 19 14:14:10.071: INFO: Trying to add a new entry to the guestbook. May 19 14:14:10.105: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 19 14:14:10.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-658' May 19 14:14:10.254: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 14:14:10.255: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 19 14:14:10.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-658' May 19 14:14:10.408: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 19 14:14:10.408: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 19 14:14:10.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-658' May 19 14:14:10.523: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 14:14:10.523: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 19 14:14:10.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-658' May 19 14:14:10.647: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 14:14:10.647: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 19 14:14:10.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-658' May 19 14:14:10.747: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 14:14:10.747: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 19 14:14:10.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-658' May 19 14:14:10.838: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 19 14:14:10.838: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:14:10.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-658" for this suite. May 19 14:14:52.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:14:52.961: INFO: namespace kubectl-658 deletion completed in 42.119911089s • [SLOW TEST:54.750 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:14:52.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc May 19 14:14:52.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8433' May 19 14:14:53.259: INFO: stderr: "" May 19 14:14:53.259: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. May 19 14:14:54.265: INFO: Selector matched 1 pods for map[app:redis] May 19 14:14:54.265: INFO: Found 0 / 1 May 19 14:14:55.306: INFO: Selector matched 1 pods for map[app:redis] May 19 14:14:55.306: INFO: Found 0 / 1 May 19 14:14:56.264: INFO: Selector matched 1 pods for map[app:redis] May 19 14:14:56.264: INFO: Found 0 / 1 May 19 14:14:57.263: INFO: Selector matched 1 pods for map[app:redis] May 19 14:14:57.263: INFO: Found 1 / 1 May 19 14:14:57.263: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 19 14:14:57.273: INFO: Selector matched 1 pods for map[app:redis] May 19 14:14:57.273: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 19 14:14:57.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jjrzx redis-master --namespace=kubectl-8433' May 19 14:14:57.379: INFO: stderr: "" May 19 14:14:57.379: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 19 May 14:14:56.374 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 May 14:14:56.374 # Server started, Redis version 3.2.12\n1:M 19 May 14:14:56.374 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 May 14:14:56.374 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 19 14:14:57.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jjrzx redis-master --namespace=kubectl-8433 --tail=1' May 19 14:14:57.477: INFO: stderr: "" May 19 14:14:57.477: INFO: stdout: "1:M 19 May 14:14:56.374 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 19 14:14:57.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jjrzx redis-master --namespace=kubectl-8433 --limit-bytes=1' May 19 14:14:57.608: INFO: stderr: "" May 19 14:14:57.608: INFO: stdout: " " STEP: exposing timestamps May 19 14:14:57.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jjrzx redis-master --namespace=kubectl-8433 --tail=1 --timestamps' May 19 14:14:57.712: INFO: stderr: "" May 19 14:14:57.712: 
INFO: stdout: "2020-05-19T14:14:56.375192492Z 1:M 19 May 14:14:56.374 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 19 14:15:00.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jjrzx redis-master --namespace=kubectl-8433 --since=1s' May 19 14:15:00.322: INFO: stderr: "" May 19 14:15:00.323: INFO: stdout: "" May 19 14:15:00.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jjrzx redis-master --namespace=kubectl-8433 --since=24h' May 19 14:15:00.432: INFO: stderr: "" May 19 14:15:00.432: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 19 May 14:14:56.374 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 May 14:14:56.374 # Server started, Redis version 3.2.12\n1:M 19 May 14:14:56.374 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 19 May 14:14:56.374 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources May 19 14:15:00.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8433' May 19 14:15:00.529: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 14:15:00.529: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 19 14:15:00.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-8433' May 19 14:15:00.615: INFO: stderr: "No resources found.\n" May 19 14:15:00.615: INFO: stdout: "" May 19 14:15:00.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-8433 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 19 14:15:00.711: INFO: stderr: "" May 19 14:15:00.711: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:15:00.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8433" for this suite. 
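The log-filtering flags exercised by the "Kubectl logs" test above are plain `kubectl logs` options and can be tried against any running pod. The pod and container names below are the ones from this run and exist only in its throwaway namespace, so substitute your own.

```shell
POD=redis-master-jjrzx        # pod name from the log (illustrative)
CONTAINER=redis-master        # container within that pod
NS=kubectl-8433               # the test's auto-generated namespace

kubectl logs "$POD" "$CONTAINER" -n "$NS" --tail=1          # last line only
kubectl logs "$POD" "$CONTAINER" -n "$NS" --limit-bytes=1   # first byte only
kubectl logs "$POD" "$CONTAINER" -n "$NS" --tail=1 --timestamps  # RFC3339 timestamp prefix
kubectl logs "$POD" "$CONTAINER" -n "$NS" --since=1s        # entries from the last second
kubectl logs "$POD" "$CONTAINER" -n "$NS" --since=24h       # entries from the last day
```

Note the `--since=1s` case returns empty output in the test because the Redis server had logged nothing in the preceding second, while `--since=24h` returns the full startup banner.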
May 19 14:15:22.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:15:22.802: INFO: namespace kubectl-8433 deletion completed in 22.086700221s • [SLOW TEST:29.840 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:15:22.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-2lvf STEP: Creating a pod to test atomic-volume-subpath May 19 14:15:22.890: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2lvf" in namespace "subpath-3419" to be "success or failure" May 19 14:15:22.915: INFO: Pod "pod-subpath-test-configmap-2lvf": Phase="Pending", Reason="", 
readiness=false. Elapsed: 25.080715ms May 19 14:15:24.919: INFO: Pod "pod-subpath-test-configmap-2lvf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029090364s May 19 14:15:26.924: INFO: Pod "pod-subpath-test-configmap-2lvf": Phase="Running", Reason="", readiness=true. Elapsed: 4.033953964s May 19 14:15:28.928: INFO: Pod "pod-subpath-test-configmap-2lvf": Phase="Running", Reason="", readiness=true. Elapsed: 6.038759933s May 19 14:15:30.933: INFO: Pod "pod-subpath-test-configmap-2lvf": Phase="Running", Reason="", readiness=true. Elapsed: 8.043284969s May 19 14:15:32.938: INFO: Pod "pod-subpath-test-configmap-2lvf": Phase="Running", Reason="", readiness=true. Elapsed: 10.048505632s May 19 14:15:34.943: INFO: Pod "pod-subpath-test-configmap-2lvf": Phase="Running", Reason="", readiness=true. Elapsed: 12.053023184s May 19 14:15:36.947: INFO: Pod "pod-subpath-test-configmap-2lvf": Phase="Running", Reason="", readiness=true. Elapsed: 14.057880974s May 19 14:15:38.952: INFO: Pod "pod-subpath-test-configmap-2lvf": Phase="Running", Reason="", readiness=true. Elapsed: 16.062098191s May 19 14:15:40.995: INFO: Pod "pod-subpath-test-configmap-2lvf": Phase="Running", Reason="", readiness=true. Elapsed: 18.105736775s May 19 14:15:43.000: INFO: Pod "pod-subpath-test-configmap-2lvf": Phase="Running", Reason="", readiness=true. Elapsed: 20.110413869s May 19 14:15:45.004: INFO: Pod "pod-subpath-test-configmap-2lvf": Phase="Running", Reason="", readiness=true. Elapsed: 22.114614089s May 19 14:15:47.009: INFO: Pod "pod-subpath-test-configmap-2lvf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.119629709s STEP: Saw pod success May 19 14:15:47.009: INFO: Pod "pod-subpath-test-configmap-2lvf" satisfied condition "success or failure" May 19 14:15:47.013: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-2lvf container test-container-subpath-configmap-2lvf: STEP: delete the pod May 19 14:15:47.038: INFO: Waiting for pod pod-subpath-test-configmap-2lvf to disappear May 19 14:15:47.046: INFO: Pod pod-subpath-test-configmap-2lvf no longer exists STEP: Deleting pod pod-subpath-test-configmap-2lvf May 19 14:15:47.046: INFO: Deleting pod "pod-subpath-test-configmap-2lvf" in namespace "subpath-3419" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:15:47.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3419" for this suite. May 19 14:15:53.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:15:53.179: INFO: namespace subpath-3419 deletion completed in 6.128675955s • [SLOW TEST:30.377 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
STEP: Creating a kubernetes client
May 19 14:15:53.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-1a2f7da6-2d27-4d97-9431-df087ff9f212
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-1a2f7da6-2d27-4d97-9431-df087ff9f212
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:15:59.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7600" for this suite.
May 19 14:16:21.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:16:21.435: INFO: namespace projected-7600 deletion completed in 22.09314767s

• [SLOW TEST:28.256 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:16:21.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3018
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 19 14:16:21.495: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 19 14:16:49.640: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.15:8080/dial?request=hostName&protocol=http&host=10.244.2.168&port=8080&tries=1'] Namespace:pod-network-test-3018 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 14:16:49.640: INFO: >>> kubeConfig: /root/.kube/config
I0519 14:16:49.670915 6 log.go:172] (0xc000ac3970) (0xc00247aa00) Create stream
I0519 14:16:49.670948 6 log.go:172] (0xc000ac3970) (0xc00247aa00) Stream added, broadcasting: 1
I0519 14:16:49.673312 6 log.go:172] (0xc000ac3970) Reply frame received for 1
I0519 14:16:49.673366 6 log.go:172] (0xc000ac3970) (0xc0030c2500) Create stream
I0519 14:16:49.673385 6 log.go:172] (0xc000ac3970) (0xc0030c2500) Stream added, broadcasting: 3
I0519 14:16:49.674526 6 log.go:172] (0xc000ac3970) Reply frame received for 3
I0519 14:16:49.674560 6 log.go:172] (0xc000ac3970) (0xc00247aaa0) Create stream
I0519 14:16:49.674572 6 log.go:172] (0xc000ac3970) (0xc00247aaa0) Stream added, broadcasting: 5
I0519 14:16:49.675339 6 log.go:172] (0xc000ac3970) Reply frame received for 5
I0519 14:16:49.767948 6 log.go:172] (0xc000ac3970) Data frame received for 3
I0519 14:16:49.768007 6 log.go:172] (0xc0030c2500) (3) Data frame handling
I0519 14:16:49.768045 6 log.go:172] (0xc0030c2500) (3) Data frame sent
I0519 14:16:49.768904 6 log.go:172] (0xc000ac3970) Data frame received for 5
I0519 14:16:49.768939 6 log.go:172] (0xc00247aaa0) (5) Data frame handling
I0519 14:16:49.768999 6 log.go:172] (0xc000ac3970) Data frame received for 3
I0519 14:16:49.769033 6 log.go:172] (0xc0030c2500) (3) Data frame handling
I0519 14:16:49.771418 6 log.go:172] (0xc000ac3970) Data frame received for 1
I0519 14:16:49.771452 6 log.go:172] (0xc00247aa00) (1) Data frame handling
I0519 14:16:49.771470 6 log.go:172] (0xc00247aa00) (1) Data frame sent
I0519 14:16:49.771493 6 log.go:172] (0xc000ac3970) (0xc00247aa00) Stream removed, broadcasting: 1
I0519 14:16:49.771573 6 log.go:172] (0xc000ac3970) Go away received
I0519 14:16:49.771623 6 log.go:172] (0xc000ac3970) (0xc00247aa00) Stream removed, broadcasting: 1
I0519 14:16:49.771647 6 log.go:172] (0xc000ac3970) (0xc0030c2500) Stream removed, broadcasting: 3
I0519 14:16:49.772069 6 log.go:172] (0xc000ac3970) (0xc00247aaa0) Stream removed, broadcasting: 5
May 19 14:16:49.772: INFO: Waiting for endpoints: map[]
May 19 14:16:49.776: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.15:8080/dial?request=hostName&protocol=http&host=10.244.1.14&port=8080&tries=1'] Namespace:pod-network-test-3018 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 14:16:49.776: INFO: >>> kubeConfig: /root/.kube/config
I0519 14:16:49.800133 6 log.go:172] (0xc001828a50) (0xc00247af00) Create stream
I0519 14:16:49.800155 6 log.go:172] (0xc001828a50) (0xc00247af00) Stream added, broadcasting: 1
I0519 14:16:49.802069 6 log.go:172] (0xc001828a50) Reply frame received for 1
I0519 14:16:49.802105 6 log.go:172] (0xc001828a50) (0xc00247afa0) Create stream
I0519 14:16:49.802119 6 log.go:172] (0xc001828a50) (0xc00247afa0) Stream added, broadcasting: 3
I0519 14:16:49.803044 6 log.go:172] (0xc001828a50) Reply frame received for 3
I0519 14:16:49.803085 6 log.go:172] (0xc001828a50) (0xc0030420a0) Create stream
I0519 14:16:49.803096 6 log.go:172] (0xc001828a50) (0xc0030420a0) Stream added, broadcasting: 5
I0519 14:16:49.803817 6 log.go:172] (0xc001828a50) Reply frame received for 5
I0519 14:16:49.864692 6 log.go:172] (0xc001828a50) Data frame received for 3
I0519 14:16:49.864717 6 log.go:172] (0xc00247afa0) (3) Data frame handling
I0519 14:16:49.864729 6 log.go:172] (0xc00247afa0) (3) Data frame sent
I0519 14:16:49.865398 6 log.go:172] (0xc001828a50) Data frame received for 3
I0519 14:16:49.865415 6 log.go:172] (0xc00247afa0) (3) Data frame handling
I0519 14:16:49.865507 6 log.go:172] (0xc001828a50) Data frame received for 5
I0519 14:16:49.865519 6 log.go:172] (0xc0030420a0) (5) Data frame handling
I0519 14:16:49.867236 6 log.go:172] (0xc001828a50) Data frame received for 1
I0519 14:16:49.867247 6 log.go:172] (0xc00247af00) (1) Data frame handling
I0519 14:16:49.867257 6 log.go:172] (0xc00247af00) (1) Data frame sent
I0519 14:16:49.867269 6 log.go:172] (0xc001828a50) (0xc00247af00) Stream removed, broadcasting: 1
I0519 14:16:49.867298 6 log.go:172] (0xc001828a50) Go away received
I0519 14:16:49.867371 6 log.go:172] (0xc001828a50) (0xc00247af00) Stream removed, broadcasting: 1
I0519 14:16:49.867403 6 log.go:172] (0xc001828a50) (0xc00247afa0) Stream removed, broadcasting: 3
I0519 14:16:49.867424 6 log.go:172] (0xc001828a50) (0xc0030420a0) Stream removed, broadcasting: 5
May 19 14:16:49.867: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:16:49.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3018" for this suite.
May 19 14:17:11.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:17:11.967: INFO: namespace pod-network-test-3018 deletion completed in 22.096132507s

• [SLOW TEST:50.530 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:17:11.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-1ca38f26-f57d-47a3-bbc4-5e5410a5e5fe
STEP: Creating a pod to test consume secrets
May 19 14:17:12.317: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-62962101-6a36-457a-a7a4-29b5dd85f178" in namespace "projected-3734" to be "success or failure"
May 19 14:17:12.362: INFO: Pod "pod-projected-secrets-62962101-6a36-457a-a7a4-29b5dd85f178": Phase="Pending", Reason="", readiness=false. Elapsed: 45.216675ms
May 19 14:17:14.366: INFO: Pod "pod-projected-secrets-62962101-6a36-457a-a7a4-29b5dd85f178": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049095852s
May 19 14:17:16.370: INFO: Pod "pod-projected-secrets-62962101-6a36-457a-a7a4-29b5dd85f178": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053619974s
STEP: Saw pod success
May 19 14:17:16.371: INFO: Pod "pod-projected-secrets-62962101-6a36-457a-a7a4-29b5dd85f178" satisfied condition "success or failure"
May 19 14:17:16.374: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-62962101-6a36-457a-a7a4-29b5dd85f178 container secret-volume-test:
STEP: delete the pod
May 19 14:17:16.657: INFO: Waiting for pod pod-projected-secrets-62962101-6a36-457a-a7a4-29b5dd85f178 to disappear
May 19 14:17:16.662: INFO: Pod pod-projected-secrets-62962101-6a36-457a-a7a4-29b5dd85f178 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:17:16.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3734" for this suite.
May 19 14:17:22.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:17:22.788: INFO: namespace projected-3734 deletion completed in 6.122042799s

• [SLOW TEST:10.821 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:17:22.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
May 19 14:17:26.908: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
May 19 14:17:32.006: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:17:32.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6578" for this suite.
May 19 14:17:38.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:17:38.108: INFO: namespace pods-6578 deletion completed in 6.09308502s

• [SLOW TEST:15.319 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:17:38.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
May 19 14:17:38.187: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 19 14:17:38.195: INFO: Waiting for terminating namespaces to be deleted...
May 19 14:17:38.197: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
May 19 14:17:38.201: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 19 14:17:38.201: INFO: Container kube-proxy ready: true, restart count 0
May 19 14:17:38.201: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 19 14:17:38.201: INFO: Container kindnet-cni ready: true, restart count 0
May 19 14:17:38.201: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
May 19 14:17:38.205: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
May 19 14:17:38.205: INFO: Container kube-proxy ready: true, restart count 0
May 19 14:17:38.205: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
May 19 14:17:38.205: INFO: Container kindnet-cni ready: true, restart count 0
May 19 14:17:38.205: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
May 19 14:17:38.205: INFO: Container coredns ready: true, restart count 0
May 19 14:17:38.205: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
May 19 14:17:38.205: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.161073adac646948], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:17:39.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3314" for this suite.
May 19 14:17:45.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:17:45.318: INFO: namespace sched-pred-3314 deletion completed in 6.091652339s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.210 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:17:45.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-13b1c10a-dabf-4398-9305-8c66366b21a4
STEP: Creating a pod to test consume secrets
May 19 14:17:45.399: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-deb1d72c-38fd-49b0-9221-a1e907ed99d4" in namespace "projected-7081" to be "success or failure"
May 19 14:17:45.402: INFO: Pod "pod-projected-secrets-deb1d72c-38fd-49b0-9221-a1e907ed99d4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.3866ms
May 19 14:17:47.470: INFO: Pod "pod-projected-secrets-deb1d72c-38fd-49b0-9221-a1e907ed99d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070970335s
May 19 14:17:49.473: INFO: Pod "pod-projected-secrets-deb1d72c-38fd-49b0-9221-a1e907ed99d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073874249s
STEP: Saw pod success
May 19 14:17:49.473: INFO: Pod "pod-projected-secrets-deb1d72c-38fd-49b0-9221-a1e907ed99d4" satisfied condition "success or failure"
May 19 14:17:49.475: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-deb1d72c-38fd-49b0-9221-a1e907ed99d4 container projected-secret-volume-test:
STEP: delete the pod
May 19 14:17:49.885: INFO: Waiting for pod pod-projected-secrets-deb1d72c-38fd-49b0-9221-a1e907ed99d4 to disappear
May 19 14:17:49.896: INFO: Pod pod-projected-secrets-deb1d72c-38fd-49b0-9221-a1e907ed99d4 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:17:49.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7081" for this suite.
May 19 14:17:55.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:17:55.987: INFO: namespace projected-7081 deletion completed in 6.08787269s

• [SLOW TEST:10.668 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:17:55.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1242
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 19 14:17:56.112: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 19 14:18:18.203: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.171:8080/dial?request=hostName&protocol=udp&host=10.244.1.18&port=8081&tries=1'] Namespace:pod-network-test-1242 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 14:18:18.204: INFO: >>> kubeConfig: /root/.kube/config
I0519 14:18:18.241920 6 log.go:172] (0xc002e44d10) (0xc002db0640) Create stream
I0519 14:18:18.242007 6 log.go:172] (0xc002e44d10) (0xc002db0640) Stream added, broadcasting: 1
I0519 14:18:18.244759 6 log.go:172] (0xc002e44d10) Reply frame received for 1
I0519 14:18:18.244787 6 log.go:172] (0xc002e44d10) (0xc00247b5e0) Create stream
I0519 14:18:18.244797 6 log.go:172] (0xc002e44d10) (0xc00247b5e0) Stream added, broadcasting: 3
I0519 14:18:18.246156 6 log.go:172] (0xc002e44d10) Reply frame received for 3
I0519 14:18:18.246196 6 log.go:172] (0xc002e44d10) (0xc00247b680) Create stream
I0519 14:18:18.246228 6 log.go:172] (0xc002e44d10) (0xc00247b680) Stream added, broadcasting: 5
I0519 14:18:18.247295 6 log.go:172] (0xc002e44d10) Reply frame received for 5
I0519 14:18:18.310503 6 log.go:172] (0xc002e44d10) Data frame received for 3
I0519 14:18:18.310536 6 log.go:172] (0xc00247b5e0) (3) Data frame handling
I0519 14:18:18.310550 6 log.go:172] (0xc00247b5e0) (3) Data frame sent
I0519 14:18:18.311433 6 log.go:172] (0xc002e44d10) Data frame received for 5
I0519 14:18:18.311470 6 log.go:172] (0xc00247b680) (5) Data frame handling
I0519 14:18:18.311679 6 log.go:172] (0xc002e44d10) Data frame received for 3
I0519 14:18:18.311692 6 log.go:172] (0xc00247b5e0) (3) Data frame handling
I0519 14:18:18.313665 6 log.go:172] (0xc002e44d10) Data frame received for 1
I0519 14:18:18.313686 6 log.go:172] (0xc002db0640) (1) Data frame handling
I0519 14:18:18.313734 6 log.go:172] (0xc002db0640) (1) Data frame sent
I0519 14:18:18.313936 6 log.go:172] (0xc002e44d10) (0xc002db0640) Stream removed, broadcasting: 1
I0519 14:18:18.314056 6 log.go:172] (0xc002e44d10) (0xc002db0640) Stream removed, broadcasting: 1
I0519 14:18:18.315618 6 log.go:172] (0xc002e44d10) (0xc00247b5e0) Stream removed, broadcasting: 3
I0519 14:18:18.315662 6 log.go:172] (0xc002e44d10) (0xc00247b680) Stream removed, broadcasting: 5
May 19 14:18:18.315: INFO: Waiting for endpoints: map[]
I0519 14:18:18.315742 6 log.go:172] (0xc002e44d10) Go away received
May 19 14:18:18.319: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.171:8080/dial?request=hostName&protocol=udp&host=10.244.2.170&port=8081&tries=1'] Namespace:pod-network-test-1242 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 14:18:18.319: INFO: >>> kubeConfig: /root/.kube/config
I0519 14:18:18.351085 6 log.go:172] (0xc002ebe370) (0xc00247bae0) Create stream
I0519 14:18:18.351123 6 log.go:172] (0xc002ebe370) (0xc00247bae0) Stream added, broadcasting: 1
I0519 14:18:18.353971 6 log.go:172] (0xc002ebe370) Reply frame received for 1
I0519 14:18:18.354006 6 log.go:172] (0xc002ebe370) (0xc00331c820) Create stream
I0519 14:18:18.354018 6 log.go:172] (0xc002ebe370) (0xc00331c820) Stream added, broadcasting: 3
I0519 14:18:18.354998 6 log.go:172] (0xc002ebe370) Reply frame received for 3
I0519 14:18:18.355046 6 log.go:172] (0xc002ebe370) (0xc00247bb80) Create stream
I0519 14:18:18.355071 6 log.go:172] (0xc002ebe370) (0xc00247bb80) Stream added, broadcasting: 5
I0519 14:18:18.356165 6 log.go:172] (0xc002ebe370) Reply frame received for 5
I0519 14:18:18.427640 6 log.go:172] (0xc002ebe370) Data frame received for 3
I0519 14:18:18.427672 6 log.go:172] (0xc00331c820) (3) Data frame handling
I0519 14:18:18.427688 6 log.go:172] (0xc00331c820) (3) Data frame sent
I0519 14:18:18.428535 6 log.go:172] (0xc002ebe370) Data frame received for 3
I0519 14:18:18.428558 6 log.go:172] (0xc00331c820) (3) Data frame handling
I0519 14:18:18.428683 6 log.go:172] (0xc002ebe370) Data frame received for 5
I0519 14:18:18.428712 6 log.go:172] (0xc00247bb80) (5) Data frame handling
I0519 14:18:18.430376 6 log.go:172] (0xc002ebe370) Data frame received for 1
I0519 14:18:18.430398 6 log.go:172] (0xc00247bae0) (1) Data frame handling
I0519 14:18:18.430423 6 log.go:172] (0xc00247bae0) (1) Data frame sent
I0519 14:18:18.430445 6 log.go:172] (0xc002ebe370) (0xc00247bae0) Stream removed, broadcasting: 1
I0519 14:18:18.430464 6 log.go:172] (0xc002ebe370) Go away received
I0519 14:18:18.430618 6 log.go:172] (0xc002ebe370) (0xc00247bae0) Stream removed, broadcasting: 1
I0519 14:18:18.430646 6 log.go:172] (0xc002ebe370) (0xc00331c820) Stream removed, broadcasting: 3
I0519 14:18:18.430680 6 log.go:172] (0xc002ebe370) (0xc00247bb80) Stream removed, broadcasting: 5
May 19 14:18:18.430: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:18:18.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1242" for this suite.
May 19 14:18:40.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:18:40.532: INFO: namespace pod-network-test-1242 deletion completed in 22.097568315s

• [SLOW TEST:44.545 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:18:40.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-920ddcd4-99c6-44d5-9856-4e1725582041
STEP: Creating a pod to test consume secrets
May 19 14:18:40.632: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-19f17b73-323f-4fa3-8537-d815ba1ad50e" in namespace "projected-6759" to be "success or failure"
May 19 14:18:40.651: INFO: Pod "pod-projected-secrets-19f17b73-323f-4fa3-8537-d815ba1ad50e": Phase="Pending", Reason="", readiness=false. Elapsed: 19.402828ms
May 19 14:18:42.656: INFO: Pod "pod-projected-secrets-19f17b73-323f-4fa3-8537-d815ba1ad50e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024016297s
May 19 14:18:44.661: INFO: Pod "pod-projected-secrets-19f17b73-323f-4fa3-8537-d815ba1ad50e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029145839s
STEP: Saw pod success
May 19 14:18:44.661: INFO: Pod "pod-projected-secrets-19f17b73-323f-4fa3-8537-d815ba1ad50e" satisfied condition "success or failure"
May 19 14:18:44.664: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-19f17b73-323f-4fa3-8537-d815ba1ad50e container projected-secret-volume-test:
STEP: delete the pod
May 19 14:18:44.735: INFO: Waiting for pod pod-projected-secrets-19f17b73-323f-4fa3-8537-d815ba1ad50e to disappear
May 19 14:18:44.771: INFO: Pod pod-projected-secrets-19f17b73-323f-4fa3-8537-d815ba1ad50e no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:18:44.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6759" for this suite.
May 19 14:18:50.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:18:50.912: INFO: namespace projected-6759 deletion completed in 6.137562419s

• [SLOW TEST:10.380 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:18:50.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-255d137a-4d26-4990-bd03-ee89bc349c76
May 19 14:18:51.082: INFO: Pod name my-hostname-basic-255d137a-4d26-4990-bd03-ee89bc349c76: Found 0 pods out of 1
May 19 14:18:56.087: INFO: Pod name my-hostname-basic-255d137a-4d26-4990-bd03-ee89bc349c76: Found 1 pods out of 1
May 19 14:18:56.087: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-255d137a-4d26-4990-bd03-ee89bc349c76" are running
May 19 14:18:56.090: INFO: Pod "my-hostname-basic-255d137a-4d26-4990-bd03-ee89bc349c76-zsk8j" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 14:18:51 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 14:18:54 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 14:18:54 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 14:18:51 +0000 UTC Reason: Message:}])
May 19 14:18:56.090: INFO: Trying to dial the pod
May 19 14:19:01.103: INFO: Controller my-hostname-basic-255d137a-4d26-4990-bd03-ee89bc349c76: Got expected result from replica 1 [my-hostname-basic-255d137a-4d26-4990-bd03-ee89bc349c76-zsk8j]: "my-hostname-basic-255d137a-4d26-4990-bd03-ee89bc349c76-zsk8j", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:19:01.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9725" for this suite.
May 19 14:19:07.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:19:07.195: INFO: namespace replication-controller-9725 deletion completed in 6.088916658s

• [SLOW TEST:16.282 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:19:07.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
May 19 14:19:07.238: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 19 14:19:07.247: INFO: Waiting for terminating namespaces to be deleted...
May 19 14:19:07.250: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 19 14:19:07.255: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 19 14:19:07.255: INFO: Container kube-proxy ready: true, restart count 0 May 19 14:19:07.255: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 19 14:19:07.255: INFO: Container kindnet-cni ready: true, restart count 0 May 19 14:19:07.255: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 19 14:19:07.261: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 19 14:19:07.261: INFO: Container kindnet-cni ready: true, restart count 0 May 19 14:19:07.261: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 19 14:19:07.261: INFO: Container kube-proxy ready: true, restart count 0 May 19 14:19:07.261: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 19 14:19:07.261: INFO: Container coredns ready: true, restart count 0 May 19 14:19:07.261: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 19 14:19:07.261: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 May 19 14:19:07.346: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 May 19 14:19:07.346: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 May 19 14:19:07.346: INFO: Pod 
kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker May 19 14:19:07.346: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 May 19 14:19:07.346: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker May 19 14:19:07.346: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-261fb773-240f-4706-b74a-421be0e8970e.161073c26d5761b1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7735/filler-pod-261fb773-240f-4706-b74a-421be0e8970e to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-261fb773-240f-4706-b74a-421be0e8970e.161073c2e7e2840f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-261fb773-240f-4706-b74a-421be0e8970e.161073c3378a9bf8], Reason = [Created], Message = [Created container filler-pod-261fb773-240f-4706-b74a-421be0e8970e] STEP: Considering event: Type = [Normal], Name = [filler-pod-261fb773-240f-4706-b74a-421be0e8970e.161073c3485a4e56], Reason = [Started], Message = [Started container filler-pod-261fb773-240f-4706-b74a-421be0e8970e] STEP: Considering event: Type = [Normal], Name = [filler-pod-daf08a1b-b58b-45bf-8af5-ee2ba7c28d01.161073c2703b8a65], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7735/filler-pod-daf08a1b-b58b-45bf-8af5-ee2ba7c28d01 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-daf08a1b-b58b-45bf-8af5-ee2ba7c28d01.161073c2f7902a0c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-daf08a1b-b58b-45bf-8af5-ee2ba7c28d01.161073c3338c183d], Reason = [Created], Message = [Created 
container filler-pod-daf08a1b-b58b-45bf-8af5-ee2ba7c28d01] STEP: Considering event: Type = [Normal], Name = [filler-pod-daf08a1b-b58b-45bf-8af5-ee2ba7c28d01.161073c3435aaca6], Reason = [Started], Message = [Started container filler-pod-daf08a1b-b58b-45bf-8af5-ee2ba7c28d01] STEP: Considering event: Type = [Warning], Name = [additional-pod.161073c3d72ad974], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:19:14.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7735" for this suite. 
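The "Insufficient cpu" failure above is the scheduler's CPU-fit predicate at work. A minimal sketch of that check, with illustrative allocatable/request values (the real node capacities are not shown in this log):

```python
# Sketch of the CPU "fit" predicate this test exercises. The scheduler sums
# existing CPU requests on a node and rejects a pod whose request would push
# the total past the node's allocatable CPU. Values are assumptions.

def cpu_fits(allocatable_m, existing_requests_m, new_request_m):
    """Return True if a pod requesting new_request_m millicores fits."""
    return sum(existing_requests_m) + new_request_m <= allocatable_m

# Before the filler pods, the kube-system pods above request little CPU:
assert cpu_fits(1000, [100, 0], 100)
# After a filler pod consumes most of the node: "Insufficient cpu".
assert not cpu_fits(1000, [100, 0, 850], 100)
```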
May 19 14:19:22.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:19:22.681: INFO: namespace sched-pred-7735 deletion completed in 8.104112896s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:15.485 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:19:22.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-2048c663-b8c5-4f41-a9f6-8ae043e27760 STEP: Creating a pod to test consume configMaps May 19 14:19:22.756: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d773cd26-8262-4a16-8efc-b9b84d4cec51" in namespace "projected-2991" to be "success or failure" May 19 14:19:22.788: INFO: Pod 
"pod-projected-configmaps-d773cd26-8262-4a16-8efc-b9b84d4cec51": Phase="Pending", Reason="", readiness=false. Elapsed: 31.989123ms May 19 14:19:24.792: INFO: Pod "pod-projected-configmaps-d773cd26-8262-4a16-8efc-b9b84d4cec51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035903045s May 19 14:19:26.796: INFO: Pod "pod-projected-configmaps-d773cd26-8262-4a16-8efc-b9b84d4cec51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039253983s STEP: Saw pod success May 19 14:19:26.796: INFO: Pod "pod-projected-configmaps-d773cd26-8262-4a16-8efc-b9b84d4cec51" satisfied condition "success or failure" May 19 14:19:26.798: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-d773cd26-8262-4a16-8efc-b9b84d4cec51 container projected-configmap-volume-test: STEP: delete the pod May 19 14:19:26.993: INFO: Waiting for pod pod-projected-configmaps-d773cd26-8262-4a16-8efc-b9b84d4cec51 to disappear May 19 14:19:26.996: INFO: Pod pod-projected-configmaps-d773cd26-8262-4a16-8efc-b9b84d4cec51 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:19:26.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2991" for this suite. 
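The test above consumes a projected configMap volume from a non-root container. A hedged sketch of the shape of pod spec involved, using the configMap name from the log (the uid and mount path are illustrative assumptions):

```python
# Sketch of a pod spec like the one this test creates: a projected configMap
# volume consumed by a container running as a non-root user. Field names
# follow the Kubernetes API; runAsUser and mountPath are assumed values.

pod_spec = {
    "securityContext": {"runAsUser": 1000},  # non-root (assumed uid)
    "containers": [{
        "name": "projected-configmap-volume-test",
        "volumeMounts": [{
            "name": "projected-configmap-volume",
            "mountPath": "/etc/projected-configmap-volume",  # assumed path
        }],
    }],
    "volumes": [{
        "name": "projected-configmap-volume",
        "projected": {"sources": [{"configMap": {
            "name": "projected-configmap-test-volume-2048c663-b8c5-4f41-a9f6-8ae043e27760",
        }}]},
    }],
}

# Every container mount must refer to a declared volume:
declared = {v["name"] for v in pod_spec["volumes"]}
assert all(m["name"] in declared
           for c in pod_spec["containers"] for m in c["volumeMounts"])
```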
May 19 14:19:33.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:19:33.111: INFO: namespace projected-2991 deletion completed in 6.106452129s • [SLOW TEST:10.430 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:19:33.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-6230bd82-56f6-4c6a-bf70-ef82c5fc86c9 STEP: Creating a pod to test consume secrets May 19 14:19:33.204: INFO: Waiting up to 5m0s for pod "pod-secrets-29c633bf-169d-4b3c-bf9c-cfaef283cd8e" in namespace "secrets-6144" to be "success or failure" May 19 14:19:33.220: INFO: Pod "pod-secrets-29c633bf-169d-4b3c-bf9c-cfaef283cd8e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.688541ms May 19 14:19:35.223: INFO: Pod "pod-secrets-29c633bf-169d-4b3c-bf9c-cfaef283cd8e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01914371s May 19 14:19:37.227: INFO: Pod "pod-secrets-29c633bf-169d-4b3c-bf9c-cfaef283cd8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023334054s STEP: Saw pod success May 19 14:19:37.227: INFO: Pod "pod-secrets-29c633bf-169d-4b3c-bf9c-cfaef283cd8e" satisfied condition "success or failure" May 19 14:19:37.229: INFO: Trying to get logs from node iruya-worker pod pod-secrets-29c633bf-169d-4b3c-bf9c-cfaef283cd8e container secret-volume-test: STEP: delete the pod May 19 14:19:37.287: INFO: Waiting for pod pod-secrets-29c633bf-169d-4b3c-bf9c-cfaef283cd8e to disappear May 19 14:19:37.291: INFO: Pod pod-secrets-29c633bf-169d-4b3c-bf9c-cfaef283cd8e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:19:37.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6144" for this suite. May 19 14:19:43.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:19:43.540: INFO: namespace secrets-6144 deletion completed in 6.246812425s • [SLOW TEST:10.430 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:19:43.541: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-c2609333-f534-4ab2-b113-a32c64efba26 STEP: Creating a pod to test consume configMaps May 19 14:19:43.624: INFO: Waiting up to 5m0s for pod "pod-configmaps-bf40e3f9-3de4-40f4-9dc7-dbf995c67ca6" in namespace "configmap-5157" to be "success or failure" May 19 14:19:43.643: INFO: Pod "pod-configmaps-bf40e3f9-3de4-40f4-9dc7-dbf995c67ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.740074ms May 19 14:19:45.647: INFO: Pod "pod-configmaps-bf40e3f9-3de4-40f4-9dc7-dbf995c67ca6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022873818s May 19 14:19:47.651: INFO: Pod "pod-configmaps-bf40e3f9-3de4-40f4-9dc7-dbf995c67ca6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026927672s STEP: Saw pod success May 19 14:19:47.651: INFO: Pod "pod-configmaps-bf40e3f9-3de4-40f4-9dc7-dbf995c67ca6" satisfied condition "success or failure" May 19 14:19:47.653: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-bf40e3f9-3de4-40f4-9dc7-dbf995c67ca6 container configmap-volume-test: STEP: delete the pod May 19 14:19:47.688: INFO: Waiting for pod pod-configmaps-bf40e3f9-3de4-40f4-9dc7-dbf995c67ca6 to disappear May 19 14:19:47.735: INFO: Pod pod-configmaps-bf40e3f9-3de4-40f4-9dc7-dbf995c67ca6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:19:47.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5157" for this suite. 
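The defaultMode test above sets file permissions on the mounted configMap keys. In the API, `defaultMode` is an integer that manifests usually write in octal; a small sketch of that mapping (the specific mode here is an assumption, since the log does not show which value the test sets):

```python
# Sketch of how defaultMode on a configMap volume maps to file permissions,
# using the configMap name from the log. 0o400 is an assumed example mode.

volume = {
    "name": "configmap-volume",
    "configMap": {
        "name": "configmap-test-volume-c2609333-f534-4ab2-b113-a32c64efba26",
        "defaultMode": 0o400,  # owner read-only
    },
}

mode = volume["configMap"]["defaultMode"]
assert mode == 256            # 0o400 in decimal, as serialized in JSON
assert oct(mode) == "0o400"
```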
May 19 14:19:53.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:19:53.832: INFO: namespace configmap-5157 deletion completed in 6.093214904s • [SLOW TEST:10.292 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:19:53.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 19 14:20:02.012: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 14:20:02.030: INFO: Pod pod-with-poststart-http-hook still exists May 19 14:20:04.030: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 14:20:04.035: INFO: Pod pod-with-poststart-http-hook still exists May 19 14:20:06.030: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 14:20:06.035: INFO: Pod pod-with-poststart-http-hook still exists May 19 14:20:08.031: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 14:20:08.035: INFO: Pod pod-with-poststart-http-hook still exists May 19 14:20:10.031: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 14:20:10.034: INFO: Pod pod-with-poststart-http-hook still exists May 19 14:20:12.031: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 14:20:12.035: INFO: Pod pod-with-poststart-http-hook still exists May 19 14:20:14.030: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 14:20:14.035: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:20:14.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1069" for this suite. 
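The pod under test carries a postStart httpGet lifecycle hook, fired right after the container starts and pointed at the handler pod created in BeforeEach. A sketch of the container stanza (path and port are illustrative assumptions):

```python
# Sketch of the lifecycle stanza behind "pod-with-poststart-http-hook".
# The httpGet path and port are assumed values, not read from the test.

container = {
    "name": "pod-with-poststart-http-hook",
    "lifecycle": {
        "postStart": {
            "httpGet": {
                "path": "/echo?msg=poststart",  # assumed path
                "port": 8080,                   # assumed port
            }
        }
    },
}

hook = container["lifecycle"]["postStart"]
assert "httpGet" in hook and hook["httpGet"]["port"] == 8080
```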
May 19 14:20:36.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:20:36.142: INFO: namespace container-lifecycle-hook-1069 deletion completed in 22.102241515s • [SLOW TEST:42.309 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:20:36.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 19 14:20:36.210: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 5.435821ms) May 19 14:20:36.227: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 16.248455ms) May 19 14:20:36.230: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.418671ms) May 19 14:20:36.233: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.055437ms) May 19 14:20:36.236: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.782726ms) May 19 14:20:36.239: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.939259ms) May 19 14:20:36.242: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.349937ms) May 19 14:20:36.245: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.79762ms) May 19 14:20:36.248: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.538456ms) May 19 14:20:36.250: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.588058ms) May 19 14:20:36.253: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.675778ms) May 19 14:20:36.256: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.016197ms) May 19 14:20:36.259: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.702781ms) May 19 14:20:36.262: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.817564ms) May 19 14:20:36.264: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.666264ms) May 19 14:20:36.267: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.95058ms) May 19 14:20:36.271: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.196077ms) May 19 14:20:36.274: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.005956ms) May 19 14:20:36.276: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.650158ms) May 19 14:20:36.279: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.9016ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:20:36.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6622" for this suite. May 19 14:20:42.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:20:42.376: INFO: namespace proxy-6622 deletion completed in 6.093148897s • [SLOW TEST:6.234 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:20:42.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 19 14:20:46.987: INFO: Successfully updated pod 
"pod-update-activedeadlineseconds-8d613d04-8fe1-4a28-a756-6fe93ec6f340" May 19 14:20:46.987: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-8d613d04-8fe1-4a28-a756-6fe93ec6f340" in namespace "pods-4100" to be "terminated due to deadline exceeded" May 19 14:20:47.042: INFO: Pod "pod-update-activedeadlineseconds-8d613d04-8fe1-4a28-a756-6fe93ec6f340": Phase="Running", Reason="", readiness=true. Elapsed: 54.830686ms May 19 14:20:49.046: INFO: Pod "pod-update-activedeadlineseconds-8d613d04-8fe1-4a28-a756-6fe93ec6f340": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.059330842s May 19 14:20:49.046: INFO: Pod "pod-update-activedeadlineseconds-8d613d04-8fe1-4a28-a756-6fe93ec6f340" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:20:49.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4100" for this suite. 
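The transition above (Running, then Failed with Reason="DeadlineExceeded") is the semantics this test verifies: once a pod has been active longer than `spec.activeDeadlineSeconds`, the kubelet fails it. A sketch of that check (the deadline value is an assumption; the test shortens it via the update shown above):

```python
# Model of the activeDeadlineSeconds check this test exercises. Once the
# pod's active time exceeds the deadline, it is failed with DeadlineExceeded.

def pod_phase(elapsed_seconds, active_deadline_seconds):
    """Return (phase, reason) for a running pod after elapsed_seconds."""
    if elapsed_seconds > active_deadline_seconds:
        return ("Failed", "DeadlineExceeded")
    return ("Running", "")

assert pod_phase(1, 5) == ("Running", "")
assert pod_phase(6, 5) == ("Failed", "DeadlineExceeded")
```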
May 19 14:20:55.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:20:55.151: INFO: namespace pods-4100 deletion completed in 6.100938995s • [SLOW TEST:12.775 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:20:55.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 19 14:21:03.277: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:03.297: INFO: Pod pod-with-poststart-exec-hook still exists May 19 14:21:05.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:05.302: INFO: Pod pod-with-poststart-exec-hook still exists May 19 14:21:07.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:07.302: INFO: Pod pod-with-poststart-exec-hook still exists May 19 14:21:09.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:09.301: INFO: Pod pod-with-poststart-exec-hook still exists May 19 14:21:11.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:11.302: INFO: Pod pod-with-poststart-exec-hook still exists May 19 14:21:13.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:13.302: INFO: Pod pod-with-poststart-exec-hook still exists May 19 14:21:15.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:15.302: INFO: Pod pod-with-poststart-exec-hook still exists May 19 14:21:17.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:17.302: INFO: Pod pod-with-poststart-exec-hook still exists May 19 14:21:19.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:19.302: INFO: Pod pod-with-poststart-exec-hook still exists May 19 14:21:21.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:21.302: INFO: Pod pod-with-poststart-exec-hook still exists May 19 14:21:23.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:23.318: INFO: Pod 
pod-with-poststart-exec-hook still exists May 19 14:21:25.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:25.302: INFO: Pod pod-with-poststart-exec-hook still exists May 19 14:21:27.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:27.303: INFO: Pod pod-with-poststart-exec-hook still exists May 19 14:21:29.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:29.302: INFO: Pod pod-with-poststart-exec-hook still exists May 19 14:21:31.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:31.302: INFO: Pod pod-with-poststart-exec-hook still exists May 19 14:21:33.298: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 14:21:33.324: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:21:33.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4994" for this suite. 
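The alternating "Waiting for pod ... to disappear" / "still exists" lines above come from a poll-until-gone loop. A sketch with a stubbed getter (the framework polls roughly every 2s up to a timeout; the interval is shortened here):

```python
# Sketch of the poll-until-deleted loop behind the "Waiting for pod ... to
# disappear" log lines, with a stub in place of an API client.

import time

def wait_for_disappear(get_pod, timeout=5.0, interval=0.01):
    """Poll get_pod() until it returns None or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_pod() is None:
            return True          # "Pod ... no longer exists"
        time.sleep(interval)     # "Pod ... still exists"
    return False

# Stub: present on the first poll, gone on the second.
calls = iter([{"name": "pod-with-poststart-exec-hook"}, None])
assert wait_for_disappear(lambda: next(calls))
```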
May 19 14:21:55.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:21:55.415: INFO: namespace container-lifecycle-hook-4994 deletion completed in 22.087229488s • [SLOW TEST:60.263 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:21:55.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 19 14:21:55.492: INFO: Waiting up to 5m0s for pod "pod-af74ac9e-9eea-477d-a174-089dc3f2ae3b" in namespace "emptydir-1759" to be "success or failure" May 19 14:21:55.498: INFO: Pod "pod-af74ac9e-9eea-477d-a174-089dc3f2ae3b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.83054ms May 19 14:21:57.505: INFO: Pod "pod-af74ac9e-9eea-477d-a174-089dc3f2ae3b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012602497s May 19 14:21:59.509: INFO: Pod "pod-af74ac9e-9eea-477d-a174-089dc3f2ae3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016004333s STEP: Saw pod success May 19 14:21:59.509: INFO: Pod "pod-af74ac9e-9eea-477d-a174-089dc3f2ae3b" satisfied condition "success or failure" May 19 14:21:59.511: INFO: Trying to get logs from node iruya-worker pod pod-af74ac9e-9eea-477d-a174-089dc3f2ae3b container test-container: STEP: delete the pod May 19 14:21:59.530: INFO: Waiting for pod pod-af74ac9e-9eea-477d-a174-089dc3f2ae3b to disappear May 19 14:21:59.534: INFO: Pod pod-af74ac9e-9eea-477d-a174-089dc3f2ae3b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:21:59.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1759" for this suite. May 19 14:22:05.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:22:05.660: INFO: namespace emptydir-1759 deletion completed in 6.123821341s • [SLOW TEST:10.245 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:22:05.662: INFO: >>> kubeConfig: 
/root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 19 14:22:05.764: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44d9f988-4b40-44af-9959-67044c1d9a32" in namespace "projected-7938" to be "success or failure"
May 19 14:22:05.768: INFO: Pod "downwardapi-volume-44d9f988-4b40-44af-9959-67044c1d9a32": Phase="Pending", Reason="", readiness=false. Elapsed: 3.234268ms
May 19 14:22:07.771: INFO: Pod "downwardapi-volume-44d9f988-4b40-44af-9959-67044c1d9a32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006456022s
May 19 14:22:09.776: INFO: Pod "downwardapi-volume-44d9f988-4b40-44af-9959-67044c1d9a32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011605591s
STEP: Saw pod success
May 19 14:22:09.776: INFO: Pod "downwardapi-volume-44d9f988-4b40-44af-9959-67044c1d9a32" satisfied condition "success or failure"
May 19 14:22:09.779: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-44d9f988-4b40-44af-9959-67044c1d9a32 container client-container:
STEP: delete the pod
May 19 14:22:09.799: INFO: Waiting for pod downwardapi-volume-44d9f988-4b40-44af-9959-67044c1d9a32 to disappear
May 19 14:22:09.803: INFO: Pod downwardapi-volume-44d9f988-4b40-44af-9959-67044c1d9a32 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:22:09.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7938" for this suite.
May 19 14:22:15.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:22:15.894: INFO: namespace projected-7938 deletion completed in 6.087293201s
• [SLOW TEST:10.232 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:22:15.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0519 14:22:25.998456 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 19 14:22:25.998: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:22:25.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3948" for this suite.
May 19 14:22:32.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:22:32.124: INFO: namespace gc-3948 deletion completed in 6.122279971s
• [SLOW TEST:16.227 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:22:32.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 19 14:22:32.207: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7893369c-2791-4012-af70-3fab1d476713" in namespace "projected-5055" to be "success or failure"
May 19 14:22:32.227: INFO: Pod "downwardapi-volume-7893369c-2791-4012-af70-3fab1d476713": Phase="Pending", Reason="", readiness=false. Elapsed: 20.237106ms
May 19 14:22:34.231: INFO: Pod "downwardapi-volume-7893369c-2791-4012-af70-3fab1d476713": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024597177s
May 19 14:22:36.236: INFO: Pod "downwardapi-volume-7893369c-2791-4012-af70-3fab1d476713": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029349354s
STEP: Saw pod success
May 19 14:22:36.236: INFO: Pod "downwardapi-volume-7893369c-2791-4012-af70-3fab1d476713" satisfied condition "success or failure"
May 19 14:22:36.239: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7893369c-2791-4012-af70-3fab1d476713 container client-container:
STEP: delete the pod
May 19 14:22:36.368: INFO: Waiting for pod downwardapi-volume-7893369c-2791-4012-af70-3fab1d476713 to disappear
May 19 14:22:36.379: INFO: Pod downwardapi-volume-7893369c-2791-4012-af70-3fab1d476713 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:22:36.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5055" for this suite.
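[Editor's note] The "downward API volume plugin" pods that these specs create mount pod resource fields as files via a projected volume. The log never shows the actual manifest, so the sketch below is illustrative only; the pod name, image, and resource values are assumptions, not taken from the test source.

```yaml
# Hedged sketch of a pod exposing its own memory request through a
# projected downwardAPI volume, as exercised by the spec above.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi                 # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
```

The test then reads the container's log and checks that the file contents match the declared request, which is why the pod is expected to reach "Succeeded".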
May 19 14:22:42.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:22:42.487: INFO: namespace projected-5055 deletion completed in 6.104162528s
• [SLOW TEST:10.363 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:22:42.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-81c1cf0b-01fd-4000-8d6e-bd3b3a71d696
STEP: Creating a pod to test consume secrets
May 19 14:22:42.562: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-70ad1355-3383-452f-9099-4ccb89aa14b3" in namespace "projected-5016" to be "success or failure"
May 19 14:22:42.618: INFO: Pod "pod-projected-secrets-70ad1355-3383-452f-9099-4ccb89aa14b3": Phase="Pending", Reason="", readiness=false. Elapsed: 55.866414ms
May 19 14:22:44.623: INFO: Pod "pod-projected-secrets-70ad1355-3383-452f-9099-4ccb89aa14b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060287818s
May 19 14:22:46.627: INFO: Pod "pod-projected-secrets-70ad1355-3383-452f-9099-4ccb89aa14b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064846018s
STEP: Saw pod success
May 19 14:22:46.627: INFO: Pod "pod-projected-secrets-70ad1355-3383-452f-9099-4ccb89aa14b3" satisfied condition "success or failure"
May 19 14:22:46.631: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-70ad1355-3383-452f-9099-4ccb89aa14b3 container projected-secret-volume-test:
STEP: delete the pod
May 19 14:22:46.651: INFO: Waiting for pod pod-projected-secrets-70ad1355-3383-452f-9099-4ccb89aa14b3 to disappear
May 19 14:22:46.713: INFO: Pod pod-projected-secrets-70ad1355-3383-452f-9099-4ccb89aa14b3 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:22:46.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5016" for this suite.
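[Editor's note] The "volume with mappings" wording refers to projecting a secret while remapping its keys to new file paths. A minimal sketch of such a pod follows; the secret name, key, and target path here are illustrative assumptions (the log only shows the generated secret name):

```yaml
# Hedged sketch: a projected secret volume with an item mapping,
# so key "data-1" appears at a remapped path inside the container.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-example   # hypothetical
          items:
          - key: data-1
            path: new-path-data-1
```

Without the `items` mapping, every key in the secret would be projected under its own name; the mapping restricts and renames what is mounted.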
May 19 14:22:52.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:22:52.816: INFO: namespace projected-5016 deletion completed in 6.099059949s
• [SLOW TEST:10.328 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:22:52.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 19 14:22:57.455: INFO: Successfully updated pod "pod-update-c1f94aeb-7e6f-4434-afe7-7437e83d5357"
STEP: verifying the updated pod is in kubernetes
May 19 14:22:57.466: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:22:57.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5149" for this suite.
May 19 14:23:19.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:23:19.584: INFO: namespace pods-5149 deletion completed in 22.114682326s
• [SLOW TEST:26.767 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:23:19.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0519 14:24:00.504869 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 19 14:24:00.504: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:24:00.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9981" for this suite.
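[Editor's note] The "delete options say so" spec deletes the ReplicationController with an orphaning delete, so the garbage collector must leave the pods alone during the 30-second watch above. The test issues this through the Go client, but the delete options it sends correspond, as an assumption about the wire shape rather than a quote from the test, to roughly:

```yaml
# Hedged sketch of the DeleteOptions body for an orphaning delete:
# propagationPolicy Orphan tells the GC to strip ownerReferences
# from the dependents instead of deleting them.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```

With `Foreground` or `Background` instead, the dependent pods would be garbage collected, which is what the earlier "should delete pods created by rc when not orphaning" spec verifies.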
May 19 14:24:08.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:24:08.817: INFO: namespace gc-9981 deletion completed in 8.309667759s
• [SLOW TEST:49.233 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:24:08.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
May 19 14:24:09.102: INFO: Waiting up to 5m0s for pod "client-containers-8d73c972-0136-4b0f-844f-e62509e1abc7" in namespace "containers-8392" to be "success or failure"
May 19 14:24:09.155: INFO: Pod "client-containers-8d73c972-0136-4b0f-844f-e62509e1abc7": Phase="Pending", Reason="", readiness=false. Elapsed: 53.111641ms
May 19 14:24:11.158: INFO: Pod "client-containers-8d73c972-0136-4b0f-844f-e62509e1abc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056332824s
May 19 14:24:13.170: INFO: Pod "client-containers-8d73c972-0136-4b0f-844f-e62509e1abc7": Phase="Running", Reason="", readiness=true. Elapsed: 4.068437139s
May 19 14:24:15.175: INFO: Pod "client-containers-8d73c972-0136-4b0f-844f-e62509e1abc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072806287s
STEP: Saw pod success
May 19 14:24:15.175: INFO: Pod "client-containers-8d73c972-0136-4b0f-844f-e62509e1abc7" satisfied condition "success or failure"
May 19 14:24:15.178: INFO: Trying to get logs from node iruya-worker2 pod client-containers-8d73c972-0136-4b0f-844f-e62509e1abc7 container test-container:
STEP: delete the pod
May 19 14:24:15.198: INFO: Waiting for pod client-containers-8d73c972-0136-4b0f-844f-e62509e1abc7 to disappear
May 19 14:24:15.201: INFO: Pod client-containers-8d73c972-0136-4b0f-844f-e62509e1abc7 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:24:15.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8392" for this suite.
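[Editor's note] "Override all" means the pod spec replaces both halves of the image's default invocation: `command` overrides the image ENTRYPOINT and `args` overrides the image CMD. A minimal illustrative manifest (names, image, and echoed values are assumptions, not the test's actual spec):

```yaml
# Hedged sketch of a command/args override pod: the busybox image's
# default entrypoint and arguments are both replaced.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/echo"]          # replaces the image ENTRYPOINT
    args: ["override", "all"]       # replaces the image CMD
```

The related "(docker cmd)" spec earlier in this run sets only `args`, leaving the image's ENTRYPOINT in place; this spec sets both.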
May 19 14:24:21.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:24:21.308: INFO: namespace containers-8392 deletion completed in 6.103692414s
• [SLOW TEST:12.491 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:24:21.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 19 14:24:21.366: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95003c77-9d5d-43bb-939d-e226f01e5d5d" in namespace "projected-8065" to be "success or failure"
May 19 14:24:21.369: INFO: Pod "downwardapi-volume-95003c77-9d5d-43bb-939d-e226f01e5d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.359044ms
May 19 14:24:23.374: INFO: Pod "downwardapi-volume-95003c77-9d5d-43bb-939d-e226f01e5d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007709843s
May 19 14:24:25.378: INFO: Pod "downwardapi-volume-95003c77-9d5d-43bb-939d-e226f01e5d5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012247505s
STEP: Saw pod success
May 19 14:24:25.378: INFO: Pod "downwardapi-volume-95003c77-9d5d-43bb-939d-e226f01e5d5d" satisfied condition "success or failure"
May 19 14:24:25.382: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-95003c77-9d5d-43bb-939d-e226f01e5d5d container client-container:
STEP: delete the pod
May 19 14:24:25.415: INFO: Waiting for pod downwardapi-volume-95003c77-9d5d-43bb-939d-e226f01e5d5d to disappear
May 19 14:24:25.458: INFO: Pod downwardapi-volume-95003c77-9d5d-43bb-939d-e226f01e5d5d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:24:25.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8065" for this suite.
May 19 14:24:31.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:24:31.560: INFO: namespace projected-8065 deletion completed in 6.098247109s
• [SLOW TEST:10.251 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:24:31.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
May 19 14:24:31.617: INFO: Waiting up to 5m0s for pod "pod-2936520f-2094-4300-b7e1-0aceffba50a3" in namespace "emptydir-8648" to be "success or failure"
May 19 14:24:31.621: INFO: Pod "pod-2936520f-2094-4300-b7e1-0aceffba50a3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.985066ms
May 19 14:24:33.626: INFO: Pod "pod-2936520f-2094-4300-b7e1-0aceffba50a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008591554s
May 19 14:24:35.631: INFO: Pod "pod-2936520f-2094-4300-b7e1-0aceffba50a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01308647s
STEP: Saw pod success
May 19 14:24:35.631: INFO: Pod "pod-2936520f-2094-4300-b7e1-0aceffba50a3" satisfied condition "success or failure"
May 19 14:24:35.634: INFO: Trying to get logs from node iruya-worker2 pod pod-2936520f-2094-4300-b7e1-0aceffba50a3 container test-container:
STEP: delete the pod
May 19 14:24:35.652: INFO: Waiting for pod pod-2936520f-2094-4300-b7e1-0aceffba50a3 to disappear
May 19 14:24:35.657: INFO: Pod pod-2936520f-2094-4300-b7e1-0aceffba50a3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:24:35.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8648" for this suite.
May 19 14:24:41.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:24:41.795: INFO: namespace emptydir-8648 deletion completed in 6.134502887s
• [SLOW TEST:10.234 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:24:41.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 19 14:24:41.840: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:24:49.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-210" for this suite.
May 19 14:24:55.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:24:55.835: INFO: namespace init-container-210 deletion completed in 6.086625638s
• [SLOW TEST:14.040 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:24:55.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned
in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 19 14:24:55.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-9589' May 19 14:24:58.771: INFO: stderr: "" May 19 14:24:58.771: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 19 14:25:03.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-9589 -o json' May 19 14:25:03.922: INFO: stderr: "" May 19 14:25:03.922: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-19T14:24:58Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-9589\",\n \"resourceVersion\": \"11770841\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9589/pods/e2e-test-nginx-pod\",\n \"uid\": \"23f2c6be-2ec4-4e54-a2c2-8d57e703c86d\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": 
\"default-token-hd9wj\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-hd9wj\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-hd9wj\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T14:24:58Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T14:25:02Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T14:25:02Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T14:24:58Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://842e122bbd5a033728d04b3e6a1735e761d091ca48443cea86bb3629a1f684ce\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-19T14:25:01Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n 
\"phase\": \"Running\",\n \"podIP\": \"10.244.1.36\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-19T14:24:58Z\"\n }\n}\n"
STEP: replace the image in the pod
May 19 14:25:03.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9589'
May 19 14:25:04.191: INFO: stderr: ""
May 19 14:25:04.191: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
May 19 14:25:04.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9589'
May 19 14:25:08.239: INFO: stderr: ""
May 19 14:25:08.239: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:25:08.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9589" for this suite.
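[Editor's note] The spec above fetches the full pod JSON (`kubectl get pod ... -o json`), swaps the container image, and pipes the whole object back through `kubectl replace -f -`. The full object is resubmitted because a running pod's spec is mostly immutable; only a few fields, such as the container image, may change on update. As an assumption (not the test's own mechanism), the same image swap could be expressed as a strategic-merge patch, matching the container by name:

```yaml
# Hypothetical patch body achieving the image swap the test performs
# via its get/replace round-trip; applied with `kubectl patch`.
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29
```

The patch route avoids resubmitting server-populated fields like `nodeName` and `resourceVersion`, at the cost of not exercising the replace path this spec is actually testing.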
May 19 14:25:14.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:25:14.339: INFO: namespace kubectl-9589 deletion completed in 6.096192595s
• [SLOW TEST:18.505 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:25:14.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 19 14:25:14.413: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b40bdc9-b243-49b8-aaa2-e215ea5f4090" in namespace "projected-6320" to be "success or failure"
May 19 14:25:14.425: INFO: Pod "downwardapi-volume-7b40bdc9-b243-49b8-aaa2-e215ea5f4090": Phase="Pending", Reason="", readiness=false. Elapsed: 11.878054ms
May 19 14:25:16.428: INFO: Pod "downwardapi-volume-7b40bdc9-b243-49b8-aaa2-e215ea5f4090": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015309146s
May 19 14:25:18.433: INFO: Pod "downwardapi-volume-7b40bdc9-b243-49b8-aaa2-e215ea5f4090": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019783964s
STEP: Saw pod success
May 19 14:25:18.433: INFO: Pod "downwardapi-volume-7b40bdc9-b243-49b8-aaa2-e215ea5f4090" satisfied condition "success or failure"
May 19 14:25:18.436: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7b40bdc9-b243-49b8-aaa2-e215ea5f4090 container client-container:
STEP: delete the pod
May 19 14:25:18.457: INFO: Waiting for pod downwardapi-volume-7b40bdc9-b243-49b8-aaa2-e215ea5f4090 to disappear
May 19 14:25:18.482: INFO: Pod downwardapi-volume-7b40bdc9-b243-49b8-aaa2-e215ea5f4090 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:25:18.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6320" for this suite.
May 19 14:25:24.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:25:24.582: INFO: namespace projected-6320 deletion completed in 6.095706139s • [SLOW TEST:10.242 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:25:24.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 19 14:25:28.696: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-fd72cc1d-9067-4706-a80c-ec86f46a3beb,GenerateName:,Namespace:events-5256,SelfLink:/api/v1/namespaces/events-5256/pods/send-events-fd72cc1d-9067-4706-a80c-ec86f46a3beb,UID:94b62bbf-a16d-4233-9f55-027e9168f044,ResourceVersion:11770952,Generation:0,CreationTimestamp:2020-05-19 14:25:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
foo,time: 636738434,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6rl6z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6rl6z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-6rl6z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00235c100} {node.kubernetes.io/unreachable Exists NoExecute 0xc00235c120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 14:25:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 14:25:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 14:25:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 14:25:24 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.37,StartTime:2020-05-19 14:25:24 +0000 
UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-19 14:25:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://1d0a563b54368ef5b20c56c176f28f236444f4f915906f7552657e7e5e551cca}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 19 14:25:30.700: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 19 14:25:32.705: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:25:32.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5256" for this suite. May 19 14:26:12.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:26:12.845: INFO: namespace events-5256 deletion completed in 40.126387811s • [SLOW TEST:48.263 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:26:12.846: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 19 14:26:12.929: INFO: Waiting up to 5m0s for pod "pod-c14cd03b-cd27-4dba-bb69-535a316bd97f" in namespace "emptydir-8944" to be "success or failure" May 19 14:26:12.935: INFO: Pod "pod-c14cd03b-cd27-4dba-bb69-535a316bd97f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418946ms May 19 14:26:14.938: INFO: Pod "pod-c14cd03b-cd27-4dba-bb69-535a316bd97f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009370023s May 19 14:26:16.943: INFO: Pod "pod-c14cd03b-cd27-4dba-bb69-535a316bd97f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014364743s STEP: Saw pod success May 19 14:26:16.943: INFO: Pod "pod-c14cd03b-cd27-4dba-bb69-535a316bd97f" satisfied condition "success or failure" May 19 14:26:16.946: INFO: Trying to get logs from node iruya-worker pod pod-c14cd03b-cd27-4dba-bb69-535a316bd97f container test-container: STEP: delete the pod May 19 14:26:16.974: INFO: Waiting for pod pod-c14cd03b-cd27-4dba-bb69-535a316bd97f to disappear May 19 14:26:16.989: INFO: Pod pod-c14cd03b-cd27-4dba-bb69-535a316bd97f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:26:16.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8944" for this suite. 
May 19 14:26:23.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:26:23.092: INFO: namespace emptydir-8944 deletion completed in 6.099825555s • [SLOW TEST:10.246 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:26:23.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-75300169-162e-408c-821a-8ac8aa351866 STEP: Creating secret with name secret-projected-all-test-volume-f0fb28e8-978d-4a84-9593-64fabf205a97 STEP: Creating a pod to test Check all projections for projected volume plugin May 19 14:26:23.199: INFO: Waiting up to 5m0s for pod "projected-volume-b5577b0f-276b-4077-946a-165033199b5b" in namespace "projected-2695" to be "success or failure" May 19 14:26:23.205: INFO: Pod 
"projected-volume-b5577b0f-276b-4077-946a-165033199b5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.288504ms May 19 14:26:25.226: INFO: Pod "projected-volume-b5577b0f-276b-4077-946a-165033199b5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027377359s May 19 14:26:27.262: INFO: Pod "projected-volume-b5577b0f-276b-4077-946a-165033199b5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06347632s STEP: Saw pod success May 19 14:26:27.262: INFO: Pod "projected-volume-b5577b0f-276b-4077-946a-165033199b5b" satisfied condition "success or failure" May 19 14:26:27.266: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-b5577b0f-276b-4077-946a-165033199b5b container projected-all-volume-test: STEP: delete the pod May 19 14:26:27.289: INFO: Waiting for pod projected-volume-b5577b0f-276b-4077-946a-165033199b5b to disappear May 19 14:26:27.294: INFO: Pod projected-volume-b5577b0f-276b-4077-946a-165033199b5b no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:26:27.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2695" for this suite. 
May 19 14:26:33.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:26:33.394: INFO: namespace projected-2695 deletion completed in 6.097291542s • [SLOW TEST:10.301 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:26:33.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 19 14:26:37.512: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:26:37.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5239" for this suite. May 19 14:26:43.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:26:43.692: INFO: namespace container-runtime-5239 deletion completed in 6.131542573s • [SLOW TEST:10.297 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:26:43.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test 
namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:26:50.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1235" for this suite. May 19 14:26:56.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:26:56.130: INFO: namespace namespaces-1235 deletion completed in 6.082859989s STEP: Destroying namespace "nsdeletetest-9553" for this suite. May 19 14:26:56.132: INFO: Namespace nsdeletetest-9553 was already deleted STEP: Destroying namespace "nsdeletetest-8080" for this suite. May 19 14:27:02.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:27:02.223: INFO: namespace nsdeletetest-8080 deletion completed in 6.090817273s • [SLOW TEST:18.531 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:27:02.223: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container May 19 14:27:06.814: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3164 pod-service-account-68f9b123-a2ff-486a-9a37-24c3f3b26311 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 19 14:27:07.048: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3164 pod-service-account-68f9b123-a2ff-486a-9a37-24c3f3b26311 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 19 14:27:07.265: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3164 pod-service-account-68f9b123-a2ff-486a-9a37-24c3f3b26311 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:27:07.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3164" for this suite. 
May 19 14:27:13.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:27:13.674: INFO: namespace svcaccounts-3164 deletion completed in 6.129220115s • [SLOW TEST:11.451 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:27:13.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 19 14:27:19.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-08f162c8-5fbe-4c8e-97db-0a94324c771e -c busybox-main-container --namespace=emptydir-4522 -- cat /usr/share/volumeshare/shareddata.txt' May 19 14:27:19.986: INFO: stderr: "I0519 14:27:19.885268 3048 log.go:172] (0xc000848370) (0xc0006588c0) Create stream\nI0519 14:27:19.885431 3048 log.go:172] (0xc000848370) (0xc0006588c0) Stream added, broadcasting: 1\nI0519 14:27:19.887770 3048 log.go:172] (0xc000848370) Reply frame received for 1\nI0519 
14:27:19.887804 3048 log.go:172] (0xc000848370) (0xc0007ca000) Create stream\nI0519 14:27:19.887817 3048 log.go:172] (0xc000848370) (0xc0007ca000) Stream added, broadcasting: 3\nI0519 14:27:19.888603 3048 log.go:172] (0xc000848370) Reply frame received for 3\nI0519 14:27:19.888632 3048 log.go:172] (0xc000848370) (0xc000658960) Create stream\nI0519 14:27:19.888641 3048 log.go:172] (0xc000848370) (0xc000658960) Stream added, broadcasting: 5\nI0519 14:27:19.889622 3048 log.go:172] (0xc000848370) Reply frame received for 5\nI0519 14:27:19.978339 3048 log.go:172] (0xc000848370) Data frame received for 5\nI0519 14:27:19.978383 3048 log.go:172] (0xc000658960) (5) Data frame handling\nI0519 14:27:19.978439 3048 log.go:172] (0xc000848370) Data frame received for 3\nI0519 14:27:19.978487 3048 log.go:172] (0xc0007ca000) (3) Data frame handling\nI0519 14:27:19.978523 3048 log.go:172] (0xc0007ca000) (3) Data frame sent\nI0519 14:27:19.978550 3048 log.go:172] (0xc000848370) Data frame received for 3\nI0519 14:27:19.978571 3048 log.go:172] (0xc0007ca000) (3) Data frame handling\nI0519 14:27:19.980327 3048 log.go:172] (0xc000848370) Data frame received for 1\nI0519 14:27:19.980356 3048 log.go:172] (0xc0006588c0) (1) Data frame handling\nI0519 14:27:19.980376 3048 log.go:172] (0xc0006588c0) (1) Data frame sent\nI0519 14:27:19.980392 3048 log.go:172] (0xc000848370) (0xc0006588c0) Stream removed, broadcasting: 1\nI0519 14:27:19.980695 3048 log.go:172] (0xc000848370) Go away received\nI0519 14:27:19.980902 3048 log.go:172] (0xc000848370) (0xc0006588c0) Stream removed, broadcasting: 1\nI0519 14:27:19.980922 3048 log.go:172] (0xc000848370) (0xc0007ca000) Stream removed, broadcasting: 3\nI0519 14:27:19.980933 3048 log.go:172] (0xc000848370) (0xc000658960) Stream removed, broadcasting: 5\n" May 19 14:27:19.986: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:27:19.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4522" for this suite. May 19 14:27:26.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:27:26.081: INFO: namespace emptydir-4522 deletion completed in 6.091672263s • [SLOW TEST:12.407 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:27:26.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:28:26.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "container-probe-1476" for this suite. May 19 14:28:48.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:28:48.280: INFO: namespace container-probe-1476 deletion completed in 22.096815737s • [SLOW TEST:82.199 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:28:48.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 19 14:28:48.364: INFO: Waiting up to 5m0s for pod "downwardapi-volume-737f9d2f-3f2d-4d05-8a95-120c5ac257b2" in namespace "downward-api-416" to be "success or failure" May 19 14:28:48.382: INFO: Pod "downwardapi-volume-737f9d2f-3f2d-4d05-8a95-120c5ac257b2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.538061ms May 19 14:28:50.390: INFO: Pod "downwardapi-volume-737f9d2f-3f2d-4d05-8a95-120c5ac257b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026693423s May 19 14:28:52.395: INFO: Pod "downwardapi-volume-737f9d2f-3f2d-4d05-8a95-120c5ac257b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031215651s STEP: Saw pod success May 19 14:28:52.395: INFO: Pod "downwardapi-volume-737f9d2f-3f2d-4d05-8a95-120c5ac257b2" satisfied condition "success or failure" May 19 14:28:52.403: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-737f9d2f-3f2d-4d05-8a95-120c5ac257b2 container client-container: STEP: delete the pod May 19 14:28:52.476: INFO: Waiting for pod downwardapi-volume-737f9d2f-3f2d-4d05-8a95-120c5ac257b2 to disappear May 19 14:28:52.538: INFO: Pod downwardapi-volume-737f9d2f-3f2d-4d05-8a95-120c5ac257b2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:28:52.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-416" for this suite. 
May 19 14:28:58.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:28:58.662: INFO: namespace downward-api-416 deletion completed in 6.120088179s • [SLOW TEST:10.381 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:28:58.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command May 19 14:28:58.770: INFO: Waiting up to 5m0s for pod "var-expansion-be205fc1-a69d-4a49-8ab5-233b8271c727" in namespace "var-expansion-2717" to be "success or failure" May 19 14:28:58.799: INFO: Pod "var-expansion-be205fc1-a69d-4a49-8ab5-233b8271c727": Phase="Pending", Reason="", readiness=false. Elapsed: 28.989539ms May 19 14:29:00.858: INFO: Pod "var-expansion-be205fc1-a69d-4a49-8ab5-233b8271c727": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.088173486s
May 19 14:29:02.862: INFO: Pod "var-expansion-be205fc1-a69d-4a49-8ab5-233b8271c727": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092256283s
STEP: Saw pod success
May 19 14:29:02.862: INFO: Pod "var-expansion-be205fc1-a69d-4a49-8ab5-233b8271c727" satisfied condition "success or failure"
May 19 14:29:02.864: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-be205fc1-a69d-4a49-8ab5-233b8271c727 container dapi-container:
STEP: delete the pod
May 19 14:29:03.009: INFO: Waiting for pod var-expansion-be205fc1-a69d-4a49-8ab5-233b8271c727 to disappear
May 19 14:29:03.174: INFO: Pod var-expansion-be205fc1-a69d-4a49-8ab5-233b8271c727 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:29:03.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2717" for this suite.
May 19 14:29:09.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:29:09.280: INFO: namespace var-expansion-2717 deletion completed in 6.101614652s

• [SLOW TEST:10.618 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 19 14:29:09.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-7rlg
STEP: Creating a pod to test atomic-volume-subpath
May 19 14:29:09.416: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7rlg" in namespace "subpath-7330" to be "success or failure"
May 19 14:29:09.456: INFO: Pod "pod-subpath-test-configmap-7rlg": Phase="Pending", Reason="", readiness=false. Elapsed: 39.762369ms
May 19 14:29:11.460: INFO: Pod "pod-subpath-test-configmap-7rlg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043521446s
May 19 14:29:13.464: INFO: Pod "pod-subpath-test-configmap-7rlg": Phase="Running", Reason="", readiness=true. Elapsed: 4.047464802s
May 19 14:29:15.468: INFO: Pod "pod-subpath-test-configmap-7rlg": Phase="Running", Reason="", readiness=true. Elapsed: 6.052014592s
May 19 14:29:17.471: INFO: Pod "pod-subpath-test-configmap-7rlg": Phase="Running", Reason="", readiness=true. Elapsed: 8.054955561s
May 19 14:29:19.475: INFO: Pod "pod-subpath-test-configmap-7rlg": Phase="Running", Reason="", readiness=true. Elapsed: 10.058788913s
May 19 14:29:21.479: INFO: Pod "pod-subpath-test-configmap-7rlg": Phase="Running", Reason="", readiness=true. Elapsed: 12.062826203s
May 19 14:29:23.484: INFO: Pod "pod-subpath-test-configmap-7rlg": Phase="Running", Reason="", readiness=true. Elapsed: 14.067492421s
May 19 14:29:25.488: INFO: Pod "pod-subpath-test-configmap-7rlg": Phase="Running", Reason="", readiness=true. Elapsed: 16.071434611s
May 19 14:29:27.491: INFO: Pod "pod-subpath-test-configmap-7rlg": Phase="Running", Reason="", readiness=true. Elapsed: 18.074710364s
May 19 14:29:29.495: INFO: Pod "pod-subpath-test-configmap-7rlg": Phase="Running", Reason="", readiness=true. Elapsed: 20.078600658s
May 19 14:29:31.499: INFO: Pod "pod-subpath-test-configmap-7rlg": Phase="Running", Reason="", readiness=true. Elapsed: 22.082848302s
May 19 14:29:33.503: INFO: Pod "pod-subpath-test-configmap-7rlg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.087108639s
STEP: Saw pod success
May 19 14:29:33.503: INFO: Pod "pod-subpath-test-configmap-7rlg" satisfied condition "success or failure"
May 19 14:29:33.506: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-7rlg container test-container-subpath-configmap-7rlg:
STEP: delete the pod
May 19 14:29:33.564: INFO: Waiting for pod pod-subpath-test-configmap-7rlg to disappear
May 19 14:29:33.568: INFO: Pod pod-subpath-test-configmap-7rlg no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7rlg
May 19 14:29:33.568: INFO: Deleting pod "pod-subpath-test-configmap-7rlg" in namespace "subpath-7330"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 19 14:29:33.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7330" for this suite.
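The subpath test above polls the pod roughly every two seconds, logging its phase and elapsed time until it reaches "Succeeded". A minimal sketch of that wait loop in Python (a hypothetical helper, not the e2e framework's actual Go code; `get_phase` stands in for a pod GET against the API server):

```python
import time


def wait_for_pod_success(get_phase, timeout=300.0, interval=2.0,
                         clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase or the timeout expires.

    Mirrors the log's loop: each iteration reports the phase and elapsed time.
    Returns True on "Succeeded", False on "Failed"; raises TimeoutError otherwise.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r} elapsed={elapsed:.3f}s')
        if phase == "Succeeded":
            return True
        if phase == "Failed":
            return False
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting, which is why they are parameters rather than hard-coded calls.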
May 19 14:29:39.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 19 14:29:39.928: INFO: namespace subpath-7330 deletion completed in 6.176206317s • [SLOW TEST:30.647 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 19 14:29:39.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 19 14:29:50.091: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4847 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 14:29:50.091: INFO: >>> kubeConfig: /root/.kube/config I0519 14:29:50.128221 6 log.go:172] 
(0xc002436630) (0xc00126dae0) Create stream I0519 14:29:50.128271 6 log.go:172] (0xc002436630) (0xc00126dae0) Stream added, broadcasting: 1 I0519 14:29:50.131488 6 log.go:172] (0xc002436630) Reply frame received for 1 I0519 14:29:50.131526 6 log.go:172] (0xc002436630) (0xc0009acaa0) Create stream I0519 14:29:50.131538 6 log.go:172] (0xc002436630) (0xc0009acaa0) Stream added, broadcasting: 3 I0519 14:29:50.132581 6 log.go:172] (0xc002436630) Reply frame received for 3 I0519 14:29:50.132631 6 log.go:172] (0xc002436630) (0xc00126db80) Create stream I0519 14:29:50.132652 6 log.go:172] (0xc002436630) (0xc00126db80) Stream added, broadcasting: 5 I0519 14:29:50.133969 6 log.go:172] (0xc002436630) Reply frame received for 5 I0519 14:29:50.225627 6 log.go:172] (0xc002436630) Data frame received for 5 I0519 14:29:50.225677 6 log.go:172] (0xc00126db80) (5) Data frame handling I0519 14:29:50.225704 6 log.go:172] (0xc002436630) Data frame received for 3 I0519 14:29:50.225716 6 log.go:172] (0xc0009acaa0) (3) Data frame handling I0519 14:29:50.225733 6 log.go:172] (0xc0009acaa0) (3) Data frame sent I0519 14:29:50.225741 6 log.go:172] (0xc002436630) Data frame received for 3 I0519 14:29:50.225754 6 log.go:172] (0xc0009acaa0) (3) Data frame handling I0519 14:29:50.227379 6 log.go:172] (0xc002436630) Data frame received for 1 I0519 14:29:50.227421 6 log.go:172] (0xc00126dae0) (1) Data frame handling I0519 14:29:50.227441 6 log.go:172] (0xc00126dae0) (1) Data frame sent I0519 14:29:50.227459 6 log.go:172] (0xc002436630) (0xc00126dae0) Stream removed, broadcasting: 1 I0519 14:29:50.227485 6 log.go:172] (0xc002436630) Go away received I0519 14:29:50.227574 6 log.go:172] (0xc002436630) (0xc00126dae0) Stream removed, broadcasting: 1 I0519 14:29:50.227595 6 log.go:172] (0xc002436630) (0xc0009acaa0) Stream removed, broadcasting: 3 I0519 14:29:50.227608 6 log.go:172] (0xc002436630) (0xc00126db80) Stream removed, broadcasting: 5 May 19 14:29:50.227: INFO: Exec stderr: "" May 19 14:29:50.227: 
INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4847 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 14:29:50.227: INFO: >>> kubeConfig: /root/.kube/config I0519 14:29:50.252886 6 log.go:172] (0xc002582e70) (0xc0010ab180) Create stream I0519 14:29:50.252920 6 log.go:172] (0xc002582e70) (0xc0010ab180) Stream added, broadcasting: 1 I0519 14:29:50.255471 6 log.go:172] (0xc002582e70) Reply frame received for 1 I0519 14:29:50.255505 6 log.go:172] (0xc002582e70) (0xc0010ab360) Create stream I0519 14:29:50.255517 6 log.go:172] (0xc002582e70) (0xc0010ab360) Stream added, broadcasting: 3 I0519 14:29:50.256404 6 log.go:172] (0xc002582e70) Reply frame received for 3 I0519 14:29:50.256432 6 log.go:172] (0xc002582e70) (0xc0010ab4a0) Create stream I0519 14:29:50.256441 6 log.go:172] (0xc002582e70) (0xc0010ab4a0) Stream added, broadcasting: 5 I0519 14:29:50.257588 6 log.go:172] (0xc002582e70) Reply frame received for 5 I0519 14:29:50.328881 6 log.go:172] (0xc002582e70) Data frame received for 3 I0519 14:29:50.328912 6 log.go:172] (0xc0010ab360) (3) Data frame handling I0519 14:29:50.328920 6 log.go:172] (0xc0010ab360) (3) Data frame sent I0519 14:29:50.328925 6 log.go:172] (0xc002582e70) Data frame received for 3 I0519 14:29:50.328931 6 log.go:172] (0xc0010ab360) (3) Data frame handling I0519 14:29:50.328963 6 log.go:172] (0xc002582e70) Data frame received for 5 I0519 14:29:50.328994 6 log.go:172] (0xc0010ab4a0) (5) Data frame handling I0519 14:29:50.330837 6 log.go:172] (0xc002582e70) Data frame received for 1 I0519 14:29:50.330858 6 log.go:172] (0xc0010ab180) (1) Data frame handling I0519 14:29:50.330882 6 log.go:172] (0xc0010ab180) (1) Data frame sent I0519 14:29:50.330907 6 log.go:172] (0xc002582e70) (0xc0010ab180) Stream removed, broadcasting: 1 I0519 14:29:50.331003 6 log.go:172] (0xc002582e70) (0xc0010ab180) Stream removed, broadcasting: 1 I0519 14:29:50.331020 6 
log.go:172] (0xc002582e70) (0xc0010ab360) Stream removed, broadcasting: 3 I0519 14:29:50.331029 6 log.go:172] (0xc002582e70) (0xc0010ab4a0) Stream removed, broadcasting: 5 May 19 14:29:50.331: INFO: Exec stderr: "" May 19 14:29:50.331: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4847 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0519 14:29:50.331076 6 log.go:172] (0xc002582e70) Go away received May 19 14:29:50.331: INFO: >>> kubeConfig: /root/.kube/config I0519 14:29:50.357064 6 log.go:172] (0xc002583d90) (0xc0010abb80) Create stream I0519 14:29:50.357089 6 log.go:172] (0xc002583d90) (0xc0010abb80) Stream added, broadcasting: 1 I0519 14:29:50.359532 6 log.go:172] (0xc002583d90) Reply frame received for 1 I0519 14:29:50.359558 6 log.go:172] (0xc002583d90) (0xc0010abc20) Create stream I0519 14:29:50.359566 6 log.go:172] (0xc002583d90) (0xc0010abc20) Stream added, broadcasting: 3 I0519 14:29:50.360205 6 log.go:172] (0xc002583d90) Reply frame received for 3 I0519 14:29:50.360226 6 log.go:172] (0xc002583d90) (0xc0009acb40) Create stream I0519 14:29:50.360235 6 log.go:172] (0xc002583d90) (0xc0009acb40) Stream added, broadcasting: 5 I0519 14:29:50.360898 6 log.go:172] (0xc002583d90) Reply frame received for 5 I0519 14:29:50.430399 6 log.go:172] (0xc002583d90) Data frame received for 3 I0519 14:29:50.430433 6 log.go:172] (0xc0010abc20) (3) Data frame handling I0519 14:29:50.430446 6 log.go:172] (0xc0010abc20) (3) Data frame sent I0519 14:29:50.430453 6 log.go:172] (0xc002583d90) Data frame received for 3 I0519 14:29:50.430457 6 log.go:172] (0xc0010abc20) (3) Data frame handling I0519 14:29:50.430493 6 log.go:172] (0xc002583d90) Data frame received for 5 I0519 14:29:50.430506 6 log.go:172] (0xc0009acb40) (5) Data frame handling I0519 14:29:50.431764 6 log.go:172] (0xc002583d90) Data frame received for 1 I0519 14:29:50.431790 6 log.go:172] (0xc0010abb80) (1) Data frame 
handling I0519 14:29:50.431805 6 log.go:172] (0xc0010abb80) (1) Data frame sent I0519 14:29:50.431817 6 log.go:172] (0xc002583d90) (0xc0010abb80) Stream removed, broadcasting: 1 I0519 14:29:50.431832 6 log.go:172] (0xc002583d90) Go away received I0519 14:29:50.431957 6 log.go:172] (0xc002583d90) (0xc0010abb80) Stream removed, broadcasting: 1 I0519 14:29:50.431976 6 log.go:172] (0xc002583d90) (0xc0010abc20) Stream removed, broadcasting: 3 I0519 14:29:50.431986 6 log.go:172] (0xc002583d90) (0xc0009acb40) Stream removed, broadcasting: 5 May 19 14:29:50.431: INFO: Exec stderr: "" May 19 14:29:50.432: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4847 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 14:29:50.432: INFO: >>> kubeConfig: /root/.kube/config I0519 14:29:50.461371 6 log.go:172] (0xc001624d10) (0xc0019c01e0) Create stream I0519 14:29:50.461394 6 log.go:172] (0xc001624d10) (0xc0019c01e0) Stream added, broadcasting: 1 I0519 14:29:50.464068 6 log.go:172] (0xc001624d10) Reply frame received for 1 I0519 14:29:50.464112 6 log.go:172] (0xc001624d10) (0xc000aa80a0) Create stream I0519 14:29:50.464127 6 log.go:172] (0xc001624d10) (0xc000aa80a0) Stream added, broadcasting: 3 I0519 14:29:50.465597 6 log.go:172] (0xc001624d10) Reply frame received for 3 I0519 14:29:50.465657 6 log.go:172] (0xc001624d10) (0xc00126dc20) Create stream I0519 14:29:50.465674 6 log.go:172] (0xc001624d10) (0xc00126dc20) Stream added, broadcasting: 5 I0519 14:29:50.466619 6 log.go:172] (0xc001624d10) Reply frame received for 5 I0519 14:29:50.528427 6 log.go:172] (0xc001624d10) Data frame received for 5 I0519 14:29:50.528474 6 log.go:172] (0xc00126dc20) (5) Data frame handling I0519 14:29:50.528644 6 log.go:172] (0xc001624d10) Data frame received for 3 I0519 14:29:50.528667 6 log.go:172] (0xc000aa80a0) (3) Data frame handling I0519 14:29:50.528688 6 log.go:172] (0xc000aa80a0) (3) Data 
frame sent I0519 14:29:50.528701 6 log.go:172] (0xc001624d10) Data frame received for 3 I0519 14:29:50.528709 6 log.go:172] (0xc000aa80a0) (3) Data frame handling I0519 14:29:50.530348 6 log.go:172] (0xc001624d10) Data frame received for 1 I0519 14:29:50.530383 6 log.go:172] (0xc0019c01e0) (1) Data frame handling I0519 14:29:50.530406 6 log.go:172] (0xc0019c01e0) (1) Data frame sent I0519 14:29:50.530457 6 log.go:172] (0xc001624d10) (0xc0019c01e0) Stream removed, broadcasting: 1 I0519 14:29:50.530550 6 log.go:172] (0xc001624d10) (0xc0019c01e0) Stream removed, broadcasting: 1 I0519 14:29:50.530582 6 log.go:172] (0xc001624d10) (0xc000aa80a0) Stream removed, broadcasting: 3 I0519 14:29:50.530690 6 log.go:172] (0xc001624d10) Go away received I0519 14:29:50.530799 6 log.go:172] (0xc001624d10) (0xc00126dc20) Stream removed, broadcasting: 5 May 19 14:29:50.530: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 19 14:29:50.530: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4847 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 14:29:50.530: INFO: >>> kubeConfig: /root/.kube/config I0519 14:29:50.558568 6 log.go:172] (0xc00231b1e0) (0xc000aa8b40) Create stream I0519 14:29:50.558623 6 log.go:172] (0xc00231b1e0) (0xc000aa8b40) Stream added, broadcasting: 1 I0519 14:29:50.561456 6 log.go:172] (0xc00231b1e0) Reply frame received for 1 I0519 14:29:50.561500 6 log.go:172] (0xc00231b1e0) (0xc0009acc80) Create stream I0519 14:29:50.561518 6 log.go:172] (0xc00231b1e0) (0xc0009acc80) Stream added, broadcasting: 3 I0519 14:29:50.562432 6 log.go:172] (0xc00231b1e0) Reply frame received for 3 I0519 14:29:50.562463 6 log.go:172] (0xc00231b1e0) (0xc0009acdc0) Create stream I0519 14:29:50.562478 6 log.go:172] (0xc00231b1e0) (0xc0009acdc0) Stream added, broadcasting: 5 I0519 14:29:50.563440 6 log.go:172] 
(0xc00231b1e0) Reply frame received for 5 I0519 14:29:50.631094 6 log.go:172] (0xc00231b1e0) Data frame received for 3 I0519 14:29:50.631135 6 log.go:172] (0xc0009acc80) (3) Data frame handling I0519 14:29:50.631144 6 log.go:172] (0xc0009acc80) (3) Data frame sent I0519 14:29:50.631151 6 log.go:172] (0xc00231b1e0) Data frame received for 3 I0519 14:29:50.631161 6 log.go:172] (0xc0009acc80) (3) Data frame handling I0519 14:29:50.631190 6 log.go:172] (0xc00231b1e0) Data frame received for 5 I0519 14:29:50.631199 6 log.go:172] (0xc0009acdc0) (5) Data frame handling I0519 14:29:50.632714 6 log.go:172] (0xc00231b1e0) Data frame received for 1 I0519 14:29:50.632737 6 log.go:172] (0xc000aa8b40) (1) Data frame handling I0519 14:29:50.632760 6 log.go:172] (0xc000aa8b40) (1) Data frame sent I0519 14:29:50.632825 6 log.go:172] (0xc00231b1e0) (0xc000aa8b40) Stream removed, broadcasting: 1 I0519 14:29:50.632865 6 log.go:172] (0xc00231b1e0) Go away received I0519 14:29:50.632940 6 log.go:172] (0xc00231b1e0) (0xc000aa8b40) Stream removed, broadcasting: 1 I0519 14:29:50.632958 6 log.go:172] (0xc00231b1e0) (0xc0009acc80) Stream removed, broadcasting: 3 I0519 14:29:50.632966 6 log.go:172] (0xc00231b1e0) (0xc0009acdc0) Stream removed, broadcasting: 5 May 19 14:29:50.632: INFO: Exec stderr: "" May 19 14:29:50.633: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4847 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 14:29:50.633: INFO: >>> kubeConfig: /root/.kube/config I0519 14:29:50.665563 6 log.go:172] (0xc0021ec4d0) (0xc0009ad4a0) Create stream I0519 14:29:50.665603 6 log.go:172] (0xc0021ec4d0) (0xc0009ad4a0) Stream added, broadcasting: 1 I0519 14:29:50.668379 6 log.go:172] (0xc0021ec4d0) Reply frame received for 1 I0519 14:29:50.668422 6 log.go:172] (0xc0021ec4d0) (0xc0009ad5e0) Create stream I0519 14:29:50.668452 6 log.go:172] (0xc0021ec4d0) (0xc0009ad5e0) Stream added, 
broadcasting: 3 I0519 14:29:50.669794 6 log.go:172] (0xc0021ec4d0) Reply frame received for 3 I0519 14:29:50.669851 6 log.go:172] (0xc0021ec4d0) (0xc0009ad680) Create stream I0519 14:29:50.669868 6 log.go:172] (0xc0021ec4d0) (0xc0009ad680) Stream added, broadcasting: 5 I0519 14:29:50.670772 6 log.go:172] (0xc0021ec4d0) Reply frame received for 5 I0519 14:29:50.732735 6 log.go:172] (0xc0021ec4d0) Data frame received for 5 I0519 14:29:50.732810 6 log.go:172] (0xc0009ad680) (5) Data frame handling I0519 14:29:50.732839 6 log.go:172] (0xc0021ec4d0) Data frame received for 3 I0519 14:29:50.732859 6 log.go:172] (0xc0009ad5e0) (3) Data frame handling I0519 14:29:50.732873 6 log.go:172] (0xc0009ad5e0) (3) Data frame sent I0519 14:29:50.732882 6 log.go:172] (0xc0021ec4d0) Data frame received for 3 I0519 14:29:50.732891 6 log.go:172] (0xc0009ad5e0) (3) Data frame handling I0519 14:29:50.734714 6 log.go:172] (0xc0021ec4d0) Data frame received for 1 I0519 14:29:50.734747 6 log.go:172] (0xc0009ad4a0) (1) Data frame handling I0519 14:29:50.734760 6 log.go:172] (0xc0009ad4a0) (1) Data frame sent I0519 14:29:50.734781 6 log.go:172] (0xc0021ec4d0) (0xc0009ad4a0) Stream removed, broadcasting: 1 I0519 14:29:50.734798 6 log.go:172] (0xc0021ec4d0) Go away received I0519 14:29:50.734909 6 log.go:172] (0xc0021ec4d0) (0xc0009ad4a0) Stream removed, broadcasting: 1 I0519 14:29:50.734927 6 log.go:172] (0xc0021ec4d0) (0xc0009ad5e0) Stream removed, broadcasting: 3 I0519 14:29:50.734936 6 log.go:172] (0xc0021ec4d0) (0xc0009ad680) Stream removed, broadcasting: 5 May 19 14:29:50.734: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 19 14:29:50.734: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4847 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 14:29:50.735: INFO: >>> kubeConfig: /root/.kube/config 
I0519 14:29:50.762662 6 log.go:172] (0xc0021ed1e0) (0xc0003a0d20) Create stream I0519 14:29:50.762693 6 log.go:172] (0xc0021ed1e0) (0xc0003a0d20) Stream added, broadcasting: 1 I0519 14:29:50.765777 6 log.go:172] (0xc0021ed1e0) Reply frame received for 1 I0519 14:29:50.765816 6 log.go:172] (0xc0021ed1e0) (0xc0019c0280) Create stream I0519 14:29:50.765828 6 log.go:172] (0xc0021ed1e0) (0xc0019c0280) Stream added, broadcasting: 3 I0519 14:29:50.766789 6 log.go:172] (0xc0021ed1e0) Reply frame received for 3 I0519 14:29:50.766819 6 log.go:172] (0xc0021ed1e0) (0xc00126dd60) Create stream I0519 14:29:50.766829 6 log.go:172] (0xc0021ed1e0) (0xc00126dd60) Stream added, broadcasting: 5 I0519 14:29:50.767736 6 log.go:172] (0xc0021ed1e0) Reply frame received for 5 I0519 14:29:50.834257 6 log.go:172] (0xc0021ed1e0) Data frame received for 3 I0519 14:29:50.834289 6 log.go:172] (0xc0019c0280) (3) Data frame handling I0519 14:29:50.834305 6 log.go:172] (0xc0021ed1e0) Data frame received for 5 I0519 14:29:50.834324 6 log.go:172] (0xc00126dd60) (5) Data frame handling I0519 14:29:50.834346 6 log.go:172] (0xc0019c0280) (3) Data frame sent I0519 14:29:50.834356 6 log.go:172] (0xc0021ed1e0) Data frame received for 3 I0519 14:29:50.834365 6 log.go:172] (0xc0019c0280) (3) Data frame handling I0519 14:29:50.835830 6 log.go:172] (0xc0021ed1e0) Data frame received for 1 I0519 14:29:50.835849 6 log.go:172] (0xc0003a0d20) (1) Data frame handling I0519 14:29:50.835862 6 log.go:172] (0xc0003a0d20) (1) Data frame sent I0519 14:29:50.835877 6 log.go:172] (0xc0021ed1e0) (0xc0003a0d20) Stream removed, broadcasting: 1 I0519 14:29:50.835901 6 log.go:172] (0xc0021ed1e0) Go away received I0519 14:29:50.836015 6 log.go:172] (0xc0021ed1e0) (0xc0003a0d20) Stream removed, broadcasting: 1 I0519 14:29:50.836044 6 log.go:172] (0xc0021ed1e0) (0xc0019c0280) Stream removed, broadcasting: 3 I0519 14:29:50.836061 6 log.go:172] (0xc0021ed1e0) (0xc00126dd60) Stream removed, broadcasting: 5 May 19 14:29:50.836: INFO: 
Exec stderr: "" May 19 14:29:50.836: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4847 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 14:29:50.836: INFO: >>> kubeConfig: /root/.kube/config I0519 14:29:50.882500 6 log.go:172] (0xc0021edb80) (0xc0003a1860) Create stream I0519 14:29:50.882535 6 log.go:172] (0xc0021edb80) (0xc0003a1860) Stream added, broadcasting: 1 I0519 14:29:50.886413 6 log.go:172] (0xc0021edb80) Reply frame received for 1 I0519 14:29:50.886485 6 log.go:172] (0xc0021edb80) (0xc0010abe00) Create stream I0519 14:29:50.886503 6 log.go:172] (0xc0021edb80) (0xc0010abe00) Stream added, broadcasting: 3 I0519 14:29:50.887740 6 log.go:172] (0xc0021edb80) Reply frame received for 3 I0519 14:29:50.887804 6 log.go:172] (0xc0021edb80) (0xc000aa8e60) Create stream I0519 14:29:50.887824 6 log.go:172] (0xc0021edb80) (0xc000aa8e60) Stream added, broadcasting: 5 I0519 14:29:50.888929 6 log.go:172] (0xc0021edb80) Reply frame received for 5 I0519 14:29:50.954157 6 log.go:172] (0xc0021edb80) Data frame received for 5 I0519 14:29:50.954207 6 log.go:172] (0xc000aa8e60) (5) Data frame handling I0519 14:29:50.954233 6 log.go:172] (0xc0021edb80) Data frame received for 3 I0519 14:29:50.954242 6 log.go:172] (0xc0010abe00) (3) Data frame handling I0519 14:29:50.954252 6 log.go:172] (0xc0010abe00) (3) Data frame sent I0519 14:29:50.954267 6 log.go:172] (0xc0021edb80) Data frame received for 3 I0519 14:29:50.954280 6 log.go:172] (0xc0010abe00) (3) Data frame handling I0519 14:29:50.955497 6 log.go:172] (0xc0021edb80) Data frame received for 1 I0519 14:29:50.955515 6 log.go:172] (0xc0003a1860) (1) Data frame handling I0519 14:29:50.955526 6 log.go:172] (0xc0003a1860) (1) Data frame sent I0519 14:29:50.955535 6 log.go:172] (0xc0021edb80) (0xc0003a1860) Stream removed, broadcasting: 1 I0519 14:29:50.955566 6 log.go:172] (0xc0021edb80) Go away received 
I0519 14:29:50.955642 6 log.go:172] (0xc0021edb80) (0xc0003a1860) Stream removed, broadcasting: 1 I0519 14:29:50.955665 6 log.go:172] (0xc0021edb80) (0xc0010abe00) Stream removed, broadcasting: 3 I0519 14:29:50.955678 6 log.go:172] (0xc0021edb80) (0xc000aa8e60) Stream removed, broadcasting: 5 May 19 14:29:50.955: INFO: Exec stderr: "" May 19 14:29:50.955: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4847 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 14:29:50.955: INFO: >>> kubeConfig: /root/.kube/config I0519 14:29:50.995584 6 log.go:172] (0xc0032bb080) (0xc000aa9400) Create stream I0519 14:29:50.995609 6 log.go:172] (0xc0032bb080) (0xc000aa9400) Stream added, broadcasting: 1 I0519 14:29:50.998067 6 log.go:172] (0xc0032bb080) Reply frame received for 1 I0519 14:29:50.998120 6 log.go:172] (0xc0032bb080) (0xc000aa95e0) Create stream I0519 14:29:50.998135 6 log.go:172] (0xc0032bb080) (0xc000aa95e0) Stream added, broadcasting: 3 I0519 14:29:50.999021 6 log.go:172] (0xc0032bb080) Reply frame received for 3 I0519 14:29:50.999051 6 log.go:172] (0xc0032bb080) (0xc000aa9680) Create stream I0519 14:29:50.999057 6 log.go:172] (0xc0032bb080) (0xc000aa9680) Stream added, broadcasting: 5 I0519 14:29:50.999816 6 log.go:172] (0xc0032bb080) Reply frame received for 5 I0519 14:29:51.077967 6 log.go:172] (0xc0032bb080) Data frame received for 5 I0519 14:29:51.078012 6 log.go:172] (0xc000aa9680) (5) Data frame handling I0519 14:29:51.078036 6 log.go:172] (0xc0032bb080) Data frame received for 3 I0519 14:29:51.078047 6 log.go:172] (0xc000aa95e0) (3) Data frame handling I0519 14:29:51.078059 6 log.go:172] (0xc000aa95e0) (3) Data frame sent I0519 14:29:51.078070 6 log.go:172] (0xc0032bb080) Data frame received for 3 I0519 14:29:51.078081 6 log.go:172] (0xc000aa95e0) (3) Data frame handling I0519 14:29:51.080365 6 log.go:172] (0xc0032bb080) Data frame received for 1 
I0519 14:29:51.080379 6 log.go:172] (0xc000aa9400) (1) Data frame handling I0519 14:29:51.080387 6 log.go:172] (0xc000aa9400) (1) Data frame sent I0519 14:29:51.080396 6 log.go:172] (0xc0032bb080) (0xc000aa9400) Stream removed, broadcasting: 1 I0519 14:29:51.080484 6 log.go:172] (0xc0032bb080) (0xc000aa9400) Stream removed, broadcasting: 1 I0519 14:29:51.080500 6 log.go:172] (0xc0032bb080) (0xc000aa95e0) Stream removed, broadcasting: 3 I0519 14:29:51.080515 6 log.go:172] (0xc0032bb080) (0xc000aa9680) Stream removed, broadcasting: 5 May 19 14:29:51.080: INFO: Exec stderr: "" May 19 14:29:51.080: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4847 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 14:29:51.080: INFO: >>> kubeConfig: /root/.kube/config I0519 14:29:51.080674 6 log.go:172] (0xc0032bb080) Go away received I0519 14:29:51.110501 6 log.go:172] (0xc002b1c000) (0xc000aa9b80) Create stream I0519 14:29:51.110523 6 log.go:172] (0xc002b1c000) (0xc000aa9b80) Stream added, broadcasting: 1 I0519 14:29:51.113438 6 log.go:172] (0xc002b1c000) Reply frame received for 1 I0519 14:29:51.113491 6 log.go:172] (0xc002b1c000) (0xc00126dea0) Create stream I0519 14:29:51.113501 6 log.go:172] (0xc002b1c000) (0xc00126dea0) Stream added, broadcasting: 3 I0519 14:29:51.114513 6 log.go:172] (0xc002b1c000) Reply frame received for 3 I0519 14:29:51.114559 6 log.go:172] (0xc002b1c000) (0xc0019c0320) Create stream I0519 14:29:51.114575 6 log.go:172] (0xc002b1c000) (0xc0019c0320) Stream added, broadcasting: 5 I0519 14:29:51.115723 6 log.go:172] (0xc002b1c000) Reply frame received for 5 I0519 14:29:51.180423 6 log.go:172] (0xc002b1c000) Data frame received for 5 I0519 14:29:51.180491 6 log.go:172] (0xc0019c0320) (5) Data frame handling I0519 14:29:51.180527 6 log.go:172] (0xc002b1c000) Data frame received for 3 I0519 14:29:51.180572 6 log.go:172] (0xc00126dea0) (3) 
Data frame handling I0519 14:29:51.180593 6 log.go:172] (0xc00126dea0) (3) Data frame sent I0519 14:29:51.180627 6 log.go:172] (0xc002b1c000) Data frame received for 3 I0519 14:29:51.180644 6 log.go:172] (0xc00126dea0) (3) Data frame handling I0519 14:29:51.182378 6 log.go:172] (0xc002b1c000) Data frame received for 1 I0519 14:29:51.182425 6 log.go:172] (0xc000aa9b80) (1) Data frame handling I0519 14:29:51.182459 6 log.go:172] (0xc000aa9b80) (1) Data frame sent I0519 14:29:51.182481 6 log.go:172] (0xc002b1c000) (0xc000aa9b80) Stream removed, broadcasting: 1 I0519 14:29:51.182571 6 log.go:172] (0xc002b1c000) Go away received I0519 14:29:51.182611 6 log.go:172] (0xc002b1c000) (0xc000aa9b80) Stream removed, broadcasting: 1 I0519 14:29:51.182640 6 log.go:172] (0xc002b1c000) (0xc00126dea0) Stream removed, broadcasting: 3 I0519 14:29:51.182659 6 log.go:172] (0xc002b1c000) (0xc0019c0320) Stream removed, broadcasting: 5 May 19 14:29:51.182: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 19 14:29:51.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4847" for this suite. 
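Each ExecWithOptions entry above records a command plus the namespace, pod, and container it targets. Outside the Go framework, roughly the same invocation maps onto a kubectl command line; the argv builder below is a hypothetical illustration of that mapping, not code from the suite:

```python
def kubectl_exec_argv(namespace, pod, container, command):
    """Build the kubectl argv equivalent of an ExecWithOptions call.

    Example fields from the log: Namespace=e2e-kubelet-etc-hosts-4847,
    PodName=test-pod, ContainerName=busybox-1, Command=[cat /etc/hosts].
    """
    argv = ["kubectl", "exec", pod, "-n", namespace, "-c", container, "--"]
    argv += list(command)  # everything after "--" is passed to the container verbatim
    return argv
```

For instance, `kubectl_exec_argv("e2e-kubelet-etc-hosts-4847", "test-pod", "busybox-1", ["cat", "/etc/hosts"])` yields the argv you could hand to `subprocess.run` to reproduce the framework's `cat /etc/hosts` check by hand.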
May 19 14:30:31.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 19 14:30:31.363: INFO: namespace e2e-kubelet-etc-hosts-4847 deletion completed in 40.17687536s

• [SLOW TEST:51.436 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
May 19 14:30:31.364: INFO: Running AfterSuite actions on all nodes
May 19 14:30:31.364: INFO: Running AfterSuite actions on node 1
May 19 14:30:31.364: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 5687.017 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS
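The closing footer carries the run totals in a fixed format, which makes it easy to extract counts programmatically, e.g. for a CI dashboard. A sketch of such a parser (the regexes assume the Ginkgo v1 footer format shown in this log):

```python
import re

# "Ran 215 of 4412 Specs in 5687.017 seconds"
SUMMARY = re.compile(r"Ran (?P<ran>\d+) of (?P<total>\d+) Specs in (?P<secs>[\d.]+) seconds")
# "215 Passed | 0 Failed | 0 Pending | 4197 Skipped"
RESULTS = re.compile(r"(?P<passed>\d+) Passed \| (?P<failed>\d+) Failed \| "
                     r"(?P<pending>\d+) Pending \| (?P<skipped>\d+) Skipped")


def parse_ginkgo_summary(text):
    """Extract run totals from a Ginkgo v1 suite footer; None if not found."""
    m = SUMMARY.search(text)
    r = RESULTS.search(text)
    if not (m and r):
        return None
    return {
        "ran": int(m.group("ran")),
        "total": int(m.group("total")),
        "seconds": float(m.group("secs")),
        "passed": int(r.group("passed")),
        "failed": int(r.group("failed")),
        "pending": int(r.group("pending")),
        "skipped": int(r.group("skipped")),
    }
```

On this log it would report 215 specs run out of 4412, all passed, with 4197 skipped.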