I0522 12:55:44.563188 6 e2e.go:243] Starting e2e run "b25b5038-1534-4f18-a180-9bd1f494280e" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590152143 - Will randomize all specs
Will run 215 of 4412 specs

May 22 12:55:44.759: INFO: >>> kubeConfig: /root/.kube/config
May 22 12:55:44.763: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 22 12:55:44.804: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 22 12:55:44.831: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 22 12:55:44.831: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 22 12:55:44.831: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 22 12:55:44.838: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 22 12:55:44.838: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 22 12:55:44.838: INFO: e2e test version: v1.15.11
May 22 12:55:44.839: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 22 12:55:44.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
May 22 12:55:44.922: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May 22 12:55:49.959: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 22 12:55:51.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5181" for this suite.
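The adoption/release mechanics above hinge on label selectors and ownerReferences. A minimal Go sketch, against the k8s.io/api types this suite is built on, of the two objects involved; the names, image, and replica count are illustrative, not the suite's actual values:

// Sketch of the objects exercised by the ReplicaSet adoption/release test.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "pod-adoption-release"}

	// A bare pod created first, with no ownerReferences.
	orphan := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx:1.15-alpine"}},
		},
	}

	// A ReplicaSet whose selector matches the pod's labels. On creation its
	// controller adopts the orphan (adds an ownerReference pointing at the
	// ReplicaSet); if the pod's "name" label is later changed so the selector
	// no longer matches, the controller releases the pod again.
	replicas := int32(1)
	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphan.Spec,
			},
		},
	}

	for _, obj := range []interface{}{orphan, rs} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}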
May 22 12:56:13.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:56:13.147: INFO: namespace replicaset-5181 deletion completed in 22.10782473s • [SLOW TEST:28.309 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 12:56:13.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-8e8aaab7-5232-4fee-86f0-9fb4ffe3c2e4 STEP: Creating a pod to test consume secrets May 22 12:56:13.280: INFO: Waiting up to 5m0s for pod "pod-secrets-58f72ad9-cf3a-4ee5-b4c5-ef0574e54873" in namespace "secrets-4875" to be "success or failure" May 22 12:56:13.284: INFO: Pod "pod-secrets-58f72ad9-cf3a-4ee5-b4c5-ef0574e54873": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313887ms May 22 12:56:15.306: INFO: Pod "pod-secrets-58f72ad9-cf3a-4ee5-b4c5-ef0574e54873": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025973508s May 22 12:56:17.337: INFO: Pod "pod-secrets-58f72ad9-cf3a-4ee5-b4c5-ef0574e54873": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056792053s STEP: Saw pod success May 22 12:56:17.337: INFO: Pod "pod-secrets-58f72ad9-cf3a-4ee5-b4c5-ef0574e54873" satisfied condition "success or failure" May 22 12:56:17.339: INFO: Trying to get logs from node iruya-worker pod pod-secrets-58f72ad9-cf3a-4ee5-b4c5-ef0574e54873 container secret-volume-test: STEP: delete the pod May 22 12:56:17.481: INFO: Waiting for pod pod-secrets-58f72ad9-cf3a-4ee5-b4c5-ef0574e54873 to disappear May 22 12:56:17.518: INFO: Pod pod-secrets-58f72ad9-cf3a-4ee5-b4c5-ef0574e54873 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 12:56:17.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4875" for this suite. 
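The "mappings and Item Mode" in the test name above refer to SecretVolumeSource.Items and KeyToPath.Mode. A minimal sketch of a pod consuming a secret that way; the secret name, key, path, and mode here are illustrative:

// Sketch: mount one secret key at a chosen path with an explicit file mode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // file mode for the mapped item

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map",
						// Without Items, every key becomes a file named after
						// the key; Items maps selected keys to chosen paths.
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &mode,
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}

	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}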
May 22 12:56:23.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:56:23.684: INFO: namespace secrets-4875 deletion completed in 6.16373573s • [SLOW TEST:10.537 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 12:56:23.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0522 12:56:35.687624 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 22 12:56:35.687: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 12:56:35.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9339" for this suite. 
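The garbage-collector scenario above gives half the pods two ownerReferences. A sketch of that ownership shape, with illustrative names and UIDs: deleting simpletest-rc-to-be-deleted in the foreground must leave such pods in place, because their second owner is still valid.

// Sketch: a pod with two owners survives deletion of one of them.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	ctrl := true
	block := true

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "simpletest-pod",
			OwnerReferences: []metav1.OwnerReference{
				{
					// Owner being deleted in the foreground: it gets a
					// deletionTimestamp plus the foregroundDeletion finalizer
					// and waits for its dependents.
					APIVersion:         "v1",
					Kind:               "ReplicationController",
					Name:               "simpletest-rc-to-be-deleted",
					UID:                types.UID("00000000-0000-0000-0000-000000000001"),
					Controller:         &ctrl,
					BlockOwnerDeletion: &block,
				},
				{
					// Second, still-valid owner: while this reference
					// resolves, the GC must not delete the pod.
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "simpletest-rc-to-stay",
					UID:        types.UID("00000000-0000-0000-0000-000000000002"),
				},
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.15-alpine"}},
		},
	}

	// Foreground deletion of the first owner would be issued with:
	policy := metav1.DeletePropagationForeground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	for _, obj := range []interface{}{pod, opts} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}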
May 22 12:56:45.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:56:45.781: INFO: namespace gc-9339 deletion completed in 10.091018799s • [SLOW TEST:22.096 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 12:56:45.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 22 12:56:46.040: INFO: Waiting up to 5m0s for pod "pod-8df32501-7359-40de-88ff-525a4c2a6948" in namespace "emptydir-6914" to be "success or failure" May 22 12:56:46.103: INFO: Pod "pod-8df32501-7359-40de-88ff-525a4c2a6948": Phase="Pending", Reason="", readiness=false. Elapsed: 63.808266ms May 22 12:56:48.108: INFO: Pod "pod-8df32501-7359-40de-88ff-525a4c2a6948": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067910575s May 22 12:56:50.111: INFO: Pod "pod-8df32501-7359-40de-88ff-525a4c2a6948": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071326095s STEP: Saw pod success May 22 12:56:50.111: INFO: Pod "pod-8df32501-7359-40de-88ff-525a4c2a6948" satisfied condition "success or failure" May 22 12:56:50.113: INFO: Trying to get logs from node iruya-worker pod pod-8df32501-7359-40de-88ff-525a4c2a6948 container test-container: STEP: delete the pod May 22 12:56:50.176: INFO: Waiting for pod pod-8df32501-7359-40de-88ff-525a4c2a6948 to disappear May 22 12:56:50.180: INFO: Pod pod-8df32501-7359-40de-88ff-525a4c2a6948 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 12:56:50.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6914" for this suite. 
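The (root,0644,default) triple in the test name above means: write as root, expect file mode 0644, use the default EmptyDir medium. A sketch under those assumptions, with an illustrative busybox command standing in for the suite's mount-test image:

// Sketch: EmptyDir on the default (node-disk) medium, written as root.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	root := int64(0)

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Medium "" (StorageMediumDefault) selects the node's
				// filesystem; StorageMediumMemory would select tmpfs instead.
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				Command: []string{"sh", "-c",
					"echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &root},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}

	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}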
May 22 12:56:56.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:56:56.272: INFO: namespace emptydir-6914 deletion completed in 6.089243346s • [SLOW TEST:10.491 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 12:56:56.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4466 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 22 12:56:56.366: INFO: Found 0 stateful pods, waiting for 3 May 22 12:57:06.372: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 22 12:57:06.372: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 22 12:57:06.372: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false May 22 12:57:16.372: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 22 12:57:16.372: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 22 12:57:16.372: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 22 12:57:16.399: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 22 12:57:26.454: INFO: Updating stateful set ss2 May 22 12:57:26.524: INFO: Waiting for Pod statefulset-4466/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 22 12:57:37.883: INFO: Found 2 stateful pods, waiting for 3 May 22 12:57:47.888: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 22 12:57:47.889: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 22 12:57:47.889: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - 
Ready=true STEP: Performing a phased rolling update May 22 12:57:47.914: INFO: Updating stateful set ss2 May 22 12:57:48.013: INFO: Waiting for Pod statefulset-4466/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 22 12:57:58.040: INFO: Updating stateful set ss2 May 22 12:57:58.058: INFO: Waiting for StatefulSet statefulset-4466/ss2 to complete update May 22 12:57:58.058: INFO: Waiting for Pod statefulset-4466/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 22 12:58:08.067: INFO: Deleting all statefulset in ns statefulset-4466 May 22 12:58:08.071: INFO: Scaling statefulset ss2 to 0 May 22 12:58:38.116: INFO: Waiting for statefulset status.replicas updated to 0 May 22 12:58:38.119: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 12:58:38.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4466" for this suite. May 22 12:58:44.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 12:58:44.224: INFO: namespace statefulset-4466 deletion completed in 6.086332399s • [SLOW TEST:107.951 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 12:58:44.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 22 12:58:44.274: INFO: PodSpec: initContainers in spec.initContainers May 22 12:59:38.519: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fa369e22-bb94-4ecc-a46d-e0a6c5564fe5", GenerateName:"", Namespace:"init-container-2365", SelfLink:"/api/v1/namespaces/init-container-2365/pods/pod-init-fa369e22-bb94-4ecc-a46d-e0a6c5564fe5", UID:"4cb32f1e-5a8f-47ea-a079-aacffb9e7348", ResourceVersion:"12289835", Generation:0, 
CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725749124, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"274201632"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tl8cp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0017c5d80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tl8cp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tl8cp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, 
StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tl8cp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001980c08), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001a1a600), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001980c90)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001980cb0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001980cb8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001980cbc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725749124, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725749124, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725749124, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725749124, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.154", StartTime:(*v1.Time)(0xc001fec780), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025dd880)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025dd8f0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://45bd1d339bc78dd7c8c5d71d13cf0b31b3c44efe5bc23070ce91a061303bca21"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001fec7c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001fec7a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 12:59:38.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2365" for this suite. 
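The pod dumped above is simple despite the verbose output: two init containers, the first of which always fails. A sketch of the same shape with an illustrative pod name; with restartPolicy Always the kubelet keeps restarting init1 with backoff (the dump shows RestartCount:3), so neither init2 nor the app container run1 ever starts.

// Sketch of the failing-init-container pod from the dump above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-init-example",
			Labels: map[string]string{"name": "foo"},
		},
		Spec: corev1.PodSpec{
			// Always means failed init containers are retried forever;
			// init containers run in order, so init2 waits on init1.
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}

	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}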
May 22 13:00:06.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:00:06.700: INFO: namespace init-container-2365 deletion completed in 28.144847247s • [SLOW TEST:82.476 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:00:06.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-1898 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1898 STEP: Deleting pre-stop pod May 22 13:00:19.891: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:00:19.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1898" for this suite. 
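The "prestop": 1 counter above is written by the server pod when the tester pod's preStop hook fires during deletion. A sketch of a pod carrying such a hook; the hook command and endpoint are illustrative, and the handler type is corev1.Handler in the v1.15-era API this log comes from (newer releases call it LifecycleHandler):

// Sketch: a preStop exec hook that notifies a peer before the pod dies.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox:1.29",
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container after the delete is issued,
					// before the TERM signal; the kubelet waits for it (up to
					// the grace period) before killing the container.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"wget", "-O-", "http://server:8080/write"},
						},
					},
				},
			}},
		},
	}

	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}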
May 22 13:00:57.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:00:58.014: INFO: namespace prestop-1898 deletion completed in 38.106071045s • [SLOW TEST:51.313 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:00:58.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 13:00:58.143: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 5.565516ms)
May 22 13:00:58.147: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.595702ms)
May 22 13:00:58.150: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.509465ms)
May 22 13:00:58.154: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.266802ms)
May 22 13:00:58.157: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.10462ms)
May 22 13:00:58.160: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.86065ms)
May 22 13:00:58.163: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.473021ms)
May 22 13:00:58.167: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.517781ms)
May 22 13:00:58.170: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.541939ms)
May 22 13:00:58.173: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.734583ms)
May 22 13:00:58.176: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.656199ms)
May 22 13:00:58.178: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.533364ms)
May 22 13:00:58.181: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.781309ms)
May 22 13:00:58.184: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.700825ms)
May 22 13:00:58.187: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.123801ms)
May 22 13:00:58.190: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.324224ms)
May 22 13:00:58.194: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.434832ms)
May 22 13:00:58.197: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.348415ms)
May 22 13:00:58.200: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.69759ms)
May 22 13:00:58.203: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 2.754793ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:00:58.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9326" for this suite. May 22 13:01:04.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:01:04.295: INFO: namespace proxy-9326 deletion completed in 6.08883643s • [SLOW TEST:6.281 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:01:04.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-72qq STEP: Creating a pod to test atomic-volume-subpath May 22 13:01:04.420: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-72qq" in namespace "subpath-1396" to be "success or failure" May 22 13:01:04.439: INFO: Pod "pod-subpath-test-secret-72qq": Phase="Pending", Reason="", readiness=false. Elapsed: 19.724647ms May 22 13:01:06.443: INFO: Pod "pod-subpath-test-secret-72qq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02326916s May 22 13:01:08.447: INFO: Pod "pod-subpath-test-secret-72qq": Phase="Running", Reason="", readiness=true. Elapsed: 4.027043624s May 22 13:01:10.451: INFO: Pod "pod-subpath-test-secret-72qq": Phase="Running", Reason="", readiness=true. Elapsed: 6.031240205s May 22 13:01:12.454: INFO: Pod "pod-subpath-test-secret-72qq": Phase="Running", Reason="", readiness=true. Elapsed: 8.034614155s May 22 13:01:14.459: INFO: Pod "pod-subpath-test-secret-72qq": Phase="Running", Reason="", readiness=true. Elapsed: 10.03916588s May 22 13:01:16.463: INFO: Pod "pod-subpath-test-secret-72qq": Phase="Running", Reason="", readiness=true. Elapsed: 12.043645773s May 22 13:01:18.467: INFO: Pod "pod-subpath-test-secret-72qq": Phase="Running", Reason="", readiness=true. Elapsed: 14.047342018s May 22 13:01:20.471: INFO: Pod "pod-subpath-test-secret-72qq": Phase="Running", Reason="", readiness=true. Elapsed: 16.051501043s May 22 13:01:22.475: INFO: Pod "pod-subpath-test-secret-72qq": Phase="Running", Reason="", readiness=true. Elapsed: 18.055672537s May 22 13:01:24.479: INFO: Pod "pod-subpath-test-secret-72qq": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.059715263s May 22 13:01:26.484: INFO: Pod "pod-subpath-test-secret-72qq": Phase="Running", Reason="", readiness=true. Elapsed: 22.064052259s May 22 13:01:28.489: INFO: Pod "pod-subpath-test-secret-72qq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.069309274s STEP: Saw pod success May 22 13:01:28.489: INFO: Pod "pod-subpath-test-secret-72qq" satisfied condition "success or failure" May 22 13:01:28.492: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-72qq container test-container-subpath-secret-72qq: STEP: delete the pod May 22 13:01:28.520: INFO: Waiting for pod pod-subpath-test-secret-72qq to disappear May 22 13:01:28.551: INFO: Pod pod-subpath-test-secret-72qq no longer exists STEP: Deleting pod pod-subpath-test-secret-72qq May 22 13:01:28.551: INFO: Deleting pod "pod-subpath-test-secret-72qq" in namespace "subpath-1396" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:01:28.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1396" for this suite. May 22 13:01:34.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:01:34.641: INFO: namespace subpath-1396 deletion completed in 6.083904807s • [SLOW TEST:30.346 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:01:34.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:01:38.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5747" for this suite. 
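In the [sig-storage] Subpath test above, the container mounts a single path from inside an atomic-writer (secret) volume via VolumeMount.SubPath rather than the whole volume. A sketch with illustrative names, key, and paths:

// Sketch: subPath mount projecting one entry of a secret volume.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /test-subpath && sleep 30"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-subpath", // the container sees one file here...
					SubPath:   "secret-key",    // ...namely this path inside the volume
				}},
			}},
		},
	}

	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}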
May 22 13:02:24.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:02:24.820: INFO: namespace kubelet-test-5747 deletion completed in 46.081615168s • [SLOW TEST:50.178 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:02:24.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 22 13:02:32.953: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 13:02:32.963: INFO: Pod pod-with-poststart-exec-hook still exists May 22 13:02:34.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 13:02:34.967: INFO: Pod pod-with-poststart-exec-hook still exists May 22 13:02:36.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 13:02:36.968: INFO: Pod pod-with-poststart-exec-hook still exists May 22 13:02:38.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 13:02:38.968: INFO: Pod pod-with-poststart-exec-hook still exists May 22 13:02:40.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 13:02:40.968: INFO: Pod pod-with-poststart-exec-hook still exists May 22 13:02:42.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 13:02:42.968: INFO: Pod pod-with-poststart-exec-hook still exists May 22 13:02:44.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 13:02:44.967: INFO: Pod pod-with-poststart-exec-hook still exists May 22 13:02:46.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 13:02:46.968: INFO: Pod pod-with-poststart-exec-hook still exists May 22 13:02:48.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 13:02:48.967: INFO: Pod pod-with-poststart-exec-hook still exists May 22 13:02:50.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 13:02:50.968: INFO: Pod pod-with-poststart-exec-hook still exists May 22 13:02:52.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 
22 13:02:52.967: INFO: Pod pod-with-poststart-exec-hook still exists May 22 13:02:54.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 13:02:54.966: INFO: Pod pod-with-poststart-exec-hook still exists May 22 13:02:56.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 13:02:56.968: INFO: Pod pod-with-poststart-exec-hook still exists May 22 13:02:58.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 13:02:58.983: INFO: Pod pod-with-poststart-exec-hook still exists May 22 13:03:00.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 13:03:00.967: INFO: Pod pod-with-poststart-exec-hook still exists May 22 13:03:02.963: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 22 13:03:02.968: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:03:02.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6497" for this suite. May 22 13:03:25.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:03:25.074: INFO: namespace container-lifecycle-hook-6497 deletion completed in 22.102550966s • [SLOW TEST:60.254 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:03:25.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-0fbdf9b9-69c2-477a-bd90-06c1ca6c711c STEP: Creating the pod STEP: Updating configmap configmap-test-upd-0fbdf9b9-69c2-477a-bd90-06c1ca6c711c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:04:45.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1908" for this suite. 
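The ConfigMap test above never restarts its pod: the kubelet refreshes configMap volumes in place with an atomic symlink swap, so an update to the ConfigMap eventually appears inside the running container, which is what the "waiting to observe update in volume" phase polls for. A sketch with illustrative names, key, and values:

// Sketch: a ConfigMap volume whose contents update under a running pod.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		Data:       map[string]string{"data-1": "value-1"}, // later updated, e.g. to "value-2"
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "configmap-volume-test",
				Image: "busybox:1.29",
				// Poll the mounted file; its content flips once the update lands.
				Command: []string{"sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}

	for _, obj := range []interface{}{cm, pod} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}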
May 22 13:05:07.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:05:07.673: INFO: namespace configmap-1908 deletion completed in 22.088475088s • [SLOW TEST:102.598 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:05:07.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-3b0fe05d-b9d9-433a-a3b0-a590cca3f2c6 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:05:07.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7158" for this suite. May 22 13:05:13.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:05:13.852: INFO: namespace secrets-7158 deletion completed in 6.11910131s • [SLOW TEST:6.179 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:05:13.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6963 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-6963 STEP: Waiting until all 
stateful set ss replicas will be running in namespace statefulset-6963 May 22 13:05:13.996: INFO: Found 0 stateful pods, waiting for 1 May 22 13:05:24.001: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 22 13:05:24.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 13:05:26.685: INFO: stderr: "I0522 13:05:26.527914 38 log.go:172] (0xc000140e70) (0xc00051caa0) Create stream\nI0522 13:05:26.528004 38 log.go:172] (0xc000140e70) (0xc00051caa0) Stream added, broadcasting: 1\nI0522 13:05:26.530998 38 log.go:172] (0xc000140e70) Reply frame received for 1\nI0522 13:05:26.531028 38 log.go:172] (0xc000140e70) (0xc0004fe000) Create stream\nI0522 13:05:26.531036 38 log.go:172] (0xc000140e70) (0xc0004fe000) Stream added, broadcasting: 3\nI0522 13:05:26.531908 38 log.go:172] (0xc000140e70) Reply frame received for 3\nI0522 13:05:26.531952 38 log.go:172] (0xc000140e70) (0xc000528000) Create stream\nI0522 13:05:26.531965 38 log.go:172] (0xc000140e70) (0xc000528000) Stream added, broadcasting: 5\nI0522 13:05:26.532826 38 log.go:172] (0xc000140e70) Reply frame received for 5\nI0522 13:05:26.636534 38 log.go:172] (0xc000140e70) Data frame received for 5\nI0522 13:05:26.636564 38 log.go:172] (0xc000528000) (5) Data frame handling\nI0522 13:05:26.636580 38 log.go:172] (0xc000528000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0522 13:05:26.675457 38 log.go:172] (0xc000140e70) Data frame received for 3\nI0522 13:05:26.675499 38 log.go:172] (0xc0004fe000) (3) Data frame handling\nI0522 13:05:26.675535 38 log.go:172] (0xc0004fe000) (3) Data frame sent\nI0522 13:05:26.675549 38 log.go:172] (0xc000140e70) Data frame received for 3\nI0522 13:05:26.675560 38 log.go:172] (0xc0004fe000) (3) Data frame handling\nI0522 13:05:26.675708 38 log.go:172] (0xc000140e70) Data frame received for 5\nI0522 13:05:26.675736 38 log.go:172] (0xc000528000) (5) Data frame handling\nI0522 13:05:26.678106 38 log.go:172] (0xc000140e70) Data frame received for 1\nI0522 13:05:26.678125 38 log.go:172] (0xc00051caa0) (1) Data frame handling\nI0522 13:05:26.678138 38 log.go:172] (0xc00051caa0) (1) Data frame sent\nI0522 13:05:26.678150 38 log.go:172] (0xc000140e70) (0xc00051caa0) Stream removed, broadcasting: 1\nI0522 13:05:26.678168 38 log.go:172] (0xc000140e70) Go away received\nI0522 13:05:26.678760 38 log.go:172] (0xc000140e70) (0xc00051caa0) Stream removed, broadcasting: 1\nI0522 13:05:26.678794 38 log.go:172] (0xc000140e70) (0xc0004fe000) Stream removed, broadcasting: 3\nI0522 13:05:26.678806 38 log.go:172] (0xc000140e70) (0xc000528000) Stream removed, broadcasting: 5\n" May 22 13:05:26.686: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 13:05:26.686: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 13:05:26.689: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 22 13:05:36.695: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 22 13:05:36.695: INFO: Waiting for statefulset status.replicas updated to 0 May 22 13:05:36.709: INFO: POD NODE PHASE GRACE CONDITIONS May 22 13:05:36.709: INFO: ss-0 iruya-worker2 Running [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:14 +0000 UTC }] May 22 13:05:36.709: INFO: May 22 13:05:36.709: INFO: StatefulSet ss has not reached scale 3, at 1 May 22 13:05:37.714: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99431063s May 22 13:05:38.746: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989907756s May 22 13:05:39.812: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.957513057s May 22 13:05:40.818: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.891719235s May 22 13:05:41.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.885798325s May 22 13:05:42.828: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.880737217s May 22 13:05:43.833: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.87551683s May 22 13:05:44.837: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.870181834s May 22 13:05:45.844: INFO: Verifying statefulset ss doesn't scale past 3 for another 866.613265ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6963 May 22 13:05:46.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:05:47.114: INFO: stderr: "I0522 13:05:47.007476 70 log.go:172] (0xc000aba0b0) (0xc000918780) Create stream\nI0522 13:05:47.007540 70 log.go:172] (0xc000aba0b0) (0xc000918780) Stream added, broadcasting: 1\nI0522 13:05:47.009632 70 log.go:172] (0xc000aba0b0) Reply frame received for 1\nI0522 13:05:47.009683 70 log.go:172] (0xc000aba0b0) (0xc000436820) Create stream\nI0522 13:05:47.009698 70 log.go:172] (0xc000aba0b0) (0xc000436820) Stream added, broadcasting: 3\nI0522 13:05:47.010495 70 log.go:172] (0xc000aba0b0) Reply frame received for 3\nI0522 13:05:47.010525 70 log.go:172] (0xc000aba0b0) (0xc000918820) Create stream\nI0522 13:05:47.010534 70 log.go:172] (0xc000aba0b0) (0xc000918820) Stream added, broadcasting: 5\nI0522 13:05:47.011461 70 log.go:172] (0xc000aba0b0) Reply frame received for 5\nI0522 13:05:47.108032 70 log.go:172] (0xc000aba0b0) Data frame received for 3\nI0522 13:05:47.108078 70 log.go:172] (0xc000436820) (3) Data frame handling\nI0522 13:05:47.108092 70 log.go:172] (0xc000436820) (3) Data frame sent\nI0522 13:05:47.108102 70 log.go:172] (0xc000aba0b0) Data frame received for 3\nI0522 13:05:47.108111 70 log.go:172] (0xc000436820) (3) Data frame handling\nI0522 13:05:47.108178 70 log.go:172] (0xc000aba0b0) Data frame received for 5\nI0522 13:05:47.108228 70 log.go:172] (0xc000918820) (5) Data frame handling\nI0522 13:05:47.108253 70 log.go:172] (0xc000918820) (5) Data frame sent\nI0522 13:05:47.108263 70 log.go:172] (0xc000aba0b0) Data frame received for 5\nI0522 13:05:47.108269 70 log.go:172] (0xc000918820) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0522 13:05:47.110209 70 log.go:172] (0xc000aba0b0) Data frame received for 1\nI0522 13:05:47.110230 70 log.go:172] (0xc000918780) (1) Data frame handling\nI0522 13:05:47.110243 70 log.go:172] 
(0xc000918780) (1) Data frame sent\nI0522 13:05:47.110264 70 log.go:172] (0xc000aba0b0) (0xc000918780) Stream removed, broadcasting: 1\nI0522 13:05:47.110570 70 log.go:172] (0xc000aba0b0) (0xc000918780) Stream removed, broadcasting: 1\nI0522 13:05:47.110585 70 log.go:172] (0xc000aba0b0) (0xc000436820) Stream removed, broadcasting: 3\nI0522 13:05:47.110723 70 log.go:172] (0xc000aba0b0) (0xc000918820) Stream removed, broadcasting: 5\n" May 22 13:05:47.114: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 13:05:47.114: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 13:05:47.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:05:47.372: INFO: stderr: "I0522 13:05:47.250469 90 log.go:172] (0xc0001ee420) (0xc00080c640) Create stream\nI0522 13:05:47.250529 90 log.go:172] (0xc0001ee420) (0xc00080c640) Stream added, broadcasting: 1\nI0522 13:05:47.253041 90 log.go:172] (0xc0001ee420) Reply frame received for 1\nI0522 13:05:47.253091 90 log.go:172] (0xc0001ee420) (0xc0003dc000) Create stream\nI0522 13:05:47.253108 90 log.go:172] (0xc0001ee420) (0xc0003dc000) Stream added, broadcasting: 3\nI0522 13:05:47.254205 90 log.go:172] (0xc0001ee420) Reply frame received for 3\nI0522 13:05:47.254233 90 log.go:172] (0xc0001ee420) (0xc0003dc0a0) Create stream\nI0522 13:05:47.254244 90 log.go:172] (0xc0001ee420) (0xc0003dc0a0) Stream added, broadcasting: 5\nI0522 13:05:47.255080 90 log.go:172] (0xc0001ee420) Reply frame received for 5\nI0522 13:05:47.359637 90 log.go:172] (0xc0001ee420) Data frame received for 5\nI0522 13:05:47.359667 90 log.go:172] (0xc0003dc0a0) (5) Data frame handling\nI0522 13:05:47.359681 90 log.go:172] (0xc0003dc0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0522 13:05:47.365998 90 log.go:172] (0xc0001ee420) Data frame received for 5\nI0522 13:05:47.366028 90 log.go:172] (0xc0003dc0a0) (5) Data frame handling\nI0522 13:05:47.366039 90 log.go:172] (0xc0003dc0a0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0522 13:05:47.366068 90 log.go:172] (0xc0001ee420) Data frame received for 3\nI0522 13:05:47.366095 90 log.go:172] (0xc0003dc000) (3) Data frame handling\nI0522 13:05:47.366115 90 log.go:172] (0xc0003dc000) (3) Data frame sent\nI0522 13:05:47.366283 90 log.go:172] (0xc0001ee420) Data frame received for 3\nI0522 13:05:47.366302 90 log.go:172] (0xc0003dc000) (3) Data frame handling\nI0522 13:05:47.366951 90 log.go:172] (0xc0001ee420) Data frame received for 5\nI0522 13:05:47.366976 90 log.go:172] (0xc0003dc0a0) (5) Data frame handling\nI0522 13:05:47.366992 90 log.go:172] (0xc0003dc0a0) (5) Data frame sent\nI0522 13:05:47.367008 90 log.go:172] (0xc0001ee420) Data frame received for 5\nI0522 13:05:47.367021 90 log.go:172] (0xc0003dc0a0) (5) Data frame handling\n+ true\nI0522 13:05:47.368396 90 log.go:172] (0xc0001ee420) Data frame received for 1\nI0522 13:05:47.368410 90 log.go:172] (0xc00080c640) (1) Data frame handling\nI0522 13:05:47.368421 90 log.go:172] (0xc00080c640) (1) Data frame sent\nI0522 13:05:47.368433 90 log.go:172] (0xc0001ee420) (0xc00080c640) Stream removed, broadcasting: 1\nI0522 13:05:47.368479 90 log.go:172] (0xc0001ee420) Go away received\nI0522 13:05:47.368790 90 log.go:172] (0xc0001ee420) (0xc00080c640) Stream removed, broadcasting: 
1\nI0522 13:05:47.368812 90 log.go:172] (0xc0001ee420) (0xc0003dc000) Stream removed, broadcasting: 3\nI0522 13:05:47.368823 90 log.go:172] (0xc0001ee420) (0xc0003dc0a0) Stream removed, broadcasting: 5\n" May 22 13:05:47.373: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 13:05:47.373: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 13:05:47.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:05:47.546: INFO: stderr: "I0522 13:05:47.490379 110 log.go:172] (0xc00091e420) (0xc0003d6820) Create stream\nI0522 13:05:47.490439 110 log.go:172] (0xc00091e420) (0xc0003d6820) Stream added, broadcasting: 1\nI0522 13:05:47.493845 110 log.go:172] (0xc00091e420) Reply frame received for 1\nI0522 13:05:47.493886 110 log.go:172] (0xc00091e420) (0xc0003d6000) Create stream\nI0522 13:05:47.493896 110 log.go:172] (0xc00091e420) (0xc0003d6000) Stream added, broadcasting: 3\nI0522 13:05:47.494683 110 log.go:172] (0xc00091e420) Reply frame received for 3\nI0522 13:05:47.494705 110 log.go:172] (0xc00091e420) (0xc0005ec280) Create stream\nI0522 13:05:47.494713 110 log.go:172] (0xc00091e420) (0xc0005ec280) Stream added, broadcasting: 5\nI0522 13:05:47.495425 110 log.go:172] (0xc00091e420) Reply frame received for 5\nI0522 13:05:47.539454 110 log.go:172] (0xc00091e420) Data frame received for 3\nI0522 13:05:47.539487 110 log.go:172] (0xc0003d6000) (3) Data frame handling\nI0522 13:05:47.539498 110 log.go:172] (0xc0003d6000) (3) Data frame sent\nI0522 13:05:47.539600 110 log.go:172] (0xc00091e420) Data frame received for 5\nI0522 13:05:47.539613 110 log.go:172] (0xc0005ec280) (5) Data frame handling\nI0522 13:05:47.539620 110 log.go:172] (0xc0005ec280) (5) Data frame sent\nI0522 13:05:47.539630 110 log.go:172] (0xc00091e420) Data frame received for 5\nI0522 13:05:47.539642 110 log.go:172] (0xc0005ec280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0522 13:05:47.539682 110 log.go:172] (0xc00091e420) Data frame received for 3\nI0522 13:05:47.539692 110 log.go:172] (0xc0003d6000) (3) Data frame handling\nI0522 13:05:47.541394 110 log.go:172] (0xc00091e420) Data frame received for 1\nI0522 13:05:47.541498 110 log.go:172] (0xc0003d6820) (1) Data frame handling\nI0522 13:05:47.541528 110 log.go:172] (0xc0003d6820) (1) Data frame sent\nI0522 13:05:47.541572 110 log.go:172] (0xc00091e420) (0xc0003d6820) Stream removed, broadcasting: 1\nI0522 13:05:47.541607 110 log.go:172] (0xc00091e420) Go away received\nI0522 13:05:47.541939 110 log.go:172] (0xc00091e420) (0xc0003d6820) Stream removed, broadcasting: 1\nI0522 13:05:47.542025 110 log.go:172] (0xc00091e420) (0xc0003d6000) Stream removed, broadcasting: 3\nI0522 13:05:47.542051 110 log.go:172] (0xc00091e420) (0xc0005ec280) Stream removed, broadcasting: 5\n" May 22 13:05:47.546: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 13:05:47.546: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 13:05:47.549: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 22 13:05:57.554: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - 
Ready=true May 22 13:05:57.554: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 22 13:05:57.554: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 22 13:05:57.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 13:05:57.749: INFO: stderr: "I0522 13:05:57.677697 130 log.go:172] (0xc00097e420) (0xc0006126e0) Create stream\nI0522 13:05:57.677743 130 log.go:172] (0xc00097e420) (0xc0006126e0) Stream added, broadcasting: 1\nI0522 13:05:57.683131 130 log.go:172] (0xc00097e420) Reply frame received for 1\nI0522 13:05:57.683172 130 log.go:172] (0xc00097e420) (0xc000612000) Create stream\nI0522 13:05:57.683198 130 log.go:172] (0xc00097e420) (0xc000612000) Stream added, broadcasting: 3\nI0522 13:05:57.684077 130 log.go:172] (0xc00097e420) Reply frame received for 3\nI0522 13:05:57.684111 130 log.go:172] (0xc00097e420) (0xc0006100a0) Create stream\nI0522 13:05:57.684122 130 log.go:172] (0xc00097e420) (0xc0006100a0) Stream added, broadcasting: 5\nI0522 13:05:57.684996 130 log.go:172] (0xc00097e420) Reply frame received for 5\nI0522 13:05:57.742012 130 log.go:172] (0xc00097e420) Data frame received for 5\nI0522 13:05:57.742067 130 log.go:172] (0xc0006100a0) (5) Data frame handling\nI0522 13:05:57.742095 130 log.go:172] (0xc0006100a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0522 13:05:57.742115 130 log.go:172] (0xc00097e420) Data frame received for 5\nI0522 13:05:57.742151 130 log.go:172] (0xc0006100a0) (5) Data frame handling\nI0522 13:05:57.742178 130 log.go:172] (0xc00097e420) Data frame received for 3\nI0522 13:05:57.742192 130 log.go:172] (0xc000612000) (3) Data frame handling\nI0522 13:05:57.742210 130 log.go:172] (0xc000612000) (3) Data frame sent\nI0522 13:05:57.742241 130 log.go:172] (0xc00097e420) Data frame received for 3\nI0522 13:05:57.742278 130 log.go:172] (0xc000612000) (3) Data frame handling\nI0522 13:05:57.743745 130 log.go:172] (0xc00097e420) Data frame received for 1\nI0522 13:05:57.743770 130 log.go:172] (0xc0006126e0) (1) Data frame handling\nI0522 13:05:57.743781 130 log.go:172] (0xc0006126e0) (1) Data frame sent\nI0522 13:05:57.743795 130 log.go:172] (0xc00097e420) (0xc0006126e0) Stream removed, broadcasting: 1\nI0522 13:05:57.743810 130 log.go:172] (0xc00097e420) Go away received\nI0522 13:05:57.744204 130 log.go:172] (0xc00097e420) (0xc0006126e0) Stream removed, broadcasting: 1\nI0522 13:05:57.744240 130 log.go:172] (0xc00097e420) (0xc000612000) Stream removed, broadcasting: 3\nI0522 13:05:57.744251 130 log.go:172] (0xc00097e420) (0xc0006100a0) Stream removed, broadcasting: 5\n" May 22 13:05:57.749: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 13:05:57.749: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 13:05:57.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 13:05:57.992: INFO: stderr: "I0522 13:05:57.895015 152 log.go:172] (0xc000a0a370) (0xc0008ec780) Create stream\nI0522 13:05:57.895061 152 log.go:172] (0xc000a0a370) (0xc0008ec780) Stream added, broadcasting: 1\nI0522 13:05:57.897684 152 
log.go:172] (0xc000a0a370) Reply frame received for 1\nI0522 13:05:57.897726 152 log.go:172] (0xc000a0a370) (0xc000b12000) Create stream\nI0522 13:05:57.897740 152 log.go:172] (0xc000a0a370) (0xc000b12000) Stream added, broadcasting: 3\nI0522 13:05:57.898632 152 log.go:172] (0xc000a0a370) Reply frame received for 3\nI0522 13:05:57.898661 152 log.go:172] (0xc000a0a370) (0xc000b120a0) Create stream\nI0522 13:05:57.898670 152 log.go:172] (0xc000a0a370) (0xc000b120a0) Stream added, broadcasting: 5\nI0522 13:05:57.899489 152 log.go:172] (0xc000a0a370) Reply frame received for 5\nI0522 13:05:57.956999 152 log.go:172] (0xc000a0a370) Data frame received for 5\nI0522 13:05:57.957023 152 log.go:172] (0xc000b120a0) (5) Data frame handling\nI0522 13:05:57.957034 152 log.go:172] (0xc000b120a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0522 13:05:57.985354 152 log.go:172] (0xc000a0a370) Data frame received for 3\nI0522 13:05:57.985392 152 log.go:172] (0xc000b12000) (3) Data frame handling\nI0522 13:05:57.985405 152 log.go:172] (0xc000b12000) (3) Data frame sent\nI0522 13:05:57.985416 152 log.go:172] (0xc000a0a370) Data frame received for 3\nI0522 13:05:57.985424 152 log.go:172] (0xc000b12000) (3) Data frame handling\nI0522 13:05:57.986524 152 log.go:172] (0xc000a0a370) Data frame received for 5\nI0522 13:05:57.986553 152 log.go:172] (0xc000b120a0) (5) Data frame handling\nI0522 13:05:57.987726 152 log.go:172] (0xc000a0a370) Data frame received for 1\nI0522 13:05:57.987768 152 log.go:172] (0xc0008ec780) (1) Data frame handling\nI0522 13:05:57.987792 152 log.go:172] (0xc0008ec780) (1) Data frame sent\nI0522 13:05:57.987815 152 log.go:172] (0xc000a0a370) (0xc0008ec780) Stream removed, broadcasting: 1\nI0522 13:05:57.987842 152 log.go:172] (0xc000a0a370) Go away received\nI0522 13:05:57.988351 152 log.go:172] (0xc000a0a370) (0xc0008ec780) Stream removed, broadcasting: 1\nI0522 13:05:57.988375 152 log.go:172] (0xc000a0a370) (0xc000b12000) Stream removed, broadcasting: 3\nI0522 13:05:57.988387 152 log.go:172] (0xc000a0a370) (0xc000b120a0) Stream removed, broadcasting: 5\n" May 22 13:05:57.992: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 13:05:57.992: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 13:05:57.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 13:05:58.245: INFO: stderr: "I0522 13:05:58.121089 172 log.go:172] (0xc00089a8f0) (0xc0008988c0) Create stream\nI0522 13:05:58.121374 172 log.go:172] (0xc00089a8f0) (0xc0008988c0) Stream added, broadcasting: 1\nI0522 13:05:58.126600 172 log.go:172] (0xc00089a8f0) Reply frame received for 1\nI0522 13:05:58.126632 172 log.go:172] (0xc00089a8f0) (0xc000888000) Create stream\nI0522 13:05:58.126650 172 log.go:172] (0xc00089a8f0) (0xc000888000) Stream added, broadcasting: 3\nI0522 13:05:58.127667 172 log.go:172] (0xc00089a8f0) Reply frame received for 3\nI0522 13:05:58.127719 172 log.go:172] (0xc00089a8f0) (0xc000898000) Create stream\nI0522 13:05:58.127735 172 log.go:172] (0xc00089a8f0) (0xc000898000) Stream added, broadcasting: 5\nI0522 13:05:58.128768 172 log.go:172] (0xc00089a8f0) Reply frame received for 5\nI0522 13:05:58.190531 172 log.go:172] (0xc00089a8f0) Data frame received for 5\nI0522 13:05:58.190557 172 log.go:172] (0xc000898000) (5) Data frame 
handling\nI0522 13:05:58.190585 172 log.go:172] (0xc000898000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0522 13:05:58.237038 172 log.go:172] (0xc00089a8f0) Data frame received for 3\nI0522 13:05:58.237075 172 log.go:172] (0xc000888000) (3) Data frame handling\nI0522 13:05:58.237095 172 log.go:172] (0xc000888000) (3) Data frame sent\nI0522 13:05:58.237366 172 log.go:172] (0xc00089a8f0) Data frame received for 3\nI0522 13:05:58.237390 172 log.go:172] (0xc000888000) (3) Data frame handling\nI0522 13:05:58.237695 172 log.go:172] (0xc00089a8f0) Data frame received for 5\nI0522 13:05:58.237728 172 log.go:172] (0xc000898000) (5) Data frame handling\nI0522 13:05:58.239503 172 log.go:172] (0xc00089a8f0) Data frame received for 1\nI0522 13:05:58.239521 172 log.go:172] (0xc0008988c0) (1) Data frame handling\nI0522 13:05:58.239531 172 log.go:172] (0xc0008988c0) (1) Data frame sent\nI0522 13:05:58.239544 172 log.go:172] (0xc00089a8f0) (0xc0008988c0) Stream removed, broadcasting: 1\nI0522 13:05:58.239778 172 log.go:172] (0xc00089a8f0) (0xc0008988c0) Stream removed, broadcasting: 1\nI0522 13:05:58.239789 172 log.go:172] (0xc00089a8f0) (0xc000888000) Stream removed, broadcasting: 3\nI0522 13:05:58.239962 172 log.go:172] (0xc00089a8f0) (0xc000898000) Stream removed, broadcasting: 5\nI0522 13:05:58.240023 172 log.go:172] (0xc00089a8f0) Go away received\n" May 22 13:05:58.245: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 13:05:58.245: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 13:05:58.245: INFO: Waiting for statefulset status.replicas updated to 0 May 22 13:05:58.248: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 22 13:06:08.257: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 22 13:06:08.257: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 22 13:06:08.257: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 22 13:06:08.267: INFO: POD NODE PHASE GRACE CONDITIONS May 22 13:06:08.267: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:14 +0000 UTC }] May 22 13:06:08.267: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:08.267: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:08.267: INFO: May 22 13:06:08.267: INFO: StatefulSet ss has not reached scale 0, at 3 May 22 13:06:09.304: INFO: POD NODE PHASE GRACE CONDITIONS May 22 13:06:09.304: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:14 +0000 UTC }] May 22 13:06:09.304: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:09.304: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:09.304: INFO: May 22 13:06:09.304: INFO: StatefulSet ss has not reached scale 0, at 3 May 22 13:06:10.310: INFO: POD NODE PHASE GRACE CONDITIONS May 22 13:06:10.310: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:14 +0000 UTC }] May 22 13:06:10.310: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:10.310: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:10.310: INFO: May 22 13:06:10.310: INFO: StatefulSet ss has not reached scale 0, at 3 May 22 
13:06:11.315: INFO: POD NODE PHASE GRACE CONDITIONS May 22 13:06:11.315: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:14 +0000 UTC }] May 22 13:06:11.315: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:11.315: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:11.315: INFO: May 22 13:06:11.315: INFO: StatefulSet ss has not reached scale 0, at 3 May 22 13:06:12.324: INFO: POD NODE PHASE GRACE CONDITIONS May 22 13:06:12.324: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:12.324: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:12.324: INFO: May 22 13:06:12.325: INFO: StatefulSet ss has not reached scale 0, at 2 May 22 13:06:13.329: INFO: POD NODE PHASE GRACE CONDITIONS May 22 13:06:13.329: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:13.329: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 
UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:13.329: INFO: May 22 13:06:13.329: INFO: StatefulSet ss has not reached scale 0, at 2 May 22 13:06:14.333: INFO: POD NODE PHASE GRACE CONDITIONS May 22 13:06:14.333: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:14.333: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:14.333: INFO: May 22 13:06:14.333: INFO: StatefulSet ss has not reached scale 0, at 2 May 22 13:06:15.338: INFO: POD NODE PHASE GRACE CONDITIONS May 22 13:06:15.338: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:15.338: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:15.338: INFO: May 22 13:06:15.338: INFO: StatefulSet ss has not reached scale 0, at 2 May 22 13:06:16.343: INFO: POD NODE PHASE GRACE CONDITIONS May 22 13:06:16.343: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:16.343: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 
00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:16.343: INFO: May 22 13:06:16.343: INFO: StatefulSet ss has not reached scale 0, at 2 May 22 13:06:17.348: INFO: POD NODE PHASE GRACE CONDITIONS May 22 13:06:17.348: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:17.348: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:05:36 +0000 UTC }] May 22 13:06:17.349: INFO: May 22 13:06:17.349: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6963 May 22 13:06:18.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:06:18.479: INFO: rc: 1 May 22 13:06:18.479: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0022f9e60 exit status 1 true [0xc002cb61f8 0xc002cb6248 0xc002cb6290] [0xc002cb61f8 0xc002cb6248 0xc002cb6290] [0xc002cb6228 0xc002cb6280] [0xba70e0 0xba70e0] 0xc002bcd980 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 May 22 13:06:28.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:06:28.577: INFO: rc: 1 May 22 13:06:28.577: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0029be300 exit status 1 true [0xc001ab3660 0xc001ab3710 0xc001ab3918] [0xc001ab3660 0xc001ab3710 0xc001ab3918] [0xc001ab36f0 0xc001ab3890] [0xba70e0 0xba70e0] 0xc002334cc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:06:38.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:06:38.669: INFO: rc: 1 May 22 13:06:38.669: INFO: Waiting 10s to retry failed RunHostCmd: error running
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0029be3f0 exit status 1 true [0xc001ab3930 0xc001ab3a50 0xc001ab3af8] [0xc001ab3930 0xc001ab3a50 0xc001ab3af8] [0xc001ab39f0 0xc001ab3ae8] [0xba70e0 0xba70e0] 0xc002334fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:06:48.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:06:48.784: INFO: rc: 1 May 22 13:06:48.784: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020f8090 exit status 1 true [0xc000546048 0xc000546250 0xc000546490] [0xc000546048 0xc000546250 0xc000546490] [0xc0005460a0 0xc000546430] [0xba70e0 0xba70e0] 0xc0025ce8a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:06:58.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:06:58.880: INFO: rc: 1 May 22 13:06:58.880: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020f8180 exit status 1 true [0xc000546498 0xc000546548 0xc0005465d0] [0xc000546498 0xc000546548 0xc0005465d0] [0xc000546510 0xc0005465b0] [0xba70e0 0xba70e0] 0xc0025ceba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:07:08.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:07:08.986: INFO: rc: 1 May 22 13:07:08.986: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0022f9f50 exit status 1 true [0xc002cb62a8 0xc002cb62c0 0xc002cb62d8] [0xc002cb62a8 0xc002cb62c0 0xc002cb62d8] [0xc002cb62b8 0xc002cb62d0] [0xba70e0 0xba70e0] 0xc002bcdce0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:07:18.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:07:19.087: INFO: rc: 1 May 22 13:07:19.087: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020f8270 exit status 
1 true [0xc0005465e0 0xc000546628 0xc000546658] [0xc0005465e0 0xc000546628 0xc000546658] [0xc000546618 0xc000546650] [0xba70e0 0xba70e0] 0xc0025ceea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:07:29.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:07:29.182: INFO: rc: 1 May 22 13:07:29.182: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020f8360 exit status 1 true [0xc000546688 0xc000546728 0xc000546788] [0xc000546688 0xc000546728 0xc000546788] [0xc000546718 0xc000546768] [0xba70e0 0xba70e0] 0xc0025cf200 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:07:39.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:07:39.269: INFO: rc: 1 May 22 13:07:39.269: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020f8450 exit status 1 true [0xc0005467b8 0xc0005467e0 0xc000546818] [0xc0005467b8 0xc0005467e0 0xc000546818] [0xc0005467d0 0xc000546808] [0xba70e0 0xba70e0] 0xc0025cfe00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:07:49.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:07:49.373: INFO: rc: 1 May 22 13:07:49.373: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020c0090 exit status 1 true [0xc000546060 0xc0005463d0 0xc000546498] [0xc000546060 0xc0005463d0 0xc000546498] [0xc000546250 0xc000546490] [0xba70e0 0xba70e0] 0xc002c762a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:07:59.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:07:59.467: INFO: rc: 1 May 22 13:07:59.467: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00076b950 exit status 1 true [0xc002cb6000 0xc002cb6030 0xc002cb6070] [0xc002cb6000 0xc002cb6030 0xc002cb6070] [0xc002cb6018 0xc002cb6050] [0xba70e0 0xba70e0] 0xc0025ce4e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 
13:08:09.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:08:09.597: INFO: rc: 1 May 22 13:08:09.598: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020c0180 exit status 1 true [0xc0005464d8 0xc000546588 0xc0005465e0] [0xc0005464d8 0xc000546588 0xc0005465e0] [0xc000546548 0xc0005465d0] [0xba70e0 0xba70e0] 0xc002c77560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:08:19.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:08:19.702: INFO: rc: 1 May 22 13:08:19.702: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00149e0c0 exit status 1 true [0xc001ab2060 0xc001ab20d0 0xc001ab22a0] [0xc001ab2060 0xc001ab20d0 0xc001ab22a0] [0xc001ab2090 0xc001ab2290] [0xba70e0 0xba70e0] 0xc002bcc3c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:08:29.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:08:29.802: INFO: rc: 1 May 22 13:08:29.802: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00149e1b0 exit status 1 true [0xc001ab22d8 0xc001ab2470 0xc001ab2550] [0xc001ab22d8 0xc001ab2470 0xc001ab2550] [0xc001ab23d8 0xc001ab2510] [0xba70e0 0xba70e0] 0xc002bcc6c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:08:39.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:08:39.902: INFO: rc: 1 May 22 13:08:39.902: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00076ba10 exit status 1 true [0xc002cb6078 0xc002cb60b8 0xc002cb60e8] [0xc002cb6078 0xc002cb60b8 0xc002cb60e8] [0xc002cb60a0 0xc002cb60c8] [0xba70e0 0xba70e0] 0xc0025ceae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:08:49.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:08:50.000: INFO: rc: 1 May 22 13:08:50.000: INFO: 
Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00076bad0 exit status 1 true [0xc002cb6110 0xc002cb6138 0xc002cb6150] [0xc002cb6110 0xc002cb6138 0xc002cb6150] [0xc002cb6130 0xc002cb6148] [0xba70e0 0xba70e0] 0xc0025cede0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:09:00.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:09:00.103: INFO: rc: 1 May 22 13:09:00.103: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020c0270 exit status 1 true [0xc0005465f8 0xc000546648 0xc000546688] [0xc0005465f8 0xc000546648 0xc000546688] [0xc000546628 0xc000546658] [0xba70e0 0xba70e0] 0xc002c77860 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:09:10.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:09:10.202: INFO: rc: 1 May 22 13:09:10.203: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020c0360 exit status 1 true [0xc0005466d8 0xc000546750 0xc0005467b8] [0xc0005466d8 0xc000546750 0xc0005467b8] [0xc000546728 0xc000546788] [0xba70e0 0xba70e0] 0xc002c77b60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:09:20.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:09:20.300: INFO: rc: 1 May 22 13:09:20.300: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020c0420 exit status 1 true [0xc0005467c8 0xc0005467f0 0xc000546820] [0xc0005467c8 0xc0005467f0 0xc000546820] [0xc0005467e0 0xc000546818] [0xba70e0 0xba70e0] 0xc0023340c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:09:30.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:09:30.399: INFO: rc: 1 May 22 13:09:30.400: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-1" not found [] 0xc00076bbc0 exit status 1 true [0xc002cb6158 0xc002cb6170 0xc002cb61c8] [0xc002cb6158 0xc002cb6170 0xc002cb61c8] [0xc002cb6168 0xc002cb61a0] [0xba70e0 0xba70e0] 0xc0025cf140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:09:40.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:09:40.499: INFO: rc: 1 May 22 13:09:40.499: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00076bc80 exit status 1 true [0xc002cb61d0 0xc002cb6210 0xc002cb6268] [0xc002cb61d0 0xc002cb6210 0xc002cb6268] [0xc002cb61f8 0xc002cb6248] [0xba70e0 0xba70e0] 0xc0025cfd40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:09:50.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:09:50.598: INFO: rc: 1 May 22 13:09:50.598: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00149e090 exit status 1 true [0xc001ab2080 0xc001ab21e8 0xc001ab22d8] [0xc001ab2080 0xc001ab21e8 0xc001ab22d8] [0xc001ab20d0 0xc001ab22a0] [0xba70e0 0xba70e0] 0xc002c76180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:10:00.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:10:00.719: INFO: rc: 1 May 22 13:10:00.719: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020c00c0 exit status 1 true [0xc000546048 0xc000546250 0xc000546490] [0xc000546048 0xc000546250 0xc000546490] [0xc0005460a0 0xc000546430] [0xba70e0 0xba70e0] 0xc002bcc2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:10:10.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:10:10.816: INFO: rc: 1 May 22 13:10:10.816: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020c01e0 exit status 1 true [0xc000546498 0xc000546548 0xc0005465d0] [0xc000546498 0xc000546548 0xc0005465d0] [0xc000546510 0xc0005465b0] [0xba70e0 0xba70e0] 0xc002bcc600 }: Command stdout: stderr: Error from server 
(NotFound): pods "ss-1" not found error: exit status 1 May 22 13:10:20.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:10:20.907: INFO: rc: 1 May 22 13:10:20.907: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020c0300 exit status 1 true [0xc0005465e0 0xc000546628 0xc000546658] [0xc0005465e0 0xc000546628 0xc000546658] [0xc000546618 0xc000546650] [0xba70e0 0xba70e0] 0xc002bcd260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:10:30.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:10:31.002: INFO: rc: 1 May 22 13:10:31.002: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020780c0 exit status 1 true [0xc002cb6000 0xc002cb6030 0xc002cb6070] [0xc002cb6000 0xc002cb6030 0xc002cb6070] [0xc002cb6018 0xc002cb6050] [0xba70e0 0xba70e0] 0xc002334240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:10:41.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:10:41.109: INFO: rc: 1 May 22 13:10:41.110: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020781b0 exit status 1 true [0xc002cb6078 0xc002cb60b8 0xc002cb60e8] [0xc002cb6078 0xc002cb60b8 0xc002cb60e8] [0xc002cb60a0 0xc002cb60c8] [0xba70e0 0xba70e0] 0xc002334540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:10:51.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:10:51.222: INFO: rc: 1 May 22 13:10:51.222: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020c03f0 exit status 1 true [0xc000546688 0xc000546728 0xc000546788] [0xc000546688 0xc000546728 0xc000546788] [0xc000546718 0xc000546768] [0xba70e0 0xba70e0] 0xc002bcdb60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:11:01.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' 
May 22 13:11:01.317: INFO: rc: 1 May 22 13:11:01.317: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020c04e0 exit status 1 true [0xc0005467b8 0xc0005467e0 0xc000546818] [0xc0005467b8 0xc0005467e0 0xc000546818] [0xc0005467d0 0xc000546808] [0xba70e0 0xba70e0] 0xc002bcde60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:11:11.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:11:11.416: INFO: rc: 1 May 22 13:11:11.416: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0020c05d0 exit status 1 true [0xc000546820 0xc000546890 0xc0005468c8] [0xc000546820 0xc000546890 0xc0005468c8] [0xc000546878 0xc0005468a8] [0xba70e0 0xba70e0] 0xc0025ce8a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 22 13:11:21.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6963 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:11:21.529: INFO: rc: 1 May 22 13:11:21.529: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: May 22 13:11:21.529: INFO: Scaling statefulset ss to 0 May 22 13:11:21.537: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 22 13:11:21.539: INFO: Deleting all statefulset in ns statefulset-6963 May 22 13:11:21.541: INFO: Scaling statefulset ss to 0 May 22 13:11:21.550: INFO: Waiting for statefulset status.replicas updated to 0 May 22 13:11:21.552: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:11:21.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6963" for this suite. 
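
The burst-scaling test above reduces to a short sequence of operations: break the readiness probe on ss-0 by moving the index.html that nginx serves, confirm the StatefulSet still scales up to 3 replicas, restore the file, break all three probes, and scale back down to 0. The long RunHostCmd retry tail is expected rather than a failure: the restore command races pod deletion during scale-down, so it fails first with container not found ("nginx") and then, once ss-1 is gone, with NotFound, until the test moves on and scales to 0 through the API. A minimal shell sketch of the same sequence, assuming a StatefulSet named ss with podManagementPolicy: Parallel (the burst mode this test exercises) in namespace statefulset-6963; the e2e framework drives these steps through client-go rather than kubectl scale:

  # Break the HTTP readiness probe on ss-0 by moving the file nginx serves.
  kubectl --namespace=statefulset-6963 exec ss-0 -- /bin/sh -c \
    'mv -v /usr/share/nginx/html/index.html /tmp/ || true'

  # Burst scaling: with podManagementPolicy: Parallel the controller creates
  # ss-1 and ss-2 immediately, even while ss-0 is unready.
  kubectl --namespace=statefulset-6963 scale statefulset ss --replicas=3

  # Restore the probe on every pod; '|| true' tolerates pods where the file
  # was never moved ("mv: can't rename ...: No such file or directory").
  for pod in ss-0 ss-1 ss-2; do
    kubectl --namespace=statefulset-6963 exec "$pod" -- /bin/sh -c \
      'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
  done

  # Scale-down likewise proceeds without waiting for pods to become ready.
  kubectl --namespace=statefulset-6963 scale statefulset ss --replicas=0
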
May 22 13:11:27.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:11:27.706: INFO: namespace statefulset-6963 deletion completed in 6.137180004s • [SLOW TEST:373.853 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:11:27.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 22 13:11:27.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9943' May 22 13:11:28.116: INFO: stderr: "" May 22 13:11:28.116: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 22 13:11:28.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9943' May 22 13:11:28.221: INFO: stderr: "" May 22 13:11:28.221: INFO: stdout: "update-demo-nautilus-bhrpp update-demo-nautilus-lch4r " May 22 13:11:28.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bhrpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9943' May 22 13:11:28.325: INFO: stderr: "" May 22 13:11:28.325: INFO: stdout: "" May 22 13:11:28.325: INFO: update-demo-nautilus-bhrpp is created but not running May 22 13:11:33.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9943' May 22 13:11:33.429: INFO: stderr: "" May 22 13:11:33.429: INFO: stdout: "update-demo-nautilus-bhrpp update-demo-nautilus-lch4r " May 22 13:11:33.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bhrpp -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9943' May 22 13:11:33.515: INFO: stderr: "" May 22 13:11:33.515: INFO: stdout: "true" May 22 13:11:33.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bhrpp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9943' May 22 13:11:33.607: INFO: stderr: "" May 22 13:11:33.607: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 13:11:33.607: INFO: validating pod update-demo-nautilus-bhrpp May 22 13:11:33.613: INFO: got data: { "image": "nautilus.jpg" } May 22 13:11:33.613: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 13:11:33.613: INFO: update-demo-nautilus-bhrpp is verified up and running May 22 13:11:33.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lch4r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9943' May 22 13:11:33.711: INFO: stderr: "" May 22 13:11:33.711: INFO: stdout: "true" May 22 13:11:33.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lch4r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9943' May 22 13:11:33.805: INFO: stderr: "" May 22 13:11:33.805: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 13:11:33.805: INFO: validating pod update-demo-nautilus-lch4r May 22 13:11:33.819: INFO: got data: { "image": "nautilus.jpg" } May 22 13:11:33.819: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 13:11:33.819: INFO: update-demo-nautilus-lch4r is verified up and running STEP: scaling down the replication controller May 22 13:11:33.822: INFO: scanned /root for discovery docs: May 22 13:11:33.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9943' May 22 13:11:34.976: INFO: stderr: "" May 22 13:11:34.976: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 22 13:11:34.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9943' May 22 13:11:35.077: INFO: stderr: "" May 22 13:11:35.077: INFO: stdout: "update-demo-nautilus-bhrpp update-demo-nautilus-lch4r " STEP: Replicas for name=update-demo: expected=1 actual=2 May 22 13:11:40.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9943' May 22 13:11:40.194: INFO: stderr: "" May 22 13:11:40.194: INFO: stdout: "update-demo-nautilus-bhrpp update-demo-nautilus-lch4r " STEP: Replicas for name=update-demo: expected=1 actual=2 May 22 13:11:45.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9943' May 22 13:11:45.287: INFO: stderr: "" May 22 13:11:45.287: INFO: stdout: "update-demo-nautilus-bhrpp " May 22 13:11:45.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bhrpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9943' May 22 13:11:45.380: INFO: stderr: "" May 22 13:11:45.380: INFO: stdout: "true" May 22 13:11:45.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bhrpp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9943' May 22 13:11:45.471: INFO: stderr: "" May 22 13:11:45.471: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 13:11:45.471: INFO: validating pod update-demo-nautilus-bhrpp May 22 13:11:45.474: INFO: got data: { "image": "nautilus.jpg" } May 22 13:11:45.474: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 13:11:45.474: INFO: update-demo-nautilus-bhrpp is verified up and running STEP: scaling up the replication controller May 22 13:11:45.475: INFO: scanned /root for discovery docs: May 22 13:11:45.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9943' May 22 13:11:46.606: INFO: stderr: "" May 22 13:11:46.606: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 22 13:11:46.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9943' May 22 13:11:46.701: INFO: stderr: "" May 22 13:11:46.701: INFO: stdout: "update-demo-nautilus-bhrpp update-demo-nautilus-bxf2t " May 22 13:11:46.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bhrpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9943' May 22 13:11:46.797: INFO: stderr: "" May 22 13:11:46.797: INFO: stdout: "true" May 22 13:11:46.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bhrpp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9943' May 22 13:11:46.907: INFO: stderr: "" May 22 13:11:46.907: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 13:11:46.907: INFO: validating pod update-demo-nautilus-bhrpp May 22 13:11:46.911: INFO: got data: { "image": "nautilus.jpg" } May 22 13:11:46.911: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 13:11:46.911: INFO: update-demo-nautilus-bhrpp is verified up and running May 22 13:11:46.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bxf2t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9943' May 22 13:11:47.062: INFO: stderr: "" May 22 13:11:47.062: INFO: stdout: "" May 22 13:11:47.062: INFO: update-demo-nautilus-bxf2t is created but not running May 22 13:11:52.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9943' May 22 13:11:52.163: INFO: stderr: "" May 22 13:11:52.163: INFO: stdout: "update-demo-nautilus-bhrpp update-demo-nautilus-bxf2t " May 22 13:11:52.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bhrpp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9943' May 22 13:11:52.252: INFO: stderr: "" May 22 13:11:52.252: INFO: stdout: "true" May 22 13:11:52.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bhrpp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9943' May 22 13:11:52.341: INFO: stderr: "" May 22 13:11:52.341: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 13:11:52.341: INFO: validating pod update-demo-nautilus-bhrpp May 22 13:11:52.344: INFO: got data: { "image": "nautilus.jpg" } May 22 13:11:52.344: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 13:11:52.344: INFO: update-demo-nautilus-bhrpp is verified up and running May 22 13:11:52.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bxf2t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9943' May 22 13:11:52.448: INFO: stderr: "" May 22 13:11:52.448: INFO: stdout: "true" May 22 13:11:52.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bxf2t -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9943' May 22 13:11:52.544: INFO: stderr: "" May 22 13:11:52.544: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 13:11:52.544: INFO: validating pod update-demo-nautilus-bxf2t May 22 13:11:52.548: INFO: got data: { "image": "nautilus.jpg" } May 22 13:11:52.548: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 13:11:52.549: INFO: update-demo-nautilus-bxf2t is verified up and running STEP: using delete to clean up resources May 22 13:11:52.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9943' May 22 13:11:52.654: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 13:11:52.654: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 22 13:11:52.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9943' May 22 13:11:52.767: INFO: stderr: "No resources found.\n" May 22 13:11:52.767: INFO: stdout: "" May 22 13:11:52.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9943 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 22 13:11:52.900: INFO: stderr: "" May 22 13:11:52.900: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:11:52.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9943" for this suite. 
May 22 13:12:14.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:12:14.981: INFO: namespace kubectl-9943 deletion completed in 22.078181067s • [SLOW TEST:47.275 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:12:14.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info May 22 13:12:15.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 22 13:12:15.122: INFO: stderr: "" May 22 13:12:15.122: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:12:15.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-403" for this suite. 
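The cluster-info check above only asserts that the Kubernetes master and KubeDNS endpoints appear in the output (the \x1b[... sequences are ANSI color codes). The same check by hand, plus the deeper dump the output suggests:

$ kubectl --kubeconfig=/root/.kube/config cluster-info
$ kubectl --kubeconfig=/root/.kube/config cluster-info dump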
May 22 13:12:21.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:12:21.211: INFO: namespace kubectl-403 deletion completed in 6.085879568s • [SLOW TEST:6.230 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:12:21.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 13:12:21.282: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0231dbfe-2435-40e3-a623-c4f56f315033" in namespace "downward-api-1432" to be "success or failure" May 22 13:12:21.326: INFO: Pod "downwardapi-volume-0231dbfe-2435-40e3-a623-c4f56f315033": Phase="Pending", Reason="", readiness=false. Elapsed: 44.051894ms May 22 13:12:23.330: INFO: Pod "downwardapi-volume-0231dbfe-2435-40e3-a623-c4f56f315033": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04805406s May 22 13:12:25.335: INFO: Pod "downwardapi-volume-0231dbfe-2435-40e3-a623-c4f56f315033": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052255968s STEP: Saw pod success May 22 13:12:25.335: INFO: Pod "downwardapi-volume-0231dbfe-2435-40e3-a623-c4f56f315033" satisfied condition "success or failure" May 22 13:12:25.337: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0231dbfe-2435-40e3-a623-c4f56f315033 container client-container: STEP: delete the pod May 22 13:12:25.407: INFO: Waiting for pod downwardapi-volume-0231dbfe-2435-40e3-a623-c4f56f315033 to disappear May 22 13:12:25.412: INFO: Pod downwardapi-volume-0231dbfe-2435-40e3-a623-c4f56f315033 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:12:25.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1432" for this suite. 
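The pod under test mounts a downwardAPI volume that surfaces the container's own memory request as a file, then reads it back. The generated manifest is not shown in the log; a minimal sketch of an equivalent pod, with hypothetical names:

$ cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"   # surfaced as bytes in the file unless a divisor is set
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF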
May 22 13:12:31.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:12:31.505: INFO: namespace downward-api-1432 deletion completed in 6.090084288s • [SLOW TEST:10.294 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:12:31.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 22 13:12:31.717: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7204,SelfLink:/api/v1/namespaces/watch-7204/configmaps/e2e-watch-test-resource-version,UID:8dc5a1ea-f269-4c50-8072-f82c8b0607d4,ResourceVersion:12291830,Generation:0,CreationTimestamp:2020-05-22 13:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 22 13:12:31.718: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7204,SelfLink:/api/v1/namespaces/watch-7204/configmaps/e2e-watch-test-resource-version,UID:8dc5a1ea-f269-4c50-8072-f82c8b0607d4,ResourceVersion:12291831,Generation:0,CreationTimestamp:2020-05-22 13:12:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:12:31.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7204" for this suite. 
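kubectl does not readily expose watching from a specific resourceVersion, but the API feature the test exercises is reachable through kubectl proxy. A minimal sketch with a hypothetical resource version (the events above arrived at versions 12291830 and 12291831):

$ kubectl --kubeconfig=/root/.kube/config proxy --port=8001 &
$ # the watch delivers only changes newer than the given resourceVersion
$ curl 'http://127.0.0.1:8001/api/v1/namespaces/watch-7204/configmaps?watch=true&resourceVersion=12291829'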
May 22 13:12:37.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:12:37.796: INFO: namespace watch-7204 deletion completed in 6.074113532s • [SLOW TEST:6.290 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:12:37.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 13:12:37.857: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3375c73c-2772-46c9-9d75-a7978ab505bb" in namespace "projected-3904" to be "success or failure" May 22 13:12:37.919: INFO: Pod "downwardapi-volume-3375c73c-2772-46c9-9d75-a7978ab505bb": Phase="Pending", Reason="", readiness=false. Elapsed: 62.141752ms May 22 13:12:39.924: INFO: Pod "downwardapi-volume-3375c73c-2772-46c9-9d75-a7978ab505bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066465712s May 22 13:12:41.927: INFO: Pod "downwardapi-volume-3375c73c-2772-46c9-9d75-a7978ab505bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069839484s STEP: Saw pod success May 22 13:12:41.927: INFO: Pod "downwardapi-volume-3375c73c-2772-46c9-9d75-a7978ab505bb" satisfied condition "success or failure" May 22 13:12:41.929: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-3375c73c-2772-46c9-9d75-a7978ab505bb container client-container: STEP: delete the pod May 22 13:12:42.076: INFO: Waiting for pod downwardapi-volume-3375c73c-2772-46c9-9d75-a7978ab505bb to disappear May 22 13:12:42.111: INFO: Pod downwardapi-volume-3375c73c-2772-46c9-9d75-a7978ab505bb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:12:42.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3904" for this suite. 
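With no CPU limit declared on the container, the downward API file falls back to the node's allocatable CPU, which is what this test asserts. A minimal sketch of an equivalent pod using a projected downwardAPI source, with hypothetical names:

$ cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:   # no limits declared, so node allocatable CPU is reported
              containerName: client-container
              resource: limits.cpu
EOF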
May 22 13:12:48.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:12:48.199: INFO: namespace projected-3904 deletion completed in 6.084501895s • [SLOW TEST:10.402 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:12:48.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 22 13:12:48.348: INFO: Waiting up to 5m0s for pod "downward-api-d7c5d569-593d-4bd9-8672-3891b22e4fd2" in namespace "downward-api-4141" to be "success or failure" May 22 13:12:48.380: INFO: Pod "downward-api-d7c5d569-593d-4bd9-8672-3891b22e4fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 32.325446ms May 22 13:12:50.464: INFO: Pod "downward-api-d7c5d569-593d-4bd9-8672-3891b22e4fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116292516s May 22 13:12:52.469: INFO: Pod "downward-api-d7c5d569-593d-4bd9-8672-3891b22e4fd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120959695s STEP: Saw pod success May 22 13:12:52.469: INFO: Pod "downward-api-d7c5d569-593d-4bd9-8672-3891b22e4fd2" satisfied condition "success or failure" May 22 13:12:52.473: INFO: Trying to get logs from node iruya-worker2 pod downward-api-d7c5d569-593d-4bd9-8672-3891b22e4fd2 container dapi-container: STEP: delete the pod May 22 13:12:52.515: INFO: Waiting for pod downward-api-d7c5d569-593d-4bd9-8672-3891b22e4fd2 to disappear May 22 13:12:52.536: INFO: Pod downward-api-d7c5d569-593d-4bd9-8672-3891b22e4fd2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:12:52.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4141" for this suite. 
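The same node-allocatable fallback applies to downward API environment variables: resourceFieldRef entries for limits resolve to the node's allocatable values when the container declares no limits of its own. A minimal sketch, with hypothetical names:

$ cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep _LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu      # falls back to node allocatable
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory   # falls back to node allocatable
EOF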
May 22 13:12:58.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:12:58.682: INFO: namespace downward-api-4141 deletion completed in 6.109056086s • [SLOW TEST:10.483 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:12:58.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 13:12:58.795: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 22 13:13:03.799: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 22 13:13:03.799: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 22 13:13:03.834: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-989,SelfLink:/apis/apps/v1/namespaces/deployment-989/deployments/test-cleanup-deployment,UID:cf6e1cd5-463a-46c2-8eb2-0211949e86c6,ResourceVersion:12291963,Generation:1,CreationTimestamp:2020-05-22 13:13:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} 
false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 22 13:13:03.853: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-989,SelfLink:/apis/apps/v1/namespaces/deployment-989/replicasets/test-cleanup-deployment-55bbcbc84c,UID:19abe75d-23a2-4bf6-98f9-d6bc379f4de4,ResourceVersion:12291965,Generation:1,CreationTimestamp:2020-05-22 13:13:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment cf6e1cd5-463a-46c2-8eb2-0211949e86c6 0xc0028fa327 0xc0028fa328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 22 13:13:03.853: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 22 13:13:03.854: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-989,SelfLink:/apis/apps/v1/namespaces/deployment-989/replicasets/test-cleanup-controller,UID:3d9c1b41-955d-47af-8056-97c94cc30ef4,ResourceVersion:12291964,Generation:1,CreationTimestamp:2020-05-22 13:12:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment cf6e1cd5-463a-46c2-8eb2-0211949e86c6 0xc0028fa23f 0xc0028fa250}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 22 13:13:03.929: INFO: Pod "test-cleanup-controller-knxfg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-knxfg,GenerateName:test-cleanup-controller-,Namespace:deployment-989,SelfLink:/api/v1/namespaces/deployment-989/pods/test-cleanup-controller-knxfg,UID:617d491a-3550-4497-b7e8-99b93f58c0fb,ResourceVersion:12291957,Generation:0,CreationTimestamp:2020-05-22 13:12:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 3d9c1b41-955d-47af-8056-97c94cc30ef4 0xc0028fabe7 0xc0028fabe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sdslp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sdslp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-sdslp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028fac60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028fac80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:12:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:13:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:13:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:12:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.92,StartTime:2020-05-22 13:12:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 13:13:01 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5578eb5bff0bc3b142eb35583aafdf9ef86673b3b887488f7354c6353047debd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:13:03.929: INFO: Pod "test-cleanup-deployment-55bbcbc84c-vcww8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-vcww8,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-989,SelfLink:/api/v1/namespaces/deployment-989/pods/test-cleanup-deployment-55bbcbc84c-vcww8,UID:b6e0d69a-703d-4df9-b837-ba818924902b,ResourceVersion:12291969,Generation:0,CreationTimestamp:2020-05-22 13:13:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 19abe75d-23a2-4bf6-98f9-d6bc379f4de4 0xc0028fad67 0xc0028fad68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sdslp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sdslp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-sdslp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028fade0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028fae00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:13:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:13:03.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-989" for this suite. 
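The deployment dump above shows RevisionHistoryLimit:*0, which is why the old test-cleanup-controller ReplicaSet is garbage-collected once the new revision comes up; the test only has to wait for the history to drain. The same thing can be checked by hand:

$ kubectl --kubeconfig=/root/.kube/config get deployment test-cleanup-deployment --namespace=deployment-989 -o jsonpath='{.spec.revisionHistoryLimit}'
$ # after the rollout, only the current ReplicaSet should remain under the selector
$ kubectl --kubeconfig=/root/.kube/config get rs -l name=cleanup-pod --namespace=deployment-989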
May 22 13:13:09.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:13:10.052: INFO: namespace deployment-989 deletion completed in 6.09428675s • [SLOW TEST:11.370 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:13:10.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 22 13:13:10.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8859' May 22 13:13:10.251: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 22 13:13:10.251: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller May 22 13:13:10.301: INFO: scanned /root for discovery docs: May 22 13:13:10.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-8859' May 22 13:13:26.193: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 22 13:13:26.193: INFO: stdout: "Created e2e-test-nginx-rc-750c3cf894e0eac006838684acccaa2c\nScaling up e2e-test-nginx-rc-750c3cf894e0eac006838684acccaa2c from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-750c3cf894e0eac006838684acccaa2c up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-750c3cf894e0eac006838684acccaa2c to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 22 13:13:26.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8859' May 22 13:13:26.292: INFO: stderr: "" May 22 13:13:26.292: INFO: stdout: "e2e-test-nginx-rc-750c3cf894e0eac006838684acccaa2c-p8z8z " May 22 13:13:26.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-750c3cf894e0eac006838684acccaa2c-p8z8z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8859' May 22 13:13:26.392: INFO: stderr: "" May 22 13:13:26.392: INFO: stdout: "true" May 22 13:13:26.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-750c3cf894e0eac006838684acccaa2c-p8z8z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8859' May 22 13:13:26.486: INFO: stderr: "" May 22 13:13:26.486: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 22 13:13:26.486: INFO: e2e-test-nginx-rc-750c3cf894e0eac006838684acccaa2c-p8z8z is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 May 22 13:13:26.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8859' May 22 13:13:26.591: INFO: stderr: "" May 22 13:13:26.591: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:13:26.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8859" for this suite. 
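Both warnings above flag deprecated paths: run --generator=run/v1 and rolling-update were later removed in favour of Deployment-based rollouts. The two commands the test actually ran, followed by a rough Deployment-based equivalent with hypothetical names:

$ kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8859
$ kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-8859
$ # rough modern equivalent (hypothetical names):
$ kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
$ kubectl set image deployment/e2e-test-nginx nginx=docker.io/library/nginx:1.14-alpine
$ kubectl rollout status deployment/e2e-test-nginx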
May 22 13:13:32.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:13:32.728: INFO: namespace kubectl-8859 deletion completed in 6.11477151s • [SLOW TEST:22.676 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:13:32.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1955.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1955.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1955.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1955.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1955.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1955.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1955.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1955.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1955.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1955.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1955.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 114.34.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.34.114_udp@PTR;check="$$(dig +tcp +noall +answer +search 114.34.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.34.114_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1955.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1955.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1955.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1955.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1955.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1955.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1955.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1955.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1955.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1955.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1955.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 114.34.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.34.114_udp@PTR;check="$$(dig +tcp +noall +answer +search 114.34.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.34.114_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 22 13:13:40.939: INFO: Unable to read wheezy_udp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:40.942: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:40.945: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:40.947: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:40.967: INFO: Unable to read jessie_udp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:40.970: INFO: Unable to read jessie_tcp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:40.973: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:40.975: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:40.992: INFO: Lookups using dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2 failed for: [wheezy_udp@dns-test-service.dns-1955.svc.cluster.local wheezy_tcp@dns-test-service.dns-1955.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local jessie_udp@dns-test-service.dns-1955.svc.cluster.local jessie_tcp@dns-test-service.dns-1955.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local] May 22 13:13:45.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:46.002: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods 
dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:46.005: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:46.008: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:46.028: INFO: Unable to read jessie_udp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:46.031: INFO: Unable to read jessie_tcp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:46.034: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:46.037: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:46.054: INFO: Lookups using dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2 failed for: [wheezy_udp@dns-test-service.dns-1955.svc.cluster.local wheezy_tcp@dns-test-service.dns-1955.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local jessie_udp@dns-test-service.dns-1955.svc.cluster.local jessie_tcp@dns-test-service.dns-1955.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local] May 22 13:13:50.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:51.002: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:51.006: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:51.009: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:51.032: INFO: Unable to read jessie_udp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the 
server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:51.034: INFO: Unable to read jessie_tcp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:51.038: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:51.040: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:51.060: INFO: Lookups using dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2 failed for: [wheezy_udp@dns-test-service.dns-1955.svc.cluster.local wheezy_tcp@dns-test-service.dns-1955.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local jessie_udp@dns-test-service.dns-1955.svc.cluster.local jessie_tcp@dns-test-service.dns-1955.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local] May 22 13:13:55.997: INFO: Unable to read wheezy_udp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:56.001: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:56.005: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:56.008: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:56.025: INFO: Unable to read jessie_udp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:56.028: INFO: Unable to read jessie_tcp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:56.032: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:56.035: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod 
dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:13:56.052: INFO: Lookups using dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2 failed for: [wheezy_udp@dns-test-service.dns-1955.svc.cluster.local wheezy_tcp@dns-test-service.dns-1955.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local jessie_udp@dns-test-service.dns-1955.svc.cluster.local jessie_tcp@dns-test-service.dns-1955.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local] May 22 13:14:00.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:01.003: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:01.006: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:01.009: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:01.031: INFO: Unable to read jessie_udp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:01.034: INFO: Unable to read jessie_tcp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:01.037: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:01.040: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:01.063: INFO: Lookups using dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2 failed for: [wheezy_udp@dns-test-service.dns-1955.svc.cluster.local wheezy_tcp@dns-test-service.dns-1955.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local jessie_udp@dns-test-service.dns-1955.svc.cluster.local jessie_tcp@dns-test-service.dns-1955.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local] May 22 
13:14:05.996: INFO: Unable to read wheezy_udp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:06.000: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:06.003: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:06.006: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:06.038: INFO: Unable to read jessie_udp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:06.041: INFO: Unable to read jessie_tcp@dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:06.045: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:06.048: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local from pod dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2: the server could not find the requested resource (get pods dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2) May 22 13:14:06.059: INFO: Lookups using dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2 failed for: [wheezy_udp@dns-test-service.dns-1955.svc.cluster.local wheezy_tcp@dns-test-service.dns-1955.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local jessie_udp@dns-test-service.dns-1955.svc.cluster.local jessie_tcp@dns-test-service.dns-1955.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1955.svc.cluster.local] May 22 13:14:11.051: INFO: DNS probes using dns-1955/dns-test-46342851-ec02-4ab6-9cca-0b2778a7a3b2 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:14:11.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1955" for this suite. 
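For reference, every record in the probe script logged above is checked the same way: one dig query whose non-empty answer writes an OK marker under /results, which the test then reads back from the probe pod. The `$$` in the logged commands is Kubernetes' escape for a literal `$` in container command strings, so the shell inside the pod actually runs `$(dig ...)`. A single UDP service lookup from the loop, distilled (all names taken from this run):

    # one iteration of the probe, for the jessie image, UDP, service A record:
    check="$(dig +notcp +noall +answer +search dns-test-service.dns-1955.svc.cluster.local A)" \
      && test -n "$check" \
      && echo OK > /results/jessie_udp@dns-test-service.dns-1955.svc.cluster.local

The transient "Unable to read ..." lines above are the framework polling for those marker files before the probe pod has produced them; the run succeeds once every expected marker appears.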
May 22 13:14:17.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:14:17.916: INFO: namespace dns-1955 deletion completed in 6.114932842s • [SLOW TEST:45.187 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:14:17.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 13:14:17.992: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05e5321b-7131-4e50-89fd-93d6d1423824" in namespace "projected-6505" to be "success or failure" May 22 13:14:18.001: INFO: Pod "downwardapi-volume-05e5321b-7131-4e50-89fd-93d6d1423824": Phase="Pending", Reason="", readiness=false. Elapsed: 9.903546ms May 22 13:14:20.006: INFO: Pod "downwardapi-volume-05e5321b-7131-4e50-89fd-93d6d1423824": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014770055s May 22 13:14:22.012: INFO: Pod "downwardapi-volume-05e5321b-7131-4e50-89fd-93d6d1423824": Phase="Running", Reason="", readiness=true. Elapsed: 4.020698129s May 22 13:14:24.018: INFO: Pod "downwardapi-volume-05e5321b-7131-4e50-89fd-93d6d1423824": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026090371s STEP: Saw pod success May 22 13:14:24.018: INFO: Pod "downwardapi-volume-05e5321b-7131-4e50-89fd-93d6d1423824" satisfied condition "success or failure" May 22 13:14:24.021: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-05e5321b-7131-4e50-89fd-93d6d1423824 container client-container: STEP: delete the pod May 22 13:14:24.055: INFO: Waiting for pod downwardapi-volume-05e5321b-7131-4e50-89fd-93d6d1423824 to disappear May 22 13:14:24.067: INFO: Pod downwardapi-volume-05e5321b-7131-4e50-89fd-93d6d1423824 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:14:24.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6505" for this suite. 
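The pod this test creates surfaces its own cpu limit as a file in a projected downwardAPI volume via a resourceFieldRef; the manifest is built in Go and never appears in the log. A minimal sketch of that shape, assuming hypothetical names (downward-cpu-demo, /etc/podinfo):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-cpu-demo            # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: "500m"                  # the value surfaced into the volume
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu   # reported in units of the optional divisor (default: whole cores)
    EOF

Reading the limit back from the container log is essentially the "success or failure" check the framework performs above.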
May 22 13:14:30.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:14:30.154: INFO: namespace projected-6505 deletion completed in 6.083076449s • [SLOW TEST:12.237 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:14:30.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-39d40109-e84c-4f37-b377-9b88682512fd STEP: Creating a pod to test consume configMaps May 22 13:14:30.263: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0b5bf023-ce8d-4c66-946e-a1530e092ba2" in namespace "projected-4040" to be "success or failure" May 22 13:14:30.277: INFO: Pod "pod-projected-configmaps-0b5bf023-ce8d-4c66-946e-a1530e092ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.153112ms May 22 13:14:32.282: INFO: Pod "pod-projected-configmaps-0b5bf023-ce8d-4c66-946e-a1530e092ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018371466s May 22 13:14:34.286: INFO: Pod "pod-projected-configmaps-0b5bf023-ce8d-4c66-946e-a1530e092ba2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022216454s STEP: Saw pod success May 22 13:14:34.286: INFO: Pod "pod-projected-configmaps-0b5bf023-ce8d-4c66-946e-a1530e092ba2" satisfied condition "success or failure" May 22 13:14:34.288: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-0b5bf023-ce8d-4c66-946e-a1530e092ba2 container projected-configmap-volume-test: STEP: delete the pod May 22 13:14:34.426: INFO: Waiting for pod pod-projected-configmaps-0b5bf023-ce8d-4c66-946e-a1530e092ba2 to disappear May 22 13:14:34.442: INFO: Pod pod-projected-configmaps-0b5bf023-ce8d-4c66-946e-a1530e092ba2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:14:34.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4040" for this suite. 
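The "mappings" in this test's name refer to the items list of a projected configMap source: a key is remapped to an arbitrary file path instead of being mounted under its own name. A sketch with hypothetical names (demo-config, data-1, mapped-path):

    kubectl create configmap demo-config --from-literal=data-1=value-1   # hypothetical key/value
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-configmap-demo     # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/projected/mapped-path"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/projected
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: demo-config
              items:
              - key: data-1
                path: mapped-path        # the mapping: key data-1 appears as file mapped-path
    EOF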
May 22 13:14:40.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:14:40.537: INFO: namespace projected-4040 deletion completed in 6.09159789s • [SLOW TEST:10.383 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:14:40.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 13:14:40.599: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95f5aff8-08b6-4d91-aa03-44949b964e5c" in namespace "downward-api-3329" to be "success or failure" May 22 13:14:40.609: INFO: Pod "downwardapi-volume-95f5aff8-08b6-4d91-aa03-44949b964e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.753924ms May 22 13:14:42.613: INFO: Pod "downwardapi-volume-95f5aff8-08b6-4d91-aa03-44949b964e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013803237s May 22 13:14:44.617: INFO: Pod "downwardapi-volume-95f5aff8-08b6-4d91-aa03-44949b964e5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01805151s STEP: Saw pod success May 22 13:14:44.617: INFO: Pod "downwardapi-volume-95f5aff8-08b6-4d91-aa03-44949b964e5c" satisfied condition "success or failure" May 22 13:14:44.621: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-95f5aff8-08b6-4d91-aa03-44949b964e5c container client-container: STEP: delete the pod May 22 13:14:44.647: INFO: Waiting for pod downwardapi-volume-95f5aff8-08b6-4d91-aa03-44949b964e5c to disappear May 22 13:14:44.717: INFO: Pod downwardapi-volume-95f5aff8-08b6-4d91-aa03-44949b964e5c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:14:44.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3329" for this suite. 
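DefaultMode here is the downwardAPI volume's defaultMode field, which sets the permission bits on every file the volume generates. A sketch with hypothetical names; the 0400 value below is illustrative of the kind of mode the test sets and then verifies from inside the container:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-defaultmode-demo    # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "ls -l /etc/podinfo"]   # prints the applied file mode
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          defaultMode: 0400              # applied to every generated file
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
    EOF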
May 22 13:14:50.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:14:50.857: INFO: namespace downward-api-3329 deletion completed in 6.135721654s • [SLOW TEST:10.320 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:14:50.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy May 22 13:14:50.911: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix984411099/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:14:50.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2946" for this suite. 
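The proxy test above starts kubectl proxy bound to a unix socket instead of a TCP port and then fetches /api/ through that socket. The same check, reproduced by hand (socket path arbitrary):

    kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &   # serve the API over a unix socket
    sleep 1
    curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/   # the /api/ retrieval the test performs
    kill %1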
May 22 13:14:56.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:14:57.066: INFO: namespace kubectl-2946 deletion completed in 6.084562335s • [SLOW TEST:6.209 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:14:57.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 13:14:57.157: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 22 13:14:59.200: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:15:00.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9541" for this suite. 
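The quota test drives a ReplicationController into a failure condition and then clears it by scaling within quota. A hand-run equivalent of the sequence, with names mirroring the log (the image choice is an assumption, and ReplicaFailure is the condition type surfaced for quota-denied pod creation):

    kubectl create quota condition-test --hard=pods=2        # allow only two pods in the namespace
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: condition-test
    spec:
      replicas: 3                                            # asks for more than the quota allows
      selector:
        app: condition-test
      template:
        metadata:
          labels:
            app: condition-test
        spec:
          containers:
          - name: pause
            image: k8s.gcr.io/pause:3.1                      # assumed image
    EOF
    # the failure surfaces on the RC status:
    kubectl get rc condition-test -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'
    # scaling down to a count the quota can satisfy clears the condition:
    kubectl scale rc condition-test --replicas=2

Scaling to a satisfiable replica count is what removes the condition, which is exactly the before/after pair the test checks.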
May 22 13:15:06.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:15:06.553: INFO: namespace replication-controller-9541 deletion completed in 6.15192426s • [SLOW TEST:9.487 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:15:06.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:15:06.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3784" for this suite. 
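QOS class is computed by the API server from the pod's resource requests and limits and surfaced at status.qosClass, which is the field this test verifies after submitting the pod. A hand-run equivalent (names hypothetical):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-demo                     # hypothetical name
    spec:
      containers:
      - name: main
        image: k8s.gcr.io/pause:3.1
        resources:                       # requests == limits on every container => Guaranteed
          requests:
            cpu: 100m
            memory: 64Mi
          limits:
            cpu: 100m
            memory: 64Mi
    EOF
    kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints Guaranteed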
May 22 13:15:29.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:15:29.175: INFO: namespace pods-3784 deletion completed in 22.177374892s • [SLOW TEST:22.621 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:15:29.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3622 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-3622 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3622 May 22 13:15:29.262: INFO: Found 0 stateful pods, waiting for 1 May 22 13:15:39.268: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 22 13:15:39.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3622 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 13:15:42.362: INFO: stderr: "I0522 13:15:42.239264 1512 log.go:172] (0xc000aea4d0) (0xc000610aa0) Create stream\nI0522 13:15:42.239308 1512 log.go:172] (0xc000aea4d0) (0xc000610aa0) Stream added, broadcasting: 1\nI0522 13:15:42.241706 1512 log.go:172] (0xc000aea4d0) Reply frame received for 1\nI0522 13:15:42.241749 1512 log.go:172] (0xc000aea4d0) (0xc0007ce000) Create stream\nI0522 13:15:42.241762 1512 log.go:172] (0xc000aea4d0) (0xc0007ce000) Stream added, broadcasting: 3\nI0522 13:15:42.242725 1512 log.go:172] (0xc000aea4d0) Reply frame received for 3\nI0522 13:15:42.242787 1512 log.go:172] (0xc000aea4d0) (0xc0007de000) Create stream\nI0522 13:15:42.242812 1512 log.go:172] (0xc000aea4d0) (0xc0007de000) Stream added, broadcasting: 5\nI0522 13:15:42.243705 1512 log.go:172] (0xc000aea4d0) Reply frame received for 5\nI0522 13:15:42.321534 1512 log.go:172] (0xc000aea4d0) Data frame received for 5\nI0522 13:15:42.321564 1512 log.go:172] (0xc0007de000) (5) Data frame 
handling\nI0522 13:15:42.321583 1512 log.go:172] (0xc0007de000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0522 13:15:42.352430 1512 log.go:172] (0xc000aea4d0) Data frame received for 3\nI0522 13:15:42.352456 1512 log.go:172] (0xc0007ce000) (3) Data frame handling\nI0522 13:15:42.352476 1512 log.go:172] (0xc0007ce000) (3) Data frame sent\nI0522 13:15:42.353223 1512 log.go:172] (0xc000aea4d0) Data frame received for 3\nI0522 13:15:42.353240 1512 log.go:172] (0xc0007ce000) (3) Data frame handling\nI0522 13:15:42.353347 1512 log.go:172] (0xc000aea4d0) Data frame received for 5\nI0522 13:15:42.353366 1512 log.go:172] (0xc0007de000) (5) Data frame handling\nI0522 13:15:42.355141 1512 log.go:172] (0xc000aea4d0) Data frame received for 1\nI0522 13:15:42.355158 1512 log.go:172] (0xc000610aa0) (1) Data frame handling\nI0522 13:15:42.355181 1512 log.go:172] (0xc000610aa0) (1) Data frame sent\nI0522 13:15:42.355205 1512 log.go:172] (0xc000aea4d0) (0xc000610aa0) Stream removed, broadcasting: 1\nI0522 13:15:42.355231 1512 log.go:172] (0xc000aea4d0) Go away received\nI0522 13:15:42.355651 1512 log.go:172] (0xc000aea4d0) (0xc000610aa0) Stream removed, broadcasting: 1\nI0522 13:15:42.355674 1512 log.go:172] (0xc000aea4d0) (0xc0007ce000) Stream removed, broadcasting: 3\nI0522 13:15:42.355689 1512 log.go:172] (0xc000aea4d0) (0xc0007de000) Stream removed, broadcasting: 5\n" May 22 13:15:42.362: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 13:15:42.362: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 13:15:42.366: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 22 13:15:52.371: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 22 13:15:52.371: INFO: Waiting for statefulset status.replicas updated to 0 May 22 13:15:52.382: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999539s May 22 13:15:53.387: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996378193s May 22 13:15:54.395: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991826769s May 22 13:15:55.400: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.984005603s May 22 13:15:56.405: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.979018768s May 22 13:15:57.410: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.97367541s May 22 13:15:58.415: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.969227709s May 22 13:15:59.420: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.963953357s May 22 13:16:00.424: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.959125079s May 22 13:16:01.429: INFO: Verifying statefulset ss doesn't scale past 1 for another 954.819058ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3622 May 22 13:16:02.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3622 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:16:02.665: INFO: stderr: "I0522 13:16:02.570539 1546 log.go:172] (0xc00092c420) (0xc0008ee640) Create stream\nI0522 13:16:02.570596 1546 log.go:172] (0xc00092c420) (0xc0008ee640) Stream added, broadcasting: 1\nI0522 13:16:02.572921 1546 log.go:172] (0xc00092c420) Reply frame 
received for 1\nI0522 13:16:02.572957 1546 log.go:172] (0xc00092c420) (0xc00066a280) Create stream\nI0522 13:16:02.572967 1546 log.go:172] (0xc00092c420) (0xc00066a280) Stream added, broadcasting: 3\nI0522 13:16:02.574145 1546 log.go:172] (0xc00092c420) Reply frame received for 3\nI0522 13:16:02.574174 1546 log.go:172] (0xc00092c420) (0xc0008ee780) Create stream\nI0522 13:16:02.574185 1546 log.go:172] (0xc00092c420) (0xc0008ee780) Stream added, broadcasting: 5\nI0522 13:16:02.575146 1546 log.go:172] (0xc00092c420) Reply frame received for 5\nI0522 13:16:02.654814 1546 log.go:172] (0xc00092c420) Data frame received for 5\nI0522 13:16:02.654844 1546 log.go:172] (0xc0008ee780) (5) Data frame handling\nI0522 13:16:02.654854 1546 log.go:172] (0xc0008ee780) (5) Data frame sent\nI0522 13:16:02.654860 1546 log.go:172] (0xc00092c420) Data frame received for 5\nI0522 13:16:02.654865 1546 log.go:172] (0xc0008ee780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0522 13:16:02.654883 1546 log.go:172] (0xc00092c420) Data frame received for 3\nI0522 13:16:02.654889 1546 log.go:172] (0xc00066a280) (3) Data frame handling\nI0522 13:16:02.654895 1546 log.go:172] (0xc00066a280) (3) Data frame sent\nI0522 13:16:02.654901 1546 log.go:172] (0xc00092c420) Data frame received for 3\nI0522 13:16:02.654908 1546 log.go:172] (0xc00066a280) (3) Data frame handling\nI0522 13:16:02.656771 1546 log.go:172] (0xc00092c420) Data frame received for 1\nI0522 13:16:02.656793 1546 log.go:172] (0xc0008ee640) (1) Data frame handling\nI0522 13:16:02.656807 1546 log.go:172] (0xc0008ee640) (1) Data frame sent\nI0522 13:16:02.656823 1546 log.go:172] (0xc00092c420) (0xc0008ee640) Stream removed, broadcasting: 1\nI0522 13:16:02.656838 1546 log.go:172] (0xc00092c420) Go away received\nI0522 13:16:02.657379 1546 log.go:172] (0xc00092c420) (0xc0008ee640) Stream removed, broadcasting: 1\nI0522 13:16:02.657398 1546 log.go:172] (0xc00092c420) (0xc00066a280) Stream removed, broadcasting: 3\nI0522 13:16:02.657405 1546 log.go:172] (0xc00092c420) (0xc0008ee780) Stream removed, broadcasting: 5\n" May 22 13:16:02.665: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 13:16:02.665: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 13:16:02.669: INFO: Found 1 stateful pods, waiting for 3 May 22 13:16:12.674: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 22 13:16:12.674: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 22 13:16:12.674: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 22 13:16:12.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3622 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 13:16:12.913: INFO: stderr: "I0522 13:16:12.808095 1568 log.go:172] (0xc000970420) (0xc0002ce6e0) Create stream\nI0522 13:16:12.808148 1568 log.go:172] (0xc000970420) (0xc0002ce6e0) Stream added, broadcasting: 1\nI0522 13:16:12.810463 1568 log.go:172] (0xc000970420) Reply frame received for 1\nI0522 13:16:12.810509 1568 log.go:172] (0xc000970420) (0xc0002ce780) Create stream\nI0522 13:16:12.810526 1568 log.go:172] (0xc000970420) (0xc0002ce780) Stream added, broadcasting: 3\nI0522 
13:16:12.811449 1568 log.go:172] (0xc000970420) Reply frame received for 3\nI0522 13:16:12.811477 1568 log.go:172] (0xc000970420) (0xc0006583c0) Create stream\nI0522 13:16:12.811486 1568 log.go:172] (0xc000970420) (0xc0006583c0) Stream added, broadcasting: 5\nI0522 13:16:12.812390 1568 log.go:172] (0xc000970420) Reply frame received for 5\nI0522 13:16:12.906532 1568 log.go:172] (0xc000970420) Data frame received for 5\nI0522 13:16:12.906592 1568 log.go:172] (0xc0006583c0) (5) Data frame handling\nI0522 13:16:12.906616 1568 log.go:172] (0xc0006583c0) (5) Data frame sent\nI0522 13:16:12.906634 1568 log.go:172] (0xc000970420) Data frame received for 5\nI0522 13:16:12.906650 1568 log.go:172] (0xc0006583c0) (5) Data frame handling\nI0522 13:16:12.906670 1568 log.go:172] (0xc000970420) Data frame received for 3\nI0522 13:16:12.906685 1568 log.go:172] (0xc0002ce780) (3) Data frame handling\nI0522 13:16:12.906702 1568 log.go:172] (0xc0002ce780) (3) Data frame sent\nI0522 13:16:12.906721 1568 log.go:172] (0xc000970420) Data frame received for 3\nI0522 13:16:12.906735 1568 log.go:172] (0xc0002ce780) (3) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0522 13:16:12.908679 1568 log.go:172] (0xc000970420) Data frame received for 1\nI0522 13:16:12.908698 1568 log.go:172] (0xc0002ce6e0) (1) Data frame handling\nI0522 13:16:12.908707 1568 log.go:172] (0xc0002ce6e0) (1) Data frame sent\nI0522 13:16:12.908716 1568 log.go:172] (0xc000970420) (0xc0002ce6e0) Stream removed, broadcasting: 1\nI0522 13:16:12.908725 1568 log.go:172] (0xc000970420) Go away received\nI0522 13:16:12.909828 1568 log.go:172] (0xc000970420) (0xc0002ce6e0) Stream removed, broadcasting: 1\nI0522 13:16:12.909875 1568 log.go:172] (0xc000970420) (0xc0002ce780) Stream removed, broadcasting: 3\nI0522 13:16:12.909897 1568 log.go:172] (0xc000970420) (0xc0006583c0) Stream removed, broadcasting: 5\n" May 22 13:16:12.913: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 13:16:12.913: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 13:16:12.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3622 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 13:16:13.168: INFO: stderr: "I0522 13:16:13.061989 1588 log.go:172] (0xc00097c370) (0xc000916640) Create stream\nI0522 13:16:13.062041 1588 log.go:172] (0xc00097c370) (0xc000916640) Stream added, broadcasting: 1\nI0522 13:16:13.064637 1588 log.go:172] (0xc00097c370) Reply frame received for 1\nI0522 13:16:13.064704 1588 log.go:172] (0xc00097c370) (0xc000938000) Create stream\nI0522 13:16:13.064736 1588 log.go:172] (0xc00097c370) (0xc000938000) Stream added, broadcasting: 3\nI0522 13:16:13.066134 1588 log.go:172] (0xc00097c370) Reply frame received for 3\nI0522 13:16:13.066174 1588 log.go:172] (0xc00097c370) (0xc0005a4280) Create stream\nI0522 13:16:13.066184 1588 log.go:172] (0xc00097c370) (0xc0005a4280) Stream added, broadcasting: 5\nI0522 13:16:13.067114 1588 log.go:172] (0xc00097c370) Reply frame received for 5\nI0522 13:16:13.130455 1588 log.go:172] (0xc00097c370) Data frame received for 5\nI0522 13:16:13.130503 1588 log.go:172] (0xc0005a4280) (5) Data frame handling\nI0522 13:16:13.130540 1588 log.go:172] (0xc0005a4280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0522 13:16:13.160622 1588 log.go:172] (0xc00097c370) Data frame received for 5\nI0522 
13:16:13.160671 1588 log.go:172] (0xc0005a4280) (5) Data frame handling\nI0522 13:16:13.160709 1588 log.go:172] (0xc00097c370) Data frame received for 3\nI0522 13:16:13.160731 1588 log.go:172] (0xc000938000) (3) Data frame handling\nI0522 13:16:13.160753 1588 log.go:172] (0xc000938000) (3) Data frame sent\nI0522 13:16:13.160767 1588 log.go:172] (0xc00097c370) Data frame received for 3\nI0522 13:16:13.160781 1588 log.go:172] (0xc000938000) (3) Data frame handling\nI0522 13:16:13.163552 1588 log.go:172] (0xc00097c370) Data frame received for 1\nI0522 13:16:13.163581 1588 log.go:172] (0xc000916640) (1) Data frame handling\nI0522 13:16:13.163595 1588 log.go:172] (0xc000916640) (1) Data frame sent\nI0522 13:16:13.163612 1588 log.go:172] (0xc00097c370) (0xc000916640) Stream removed, broadcasting: 1\nI0522 13:16:13.163652 1588 log.go:172] (0xc00097c370) Go away received\nI0522 13:16:13.163889 1588 log.go:172] (0xc00097c370) (0xc000916640) Stream removed, broadcasting: 1\nI0522 13:16:13.163899 1588 log.go:172] (0xc00097c370) (0xc000938000) Stream removed, broadcasting: 3\nI0522 13:16:13.163905 1588 log.go:172] (0xc00097c370) (0xc0005a4280) Stream removed, broadcasting: 5\n" May 22 13:16:13.168: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 13:16:13.168: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 13:16:13.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3622 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 13:16:13.402: INFO: stderr: "I0522 13:16:13.282915 1608 log.go:172] (0xc0009a4160) (0xc000997d60) Create stream\nI0522 13:16:13.282965 1608 log.go:172] (0xc0009a4160) (0xc000997d60) Stream added, broadcasting: 1\nI0522 13:16:13.286180 1608 log.go:172] (0xc0009a4160) Reply frame received for 1\nI0522 13:16:13.286208 1608 log.go:172] (0xc0009a4160) (0xc0008088c0) Create stream\nI0522 13:16:13.286216 1608 log.go:172] (0xc0009a4160) (0xc0008088c0) Stream added, broadcasting: 3\nI0522 13:16:13.286967 1608 log.go:172] (0xc0009a4160) Reply frame received for 3\nI0522 13:16:13.286999 1608 log.go:172] (0xc0009a4160) (0xc000996000) Create stream\nI0522 13:16:13.287009 1608 log.go:172] (0xc0009a4160) (0xc000996000) Stream added, broadcasting: 5\nI0522 13:16:13.287758 1608 log.go:172] (0xc0009a4160) Reply frame received for 5\nI0522 13:16:13.358794 1608 log.go:172] (0xc0009a4160) Data frame received for 5\nI0522 13:16:13.358815 1608 log.go:172] (0xc000996000) (5) Data frame handling\nI0522 13:16:13.358833 1608 log.go:172] (0xc000996000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0522 13:16:13.394257 1608 log.go:172] (0xc0009a4160) Data frame received for 3\nI0522 13:16:13.394288 1608 log.go:172] (0xc0008088c0) (3) Data frame handling\nI0522 13:16:13.394316 1608 log.go:172] (0xc0008088c0) (3) Data frame sent\nI0522 13:16:13.394381 1608 log.go:172] (0xc0009a4160) Data frame received for 3\nI0522 13:16:13.394396 1608 log.go:172] (0xc0008088c0) (3) Data frame handling\nI0522 13:16:13.395289 1608 log.go:172] (0xc0009a4160) Data frame received for 5\nI0522 13:16:13.395308 1608 log.go:172] (0xc000996000) (5) Data frame handling\nI0522 13:16:13.396244 1608 log.go:172] (0xc0009a4160) Data frame received for 1\nI0522 13:16:13.396268 1608 log.go:172] (0xc000997d60) (1) Data frame handling\nI0522 13:16:13.396284 1608 log.go:172] (0xc000997d60) (1) Data frame sent\nI0522 
13:16:13.396295 1608 log.go:172] (0xc0009a4160) (0xc000997d60) Stream removed, broadcasting: 1\nI0522 13:16:13.396307 1608 log.go:172] (0xc0009a4160) Go away received\nI0522 13:16:13.396675 1608 log.go:172] (0xc0009a4160) (0xc000997d60) Stream removed, broadcasting: 1\nI0522 13:16:13.396701 1608 log.go:172] (0xc0009a4160) (0xc0008088c0) Stream removed, broadcasting: 3\nI0522 13:16:13.396716 1608 log.go:172] (0xc0009a4160) (0xc000996000) Stream removed, broadcasting: 5\n" May 22 13:16:13.402: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 13:16:13.402: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 13:16:13.402: INFO: Waiting for statefulset status.replicas updated to 0 May 22 13:16:13.405: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 22 13:16:23.411: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 22 13:16:23.411: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 22 13:16:23.411: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 22 13:16:23.420: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999401s May 22 13:16:24.426: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996620184s May 22 13:16:25.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991051937s May 22 13:16:26.435: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987345527s May 22 13:16:27.439: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.981749551s May 22 13:16:28.445: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.977410873s May 22 13:16:29.452: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.971817911s May 22 13:16:30.458: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.964838224s May 22 13:16:31.463: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.959190858s May 22 13:16:32.468: INFO: Verifying statefulset ss doesn't scale past 3 for another 954.086831ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-3622 May 22 13:16:33.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3622 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:16:33.722: INFO: stderr: "I0522 13:16:33.620891 1624 log.go:172] (0xc00094e420) (0xc0002cc820) Create stream\nI0522 13:16:33.620944 1624 log.go:172] (0xc00094e420) (0xc0002cc820) Stream added, broadcasting: 1\nI0522 13:16:33.624297 1624 log.go:172] (0xc00094e420) Reply frame received for 1\nI0522 13:16:33.624341 1624 log.go:172] (0xc00094e420) (0xc000398280) Create stream\nI0522 13:16:33.624354 1624 log.go:172] (0xc00094e420) (0xc000398280) Stream added, broadcasting: 3\nI0522 13:16:33.625611 1624 log.go:172] (0xc00094e420) Reply frame received for 3\nI0522 13:16:33.625669 1624 log.go:172] (0xc00094e420) (0xc0002cc000) Create stream\nI0522 13:16:33.625687 1624 log.go:172] (0xc00094e420) (0xc0002cc000) Stream added, broadcasting: 5\nI0522 13:16:33.626611 1624 log.go:172] (0xc00094e420) Reply frame received for 5\nI0522 13:16:33.717903 1624 log.go:172] (0xc00094e420) Data frame received for 5\nI0522 13:16:33.717939 1624 log.go:172] (0xc0002cc000) (5) Data frame handling\nI0522 13:16:33.717953
1624 log.go:172] (0xc0002cc000) (5) Data frame sent\nI0522 13:16:33.717961 1624 log.go:172] (0xc00094e420) Data frame received for 5\nI0522 13:16:33.717968 1624 log.go:172] (0xc0002cc000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0522 13:16:33.717979 1624 log.go:172] (0xc00094e420) Data frame received for 3\nI0522 13:16:33.718007 1624 log.go:172] (0xc000398280) (3) Data frame handling\nI0522 13:16:33.718016 1624 log.go:172] (0xc000398280) (3) Data frame sent\nI0522 13:16:33.718022 1624 log.go:172] (0xc00094e420) Data frame received for 3\nI0522 13:16:33.718027 1624 log.go:172] (0xc000398280) (3) Data frame handling\nI0522 13:16:33.719289 1624 log.go:172] (0xc00094e420) Data frame received for 1\nI0522 13:16:33.719304 1624 log.go:172] (0xc0002cc820) (1) Data frame handling\nI0522 13:16:33.719310 1624 log.go:172] (0xc0002cc820) (1) Data frame sent\nI0522 13:16:33.719318 1624 log.go:172] (0xc00094e420) (0xc0002cc820) Stream removed, broadcasting: 1\nI0522 13:16:33.719350 1624 log.go:172] (0xc00094e420) Go away received\nI0522 13:16:33.719573 1624 log.go:172] (0xc00094e420) (0xc0002cc820) Stream removed, broadcasting: 1\nI0522 13:16:33.719591 1624 log.go:172] (0xc00094e420) (0xc000398280) Stream removed, broadcasting: 3\nI0522 13:16:33.719597 1624 log.go:172] (0xc00094e420) (0xc0002cc000) Stream removed, broadcasting: 5\n" May 22 13:16:33.722: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 13:16:33.722: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 13:16:33.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3622 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:16:33.922: INFO: stderr: "I0522 13:16:33.852998 1645 log.go:172] (0xc0006b6a50) (0xc0005c06e0) Create stream\nI0522 13:16:33.853050 1645 log.go:172] (0xc0006b6a50) (0xc0005c06e0) Stream added, broadcasting: 1\nI0522 13:16:33.856842 1645 log.go:172] (0xc0006b6a50) Reply frame received for 1\nI0522 13:16:33.856888 1645 log.go:172] (0xc0006b6a50) (0xc0005c0000) Create stream\nI0522 13:16:33.856906 1645 log.go:172] (0xc0006b6a50) (0xc0005c0000) Stream added, broadcasting: 3\nI0522 13:16:33.857990 1645 log.go:172] (0xc0006b6a50) Reply frame received for 3\nI0522 13:16:33.858032 1645 log.go:172] (0xc0006b6a50) (0xc000454000) Create stream\nI0522 13:16:33.858050 1645 log.go:172] (0xc0006b6a50) (0xc000454000) Stream added, broadcasting: 5\nI0522 13:16:33.858995 1645 log.go:172] (0xc0006b6a50) Reply frame received for 5\nI0522 13:16:33.917845 1645 log.go:172] (0xc0006b6a50) Data frame received for 5\nI0522 13:16:33.917893 1645 log.go:172] (0xc000454000) (5) Data frame handling\nI0522 13:16:33.917907 1645 log.go:172] (0xc000454000) (5) Data frame sent\nI0522 13:16:33.917917 1645 log.go:172] (0xc0006b6a50) Data frame received for 5\nI0522 13:16:33.917924 1645 log.go:172] (0xc000454000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0522 13:16:33.917946 1645 log.go:172] (0xc0006b6a50) Data frame received for 3\nI0522 13:16:33.917959 1645 log.go:172] (0xc0005c0000) (3) Data frame handling\nI0522 13:16:33.917975 1645 log.go:172] (0xc0005c0000) (3) Data frame sent\nI0522 13:16:33.917985 1645 log.go:172] (0xc0006b6a50) Data frame received for 3\nI0522 13:16:33.917992 1645 log.go:172] (0xc0005c0000) (3) Data frame handling\nI0522 13:16:33.918695 1645 log.go:172] (0xc0006b6a50) Data 
frame received for 1\nI0522 13:16:33.918718 1645 log.go:172] (0xc0005c06e0) (1) Data frame handling\nI0522 13:16:33.918740 1645 log.go:172] (0xc0005c06e0) (1) Data frame sent\nI0522 13:16:33.918760 1645 log.go:172] (0xc0006b6a50) (0xc0005c06e0) Stream removed, broadcasting: 1\nI0522 13:16:33.918920 1645 log.go:172] (0xc0006b6a50) Go away received\nI0522 13:16:33.919138 1645 log.go:172] (0xc0006b6a50) (0xc0005c06e0) Stream removed, broadcasting: 1\nI0522 13:16:33.919160 1645 log.go:172] (0xc0006b6a50) (0xc0005c0000) Stream removed, broadcasting: 3\nI0522 13:16:33.919181 1645 log.go:172] (0xc0006b6a50) (0xc000454000) Stream removed, broadcasting: 5\n" May 22 13:16:33.923: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 13:16:33.923: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 13:16:33.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3622 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:16:34.132: INFO: stderr: "I0522 13:16:34.056512 1666 log.go:172] (0xc0009d6420) (0xc0005f2820) Create stream\nI0522 13:16:34.056569 1666 log.go:172] (0xc0009d6420) (0xc0005f2820) Stream added, broadcasting: 1\nI0522 13:16:34.058994 1666 log.go:172] (0xc0009d6420) Reply frame received for 1\nI0522 13:16:34.059042 1666 log.go:172] (0xc0009d6420) (0xc000776000) Create stream\nI0522 13:16:34.059056 1666 log.go:172] (0xc0009d6420) (0xc000776000) Stream added, broadcasting: 3\nI0522 13:16:34.060037 1666 log.go:172] (0xc0009d6420) Reply frame received for 3\nI0522 13:16:34.060063 1666 log.go:172] (0xc0009d6420) (0xc0009a4000) Create stream\nI0522 13:16:34.060069 1666 log.go:172] (0xc0009d6420) (0xc0009a4000) Stream added, broadcasting: 5\nI0522 13:16:34.061008 1666 log.go:172] (0xc0009d6420) Reply frame received for 5\nI0522 13:16:34.126094 1666 log.go:172] (0xc0009d6420) Data frame received for 5\nI0522 13:16:34.126135 1666 log.go:172] (0xc0009a4000) (5) Data frame handling\nI0522 13:16:34.126149 1666 log.go:172] (0xc0009a4000) (5) Data frame sent\nI0522 13:16:34.126159 1666 log.go:172] (0xc0009d6420) Data frame received for 5\nI0522 13:16:34.126169 1666 log.go:172] (0xc0009a4000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0522 13:16:34.126195 1666 log.go:172] (0xc0009d6420) Data frame received for 3\nI0522 13:16:34.126207 1666 log.go:172] (0xc000776000) (3) Data frame handling\nI0522 13:16:34.126223 1666 log.go:172] (0xc000776000) (3) Data frame sent\nI0522 13:16:34.126235 1666 log.go:172] (0xc0009d6420) Data frame received for 3\nI0522 13:16:34.126245 1666 log.go:172] (0xc000776000) (3) Data frame handling\nI0522 13:16:34.127436 1666 log.go:172] (0xc0009d6420) Data frame received for 1\nI0522 13:16:34.127456 1666 log.go:172] (0xc0005f2820) (1) Data frame handling\nI0522 13:16:34.127469 1666 log.go:172] (0xc0005f2820) (1) Data frame sent\nI0522 13:16:34.127502 1666 log.go:172] (0xc0009d6420) (0xc0005f2820) Stream removed, broadcasting: 1\nI0522 13:16:34.127523 1666 log.go:172] (0xc0009d6420) Go away received\nI0522 13:16:34.127847 1666 log.go:172] (0xc0009d6420) (0xc0005f2820) Stream removed, broadcasting: 1\nI0522 13:16:34.127863 1666 log.go:172] (0xc0009d6420) (0xc000776000) Stream removed, broadcasting: 3\nI0522 13:16:34.127870 1666 log.go:172] (0xc0009d6420) (0xc0009a4000) Stream removed, broadcasting: 5\n" May 22 13:16:34.132: INFO: stdout: "'/tmp/index.html' -> 
'/usr/share/nginx/html/index.html'\n" May 22 13:16:34.132: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 13:16:34.132: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 22 13:16:54.151: INFO: Deleting all statefulset in ns statefulset-3622 May 22 13:16:54.154: INFO: Scaling statefulset ss to 0 May 22 13:16:54.162: INFO: Waiting for statefulset status.replicas updated to 0 May 22 13:16:54.164: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:16:54.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3622" for this suite. May 22 13:17:00.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:17:00.314: INFO: namespace statefulset-3622 deletion completed in 6.127817246s • [SLOW TEST:91.139 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:17:00.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 22 13:17:08.518: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 13:17:08.545: INFO: Pod pod-with-prestop-http-hook still exists May 22 13:17:10.545: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 13:17:10.549: INFO: Pod pod-with-prestop-http-hook still exists May 22 13:17:12.545: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 13:17:12.550: INFO: Pod pod-with-prestop-http-hook still exists May 22 13:17:14.545: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 13:17:14.550: INFO: Pod pod-with-prestop-http-hook still exists May 22 13:17:16.545: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 13:17:16.550: INFO: Pod pod-with-prestop-http-hook still exists May 22 13:17:18.545: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 13:17:18.551: INFO: Pod pod-with-prestop-http-hook still exists May 22 13:17:20.545: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 13:17:20.548: INFO: Pod pod-with-prestop-http-hook still exists May 22 13:17:22.545: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 22 13:17:22.549: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:17:22.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5243" for this suite. 
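The prestop case above follows a fixed shape: a handler pod is started first (the HTTPGet hook request target from the BeforeEach step), then a pod is created whose container registers an HTTP GET preStop hook aimed at that handler; deleting the pod fires the hook, and the long run of "still exists" polls is just the graceful-termination window draining. A rough sketch of such a pod in the suite's own Go types; the image, path, host and port here are illustrative assumptions, not values read from the log:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// A pod whose container calls back to a separate handler pod over HTTP
// when it is asked to terminate.
var podWithPreStopHook = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "pod-with-prestop-http-hook",
			Image: "busybox", // illustrative
			Lifecycle: &corev1.Lifecycle{
				// corev1.Handler is the v1.15-era name; later API versions call it LifecycleHandler.
				PreStop: &corev1.Handler{
					HTTPGet: &corev1.HTTPGetAction{
						Path: "/echo?msg=prestop",  // assumed handler endpoint
						Host: "10.244.1.2",         // handler pod IP, illustrative
						Port: intstr.FromInt(8080), // assumed handler port
					},
				},
			},
		}},
	},
}

The poststart variant later in this run is symmetric: the same handler pod, with the hook registered under PostStart instead of PreStop and fired when the container starts.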
May 22 13:17:44.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:17:44.661: INFO: namespace container-lifecycle-hook-5243 deletion completed in 22.10045366s • [SLOW TEST:44.347 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:17:44.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 22 13:17:44.740: INFO: Waiting up to 5m0s for pod "downward-api-b7611e87-fe19-4a4e-b0fb-a5f7a356883f" in namespace "downward-api-6608" to be "success or failure" May 22 13:17:44.744: INFO: Pod "downward-api-b7611e87-fe19-4a4e-b0fb-a5f7a356883f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094269ms May 22 13:17:46.748: INFO: Pod "downward-api-b7611e87-fe19-4a4e-b0fb-a5f7a356883f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007936902s May 22 13:17:48.753: INFO: Pod "downward-api-b7611e87-fe19-4a4e-b0fb-a5f7a356883f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012881667s STEP: Saw pod success May 22 13:17:48.753: INFO: Pod "downward-api-b7611e87-fe19-4a4e-b0fb-a5f7a356883f" satisfied condition "success or failure" May 22 13:17:48.756: INFO: Trying to get logs from node iruya-worker pod downward-api-b7611e87-fe19-4a4e-b0fb-a5f7a356883f container dapi-container: STEP: delete the pod May 22 13:17:48.917: INFO: Waiting for pod downward-api-b7611e87-fe19-4a4e-b0fb-a5f7a356883f to disappear May 22 13:17:48.940: INFO: Pod downward-api-b7611e87-fe19-4a4e-b0fb-a5f7a356883f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:17:48.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6608" for this suite. 
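The env-var flavour of the downward API needs no volume at all; the kubelet resolves field references into the container environment at start time. A minimal sketch of the container this test could be exercising (the variable names are assumptions; the field paths are the standard ones for pod name, namespace and IP):

package sketch

import corev1 "k8s.io/api/core/v1"

// Environment variables resolved by the kubelet from the pod's own
// metadata and status via the downward API.
var dapiContainer = corev1.Container{
	Name:  "dapi-container",
	Image: "busybox", // illustrative
	Env: []corev1.EnvVar{
		{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
		{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
		{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
	},
}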
May 22 13:17:54.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:17:55.028: INFO: namespace downward-api-6608 deletion completed in 6.084037658s • [SLOW TEST:10.367 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:17:55.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 22 13:18:03.161: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 22 13:18:03.168: INFO: Pod pod-with-poststart-http-hook still exists May 22 13:18:05.168: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 22 13:18:05.172: INFO: Pod pod-with-poststart-http-hook still exists May 22 13:18:07.168: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 22 13:18:07.173: INFO: Pod pod-with-poststart-http-hook still exists May 22 13:18:09.168: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 22 13:18:09.172: INFO: Pod pod-with-poststart-http-hook still exists May 22 13:18:11.168: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 22 13:18:11.173: INFO: Pod pod-with-poststart-http-hook still exists May 22 13:18:13.168: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 22 13:18:13.172: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:18:13.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3552" for this suite. 
May 22 13:18:35.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:18:35.269: INFO: namespace container-lifecycle-hook-3552 deletion completed in 22.092349021s • [SLOW TEST:40.240 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:18:35.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-e8dc61b5-947f-46ea-838f-bc3c51a819d2 STEP: Creating a pod to test consume configMaps May 22 13:18:35.436: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-55f7a59a-ffd3-420a-9dfb-3e7340659597" in namespace "projected-3029" to be "success or failure" May 22 13:18:35.455: INFO: Pod "pod-projected-configmaps-55f7a59a-ffd3-420a-9dfb-3e7340659597": Phase="Pending", Reason="", readiness=false. Elapsed: 19.316029ms May 22 13:18:37.458: INFO: Pod "pod-projected-configmaps-55f7a59a-ffd3-420a-9dfb-3e7340659597": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022424986s May 22 13:18:39.463: INFO: Pod "pod-projected-configmaps-55f7a59a-ffd3-420a-9dfb-3e7340659597": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027273173s STEP: Saw pod success May 22 13:18:39.463: INFO: Pod "pod-projected-configmaps-55f7a59a-ffd3-420a-9dfb-3e7340659597" satisfied condition "success or failure" May 22 13:18:39.467: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-55f7a59a-ffd3-420a-9dfb-3e7340659597 container projected-configmap-volume-test: STEP: delete the pod May 22 13:18:39.520: INFO: Waiting for pod pod-projected-configmaps-55f7a59a-ffd3-420a-9dfb-3e7340659597 to disappear May 22 13:18:39.528: INFO: Pod pod-projected-configmaps-55f7a59a-ffd3-420a-9dfb-3e7340659597 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:18:39.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3029" for this suite. 
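In a projected volume the defaultMode lives on the ProjectedVolumeSource and applies to every projected file that does not set its own mode. A sketch under the assumption that the test asks for a read-only 0400 mode (the actual mode is not visible in the log); the configMap name is the one the test created:

package sketch

import corev1 "k8s.io/api/core/v1"

var defaultMode int32 = 0400 // assumed; the log does not print the mode

var projectedVolume = corev1.Volume{
	Name: "projected-configmap-volume",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			DefaultMode: &defaultMode, // applies to all files without an explicit mode
			Sources: []corev1.VolumeProjection{{
				ConfigMap: &corev1.ConfigMapProjection{
					LocalObjectReference: corev1.LocalObjectReference{
						Name: "projected-configmap-test-volume-e8dc61b5-947f-46ea-838f-bc3c51a819d2",
					},
				},
			}},
		},
	},
}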
May 22 13:18:45.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:18:45.623: INFO: namespace projected-3029 deletion completed in 6.092166729s • [SLOW TEST:10.354 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:18:45.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-33580400-cafe-4d68-9526-7f27d64cb4c1 STEP: Creating a pod to test consume configMaps May 22 13:18:45.721: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-103ea56c-5b71-4152-b579-51d5f8bd9efb" in namespace "projected-2906" to be "success or failure" May 22 13:18:45.740: INFO: Pod "pod-projected-configmaps-103ea56c-5b71-4152-b579-51d5f8bd9efb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.06281ms May 22 13:18:47.745: INFO: Pod "pod-projected-configmaps-103ea56c-5b71-4152-b579-51d5f8bd9efb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023838831s May 22 13:18:49.750: INFO: Pod "pod-projected-configmaps-103ea56c-5b71-4152-b579-51d5f8bd9efb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028772908s STEP: Saw pod success May 22 13:18:49.750: INFO: Pod "pod-projected-configmaps-103ea56c-5b71-4152-b579-51d5f8bd9efb" satisfied condition "success or failure" May 22 13:18:49.753: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-103ea56c-5b71-4152-b579-51d5f8bd9efb container projected-configmap-volume-test: STEP: delete the pod May 22 13:18:49.792: INFO: Waiting for pod pod-projected-configmaps-103ea56c-5b71-4152-b579-51d5f8bd9efb to disappear May 22 13:18:49.798: INFO: Pod pod-projected-configmaps-103ea56c-5b71-4152-b579-51d5f8bd9efb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:18:49.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2906" for this suite. 
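Nearly every volume test in this run repeats the same rhythm: Waiting up to 5m0s for the pod to be "success or failure", a few Pending polls two seconds apart, then Succeeded. The framework implements this as a pod-phase poll; a condensed equivalent using the v1.15-era client-go signatures shown in this log (newer releases add a context argument to Get):

package sketch

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccess polls the pod phase until it reaches Succeeded,
// failing fast if the pod ends up Failed instead.
func waitForPodSuccess(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		}
		return false, nil // still Pending or Running; keep polling
	})
}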
May 22 13:18:55.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:18:55.881: INFO: namespace projected-2906 deletion completed in 6.079057522s • [SLOW TEST:10.257 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:18:55.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ef1de82e-058f-4f7d-8119-0f4d13bc265c STEP: Creating a pod to test consume secrets May 22 13:18:55.974: INFO: Waiting up to 5m0s for pod "pod-secrets-894b8166-8141-40cd-8588-a6401afd6f04" in namespace "secrets-4473" to be "success or failure" May 22 13:18:55.978: INFO: Pod "pod-secrets-894b8166-8141-40cd-8588-a6401afd6f04": Phase="Pending", Reason="", readiness=false. Elapsed: 3.753208ms May 22 13:18:57.982: INFO: Pod "pod-secrets-894b8166-8141-40cd-8588-a6401afd6f04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008091267s May 22 13:18:59.997: INFO: Pod "pod-secrets-894b8166-8141-40cd-8588-a6401afd6f04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022600727s STEP: Saw pod success May 22 13:18:59.997: INFO: Pod "pod-secrets-894b8166-8141-40cd-8588-a6401afd6f04" satisfied condition "success or failure" May 22 13:19:00.000: INFO: Trying to get logs from node iruya-worker pod pod-secrets-894b8166-8141-40cd-8588-a6401afd6f04 container secret-volume-test: STEP: delete the pod May 22 13:19:00.023: INFO: Waiting for pod pod-secrets-894b8166-8141-40cd-8588-a6401afd6f04 to disappear May 22 13:19:00.027: INFO: Pod pod-secrets-894b8166-8141-40cd-8588-a6401afd6f04 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:19:00.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4473" for this suite. 
May 22 13:19:06.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:19:06.116: INFO: namespace secrets-4473 deletion completed in 6.084494023s • [SLOW TEST:10.235 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:19:06.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 22 13:19:06.197: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:19:14.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5380" for this suite. 
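The init-container contract being checked here is simple: with restartPolicy Always, all init containers must run to completion, in order, before any regular container starts. A sketch of the kind of pod spec that exercises it (images and commands are illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

var initContainerPodSpec = corev1.PodSpec{
	RestartPolicy: corev1.RestartPolicyAlways,
	// Init containers run one at a time, in order, each to completion.
	InitContainers: []corev1.Container{
		{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
		{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
	},
	// The regular container only starts once both inits have succeeded.
	Containers: []corev1.Container{
		{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
	},
}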
May 22 13:19:36.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:19:36.691: INFO: namespace init-container-5380 deletion completed in 22.101435047s • [SLOW TEST:30.575 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:19:36.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 22 13:19:41.305: INFO: Successfully updated pod "annotationupdate23d16551-9df0-4961-95fe-d3881776c81a" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:19:45.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2558" for this suite. 
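The interesting property in the annotations test is that a downwardAPI volume is live: after the test patches the pod's annotations ("Successfully updated pod ..."), the kubelet rewrites the projected file on its sync loop, and the test only has to wait for the new content to appear. A sketch of the volume involved (the volume and file names are assumptions):

package sketch

import corev1 "k8s.io/api/core/v1"

// A downwardAPI volume exposing the pod's annotations as a file; the
// kubelet refreshes the file when the annotations change.
var annotationsVolume = corev1.Volume{
	Name: "podinfo",
	VolumeSource: corev1.VolumeSource{
		DownwardAPI: &corev1.DownwardAPIVolumeSource{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path:     "annotations",
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
			}},
		},
	},
}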
May 22 13:20:07.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:20:07.441: INFO: namespace downward-api-2558 deletion completed in 22.108951888s • [SLOW TEST:30.750 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:20:07.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 22 13:20:07.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-10' May 22 13:20:07.650: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 22 13:20:07.651: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 May 22 13:20:09.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-10' May 22 13:20:09.796: INFO: stderr: "" May 22 13:20:09.796: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:20:09.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-10" for this suite. 
May 22 13:20:31.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:20:31.929: INFO: namespace kubectl-10 deletion completed in 22.130097451s • [SLOW TEST:24.487 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:20:31.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 13:20:32.007: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c27f7f73-3122-45ab-8b6c-c4a0f336f99a" in namespace "projected-6508" to be "success or failure" May 22 13:20:32.011: INFO: Pod "downwardapi-volume-c27f7f73-3122-45ab-8b6c-c4a0f336f99a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044814ms May 22 13:20:34.015: INFO: Pod "downwardapi-volume-c27f7f73-3122-45ab-8b6c-c4a0f336f99a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008539763s May 22 13:20:36.020: INFO: Pod "downwardapi-volume-c27f7f73-3122-45ab-8b6c-c4a0f336f99a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013113007s STEP: Saw pod success May 22 13:20:36.020: INFO: Pod "downwardapi-volume-c27f7f73-3122-45ab-8b6c-c4a0f336f99a" satisfied condition "success or failure" May 22 13:20:36.023: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c27f7f73-3122-45ab-8b6c-c4a0f336f99a container client-container: STEP: delete the pod May 22 13:20:36.078: INFO: Waiting for pod downwardapi-volume-c27f7f73-3122-45ab-8b6c-c4a0f336f99a to disappear May 22 13:20:36.123: INFO: Pod downwardapi-volume-c27f7f73-3122-45ab-8b6c-c4a0f336f99a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:20:36.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6508" for this suite. 
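Resource fields come through the downward API via resourceFieldRef rather than fieldRef. A sketch of the projection this test plausibly uses; the volume and file names are assumptions, while client-container is the container name from the log:

package sketch

import corev1 "k8s.io/api/core/v1"

// Projects the container's effective memory limit into a file.
var memoryLimitVolume = corev1.Volume{
	Name: "podinfo",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				DownwardAPI: &corev1.DownwardAPIProjection{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path: "memory_limit", // assumed file name
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "limits.memory",
						},
					}},
				},
			}},
		},
	},
}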
May 22 13:20:42.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:20:42.214: INFO: namespace projected-6508 deletion completed in 6.086141967s • [SLOW TEST:10.284 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:20:42.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 22 13:20:42.333: INFO: Waiting up to 5m0s for pod "pod-f6acd031-9b99-4b2b-a8af-40bbe2a8c3ed" in namespace "emptydir-3434" to be "success or failure" May 22 13:20:42.346: INFO: Pod "pod-f6acd031-9b99-4b2b-a8af-40bbe2a8c3ed": Phase="Pending", Reason="", readiness=false. Elapsed: 12.830585ms May 22 13:20:44.350: INFO: Pod "pod-f6acd031-9b99-4b2b-a8af-40bbe2a8c3ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016831905s May 22 13:20:46.354: INFO: Pod "pod-f6acd031-9b99-4b2b-a8af-40bbe2a8c3ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020906739s STEP: Saw pod success May 22 13:20:46.354: INFO: Pod "pod-f6acd031-9b99-4b2b-a8af-40bbe2a8c3ed" satisfied condition "success or failure" May 22 13:20:46.357: INFO: Trying to get logs from node iruya-worker pod pod-f6acd031-9b99-4b2b-a8af-40bbe2a8c3ed container test-container: STEP: delete the pod May 22 13:20:46.377: INFO: Waiting for pod pod-f6acd031-9b99-4b2b-a8af-40bbe2a8c3ed to disappear May 22 13:20:46.382: INFO: Pod pod-f6acd031-9b99-4b2b-a8af-40bbe2a8c3ed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:20:46.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3434" for this suite. 
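The (root,0777,tmpfs) triple in the test name encodes who writes the file, the expected mode, and the emptyDir medium. Only the tmpfs part shows up in the volume itself; the mode and user are exercised by the test container's command. Sketch:

package sketch

import corev1 "k8s.io/api/core/v1"

// An emptyDir backed by memory (tmpfs) instead of node disk.
var tmpfsVolume = corev1.Volume{
	Name: "test-volume",
	VolumeSource: corev1.VolumeSource{
		EmptyDir: &corev1.EmptyDirVolumeSource{
			Medium: corev1.StorageMediumMemory, // mount as tmpfs
		},
	},
}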
May 22 13:20:52.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:20:52.468: INFO: namespace emptydir-3434 deletion completed in 6.084074324s • [SLOW TEST:10.254 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:20:52.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:20:58.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3654" for this suite. 
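The watch test records the resourceVersion of each event produced by the background goroutine, then opens a fresh watch from every one of those versions and asserts that all watchers see the remaining versions in the same order. The key ingredient is starting a watch at an explicit resourceVersion; a sketch against the v1.15-era client-go (no context argument yet), assuming the events are driven through configMaps:

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// openWatchFrom replays events starting at a recorded resourceVersion.
// Two watches opened from the same version must deliver the same
// sequence of versions in the same order.
func openWatchFrom(cs kubernetes.Interface, ns, rv string) (watch.Interface, error) {
	return cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		ResourceVersion: rv,
	})
}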
May 22 13:21:04.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:21:04.306: INFO: namespace watch-3654 deletion completed in 6.180103366s • [SLOW TEST:11.837 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:21:04.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 22 13:21:04.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2516' May 22 13:21:04.648: INFO: stderr: "" May 22 13:21:04.648: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 22 13:21:05.653: INFO: Selector matched 1 pods for map[app:redis] May 22 13:21:05.653: INFO: Found 0 / 1 May 22 13:21:06.653: INFO: Selector matched 1 pods for map[app:redis] May 22 13:21:06.653: INFO: Found 0 / 1 May 22 13:21:07.653: INFO: Selector matched 1 pods for map[app:redis] May 22 13:21:07.653: INFO: Found 0 / 1 May 22 13:21:08.653: INFO: Selector matched 1 pods for map[app:redis] May 22 13:21:08.653: INFO: Found 1 / 1 May 22 13:21:08.653: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 22 13:21:08.656: INFO: Selector matched 1 pods for map[app:redis] May 22 13:21:08.656: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 22 13:21:08.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-bht2w --namespace=kubectl-2516 -p {"metadata":{"annotations":{"x":"y"}}}' May 22 13:21:08.751: INFO: stderr: "" May 22 13:21:08.751: INFO: stdout: "pod/redis-master-bht2w patched\n" STEP: checking annotations May 22 13:21:08.753: INFO: Selector matched 1 pods for map[app:redis] May 22 13:21:08.753: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:21:08.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2516" for this suite. 
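The kubectl patch invocation above is a strategic merge patch; the same operation through client-go is a one-liner. The v1.15-era signature is shown (newer releases add a context and a PatchOptions argument), and the patch body is exactly the one from the log:

package sketch

import (
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// annotatePod applies the same {"metadata":{"annotations":{"x":"y"}}}
// patch the test sends through kubectl.
func annotatePod(cs kubernetes.Interface, ns, name string) error {
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	_, err := cs.CoreV1().Pods(ns).Patch(name, types.StrategicMergePatchType, patch)
	return err
}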
May 22 13:21:30.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:21:30.850: INFO: namespace kubectl-2516 deletion completed in 22.094252242s • [SLOW TEST:26.544 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:21:30.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-5ae11d88-5651-44e0-9a88-7785331fe85d STEP: Creating a pod to test consume configMaps May 22 13:21:30.954: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-38e41f9d-6a2f-4c2f-adc6-4f1f7b3a5d06" in namespace "projected-5531" to be "success or failure" May 22 13:21:30.976: INFO: Pod "pod-projected-configmaps-38e41f9d-6a2f-4c2f-adc6-4f1f7b3a5d06": Phase="Pending", Reason="", readiness=false. Elapsed: 22.157009ms May 22 13:21:32.987: INFO: Pod "pod-projected-configmaps-38e41f9d-6a2f-4c2f-adc6-4f1f7b3a5d06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032956522s May 22 13:21:34.990: INFO: Pod "pod-projected-configmaps-38e41f9d-6a2f-4c2f-adc6-4f1f7b3a5d06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036494013s STEP: Saw pod success May 22 13:21:34.990: INFO: Pod "pod-projected-configmaps-38e41f9d-6a2f-4c2f-adc6-4f1f7b3a5d06" satisfied condition "success or failure" May 22 13:21:34.992: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-38e41f9d-6a2f-4c2f-adc6-4f1f7b3a5d06 container projected-configmap-volume-test: STEP: delete the pod May 22 13:21:35.026: INFO: Waiting for pod pod-projected-configmaps-38e41f9d-6a2f-4c2f-adc6-4f1f7b3a5d06 to disappear May 22 13:21:35.035: INFO: Pod pod-projected-configmaps-38e41f9d-6a2f-4c2f-adc6-4f1f7b3a5d06 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:21:35.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5531" for this suite. 
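The "as non-root" variants differ from the plain consumable tests only in the pod security context; the volume and item mapping are unchanged. A sketch, with an arbitrary non-zero UID standing in for whatever the suite actually uses:

package sketch

import corev1 "k8s.io/api/core/v1"

var nonRootUID int64 = 1000 // illustrative; any non-zero UID makes the point

// Runs every container in the pod as a non-root user, so the mounted
// configMap files must still be readable at that UID.
var nonRootSecurityContext = &corev1.PodSecurityContext{
	RunAsUser: &nonRootUID,
}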
May 22 13:21:41.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:21:41.118: INFO: namespace projected-5531 deletion completed in 6.080155308s • [SLOW TEST:10.268 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:21:41.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 22 13:21:41.199: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:21:51.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2964" for this suite. 
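The submitted-and-removed test is really about the deletion protocol: the pod is deleted gracefully, the kubelet observes the termination notice, and only then does the watch report the final deletion. Issuing such a graceful delete in the v1.15-era client-go looks roughly like this (the 30-second grace period is an assumption):

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deletePodGracefully asks the API server to terminate the pod with a
// grace period, rather than killing it immediately.
func deletePodGracefully(cs kubernetes.Interface, ns, name string) error {
	grace := int64(30) // assumed grace period
	return cs.CoreV1().Pods(ns).Delete(name, &metav1.DeleteOptions{
		GracePeriodSeconds: &grace,
	})
}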
May 22 13:21:57.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:21:58.011: INFO: namespace pods-2964 deletion completed in 6.082121149s • [SLOW TEST:16.893 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:21:58.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 13:21:58.075: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4bd070fb-6bcb-4581-a020-941561fda1ed" in namespace "downward-api-8574" to be "success or failure" May 22 13:21:58.078: INFO: Pod "downwardapi-volume-4bd070fb-6bcb-4581-a020-941561fda1ed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.301689ms May 22 13:22:00.082: INFO: Pod "downwardapi-volume-4bd070fb-6bcb-4581-a020-941561fda1ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007554323s May 22 13:22:02.086: INFO: Pod "downwardapi-volume-4bd070fb-6bcb-4581-a020-941561fda1ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011814608s STEP: Saw pod success May 22 13:22:02.086: INFO: Pod "downwardapi-volume-4bd070fb-6bcb-4581-a020-941561fda1ed" satisfied condition "success or failure" May 22 13:22:02.090: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4bd070fb-6bcb-4581-a020-941561fda1ed container client-container: STEP: delete the pod May 22 13:22:02.240: INFO: Waiting for pod downwardapi-volume-4bd070fb-6bcb-4581-a020-941561fda1ed to disappear May 22 13:22:02.299: INFO: Pod downwardapi-volume-4bd070fb-6bcb-4581-a020-941561fda1ed no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:22:02.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8574" for this suite. 
May 22 13:22:08.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:22:08.394: INFO: namespace downward-api-8574 deletion completed in 6.090245524s • [SLOW TEST:10.383 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:22:08.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-cc9752c9-d202-4311-965f-99a5d158e426 STEP: Creating configMap with name cm-test-opt-upd-a3fab013-16b6-4856-a0ad-ace1892becbe STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-cc9752c9-d202-4311-965f-99a5d158e426 STEP: Updating configmap cm-test-opt-upd-a3fab013-16b6-4856-a0ad-ace1892becbe STEP: Creating configMap with name cm-test-opt-create-fa781478-7d10-4edc-b900-0183a1602bf3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:22:16.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6115" for this suite. 
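The optional-updates test leans on the Optional flag: an optional configMap may be absent when the pod starts (the opt-del case), and the kubelet reconciles the volume contents as configMaps are deleted, updated and created afterwards, which is exactly the sequence of STEPs above. Sketch of one such volume, reusing the opt-create configMap name from the log:

package sketch

import corev1 "k8s.io/api/core/v1"

var optionalTrue = true

// An optional configMap volume: the pod starts even if the configMap
// does not exist yet, and the kubelet fills the volume in once it does.
var optionalConfigMapVolume = corev1.Volume{
	Name: "cm-volume",
	VolumeSource: corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{
				Name: "cm-test-opt-create-fa781478-7d10-4edc-b900-0183a1602bf3",
			},
			Optional: &optionalTrue,
		},
	},
}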
May 22 13:22:38.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:22:38.695: INFO: namespace projected-6115 deletion completed in 22.092497776s • [SLOW TEST:30.301 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:22:38.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-ce896a55-7934-4232-9501-bc42fc8fb86b in namespace container-probe-7033 May 22 13:22:42.795: INFO: Started pod liveness-ce896a55-7934-4232-9501-bc42fc8fb86b in namespace container-probe-7033 STEP: checking the pod's current state and verifying that restartCount is present May 22 13:22:42.798: INFO: Initial restart count of pod liveness-ce896a55-7934-4232-9501-bc42fc8fb86b is 0 May 22 13:23:02.839: INFO: Restart count of pod container-probe-7033/liveness-ce896a55-7934-4232-9501-bc42fc8fb86b is now 1 (20.0414791s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:23:02.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7033" for this suite. 
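The restart count going from 0 to 1 in about 20 seconds is the probe machinery working as designed: the test image serves /healthz, starts failing after a while, and the kubelet restarts the container. A sketch of the probe side; /healthz is from the test name, while the port and thresholds are assumptions:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// An HTTP liveness probe; repeated failures make the kubelet restart
// the container and bump restartCount.
var healthzProbe = &corev1.Probe{
	// corev1.Handler is the v1.15-era embedded type; later versions call it ProbeHandler.
	Handler: corev1.Handler{
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/healthz",
			Port: intstr.FromInt(8080), // assumed port
		},
	},
	InitialDelaySeconds: 15, // assumed
	FailureThreshold:    1,  // assumed
}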
May 22 13:23:08.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:23:08.967: INFO: namespace container-probe-7033 deletion completed in 6.085218322s • [SLOW TEST:30.272 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:23:08.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-5824/configmap-test-cdaff7c7-464c-40d7-a3e4-61bdad463301 STEP: Creating a pod to test consume configMaps May 22 13:23:09.051: INFO: Waiting up to 5m0s for pod "pod-configmaps-b3bd8469-db4c-4e7e-b84a-dd1c01cf0d95" in namespace "configmap-5824" to be "success or failure" May 22 13:23:09.055: INFO: Pod "pod-configmaps-b3bd8469-db4c-4e7e-b84a-dd1c01cf0d95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118968ms May 22 13:23:11.060: INFO: Pod "pod-configmaps-b3bd8469-db4c-4e7e-b84a-dd1c01cf0d95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008506821s May 22 13:23:13.063: INFO: Pod "pod-configmaps-b3bd8469-db4c-4e7e-b84a-dd1c01cf0d95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0124107s STEP: Saw pod success May 22 13:23:13.064: INFO: Pod "pod-configmaps-b3bd8469-db4c-4e7e-b84a-dd1c01cf0d95" satisfied condition "success or failure" May 22 13:23:13.066: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b3bd8469-db4c-4e7e-b84a-dd1c01cf0d95 container env-test: STEP: delete the pod May 22 13:23:13.087: INFO: Waiting for pod pod-configmaps-b3bd8469-db4c-4e7e-b84a-dd1c01cf0d95 to disappear May 22 13:23:13.091: INFO: Pod pod-configmaps-b3bd8469-db4c-4e7e-b84a-dd1c01cf0d95 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:23:13.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5824" for this suite. 
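Consuming a configMap through the environment uses configMapKeyRef, the configMap cousin of the downward API's fieldRef. Sketch, reusing the configMap name from the log; the key and variable name are assumptions:

package sketch

import corev1 "k8s.io/api/core/v1"

// An env var whose value is read from one key of a configMap at
// container start; unlike volume mounts, env vars do not update live.
var envTestContainer = corev1.Container{
	Name:  "env-test",
	Image: "busybox", // illustrative
	Env: []corev1.EnvVar{{
		Name: "CONFIG_DATA_1", // assumed variable name
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-cdaff7c7-464c-40d7-a3e4-61bdad463301",
				},
				Key: "data-1", // assumed key
			},
		},
	}},
}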
May 22 13:23:19.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:23:19.230: INFO: namespace configmap-5824 deletion completed in 6.1344374s • [SLOW TEST:10.262 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:23:19.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-53ff4c03-9fa9-400d-9f46-ebf06252dee3 STEP: Creating a pod to test consume secrets May 22 13:23:19.315: INFO: Waiting up to 5m0s for pod "pod-secrets-22915bb8-0c8f-4774-b729-311e0309964c" in namespace "secrets-3282" to be "success or failure" May 22 13:23:19.319: INFO: Pod "pod-secrets-22915bb8-0c8f-4774-b729-311e0309964c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.841633ms May 22 13:23:21.348: INFO: Pod "pod-secrets-22915bb8-0c8f-4774-b729-311e0309964c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032901107s May 22 13:23:23.352: INFO: Pod "pod-secrets-22915bb8-0c8f-4774-b729-311e0309964c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036654622s STEP: Saw pod success May 22 13:23:23.352: INFO: Pod "pod-secrets-22915bb8-0c8f-4774-b729-311e0309964c" satisfied condition "success or failure" May 22 13:23:23.355: INFO: Trying to get logs from node iruya-worker pod pod-secrets-22915bb8-0c8f-4774-b729-311e0309964c container secret-volume-test: STEP: delete the pod May 22 13:23:23.482: INFO: Waiting for pod pod-secrets-22915bb8-0c8f-4774-b729-311e0309964c to disappear May 22 13:23:23.493: INFO: Pod pod-secrets-22915bb8-0c8f-4774-b729-311e0309964c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:23:23.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3282" for this suite. 
May 22 13:23:29.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:23:29.613: INFO: namespace secrets-3282 deletion completed in 6.11433477s • [SLOW TEST:10.384 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:23:29.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin May 22 13:23:29.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9684 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 22 13:23:33.182: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0522 13:23:33.115743 1769 log.go:172] (0xc000141080) (0xc0009f6a00) Create stream\nI0522 13:23:33.115843 1769 log.go:172] (0xc000141080) (0xc0009f6a00) Stream added, broadcasting: 1\nI0522 13:23:33.120439 1769 log.go:172] (0xc000141080) Reply frame received for 1\nI0522 13:23:33.120481 1769 log.go:172] (0xc000141080) (0xc0009f6000) Create stream\nI0522 13:23:33.120490 1769 log.go:172] (0xc000141080) (0xc0009f6000) Stream added, broadcasting: 3\nI0522 13:23:33.121726 1769 log.go:172] (0xc000141080) Reply frame received for 3\nI0522 13:23:33.121767 1769 log.go:172] (0xc000141080) (0xc0006660a0) Create stream\nI0522 13:23:33.121782 1769 log.go:172] (0xc000141080) (0xc0006660a0) Stream added, broadcasting: 5\nI0522 13:23:33.122790 1769 log.go:172] (0xc000141080) Reply frame received for 5\nI0522 13:23:33.122846 1769 log.go:172] (0xc000141080) (0xc0009f60a0) Create stream\nI0522 13:23:33.122869 1769 log.go:172] (0xc000141080) (0xc0009f60a0) Stream added, broadcasting: 7\nI0522 13:23:33.123903 1769 log.go:172] (0xc000141080) Reply frame received for 7\nI0522 13:23:33.124039 1769 log.go:172] (0xc0009f6000) (3) Writing data frame\nI0522 13:23:33.124158 1769 log.go:172] (0xc0009f6000) (3) Writing data frame\nI0522 13:23:33.124994 1769 log.go:172] (0xc000141080) Data frame received for 5\nI0522 13:23:33.125017 1769 log.go:172] (0xc0006660a0) (5) Data frame handling\nI0522 13:23:33.125029 1769 log.go:172] (0xc0006660a0) (5) Data frame sent\nI0522 13:23:33.125774 1769 log.go:172] (0xc000141080) Data frame received for 5\nI0522 13:23:33.125795 1769 log.go:172] (0xc0006660a0) (5) Data frame handling\nI0522 13:23:33.125814 1769 log.go:172] (0xc0006660a0) (5) Data frame sent\nI0522 13:23:33.160014 1769 log.go:172] (0xc000141080) Data frame received for 7\nI0522 13:23:33.160045 1769 log.go:172] (0xc0009f60a0) (7) Data frame handling\nI0522 13:23:33.160084 1769 log.go:172] (0xc000141080) Data frame received for 5\nI0522 13:23:33.160138 1769 log.go:172] (0xc0006660a0) (5) Data frame handling\nI0522 13:23:33.160867 1769 log.go:172] (0xc000141080) Data frame received for 1\nI0522 13:23:33.160895 1769 log.go:172] (0xc0009f6a00) (1) Data frame handling\nI0522 13:23:33.160915 1769 log.go:172] (0xc000141080) (0xc0009f6000) Stream removed, broadcasting: 3\nI0522 13:23:33.160959 1769 log.go:172] (0xc0009f6a00) (1) Data frame sent\nI0522 13:23:33.160984 1769 log.go:172] (0xc000141080) (0xc0009f6a00) Stream removed, broadcasting: 1\nI0522 13:23:33.161072 1769 log.go:172] (0xc000141080) (0xc0009f6a00) Stream removed, broadcasting: 1\nI0522 13:23:33.161384 1769 log.go:172] (0xc000141080) (0xc0009f6000) Stream removed, broadcasting: 3\nI0522 13:23:33.161409 1769 log.go:172] (0xc000141080) (0xc0006660a0) Stream removed, broadcasting: 5\nI0522 13:23:33.161421 1769 log.go:172] (0xc000141080) (0xc0009f60a0) Stream removed, broadcasting: 7\nI0522 13:23:33.161603 1769 log.go:172] (0xc000141080) Go away received\n" May 22 13:23:33.182: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:23:35.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9684" for this suite. 
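------------------------------
The Kubectl run --rm spec above shells out to kubectl with the deprecated --generator=job/v1, attaches stdin, and then verifies the Job is gone. A hedged client-go equivalent of the create-then-delete part (the attach step is omitted) might look like the sketch below; the name "rm-demo-job" is invented, and the context-free Create/Delete signatures again assume 1.15-era client-go. Foreground propagation approximates the --rm cleanup, since it also removes the pods the Job created.

package main

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "rm-demo-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// --restart=OnFailure in the logged kubectl command
					// maps to this pod-level restart policy.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "main",
						Image:   "docker.io/library/busybox:1.29",
						Command: []string{"sh", "-c", "echo 'stdin closed'"},
					}},
				},
			},
		},
	}
	if _, err := client.BatchV1().Jobs("default").Create(job); err != nil {
		panic(err)
	}

	// --rm deletes the job once the attached command returns; foreground
	// propagation also waits for the job's pods to be removed.
	policy := metav1.DeletePropagationForeground
	if err := client.BatchV1().Jobs("default").Delete("rm-demo-job",
		&metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}
------------------------------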
May 22 13:23:41.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:23:41.280: INFO: namespace kubectl-9684 deletion completed in 6.08573313s • [SLOW TEST:11.666 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:23:41.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-d01c3906-51d2-4a9a-9b0b-e114558856d8 STEP: Creating a pod to test consume configMaps May 22 13:23:41.416: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd446e0d-2303-47a2-8173-5543f9b08204" in namespace "configmap-3094" to be "success or failure" May 22 13:23:41.421: INFO: Pod "pod-configmaps-cd446e0d-2303-47a2-8173-5543f9b08204": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135482ms May 22 13:23:43.424: INFO: Pod "pod-configmaps-cd446e0d-2303-47a2-8173-5543f9b08204": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007775312s May 22 13:23:45.429: INFO: Pod "pod-configmaps-cd446e0d-2303-47a2-8173-5543f9b08204": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012782547s STEP: Saw pod success May 22 13:23:45.429: INFO: Pod "pod-configmaps-cd446e0d-2303-47a2-8173-5543f9b08204" satisfied condition "success or failure" May 22 13:23:45.433: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-cd446e0d-2303-47a2-8173-5543f9b08204 container configmap-volume-test: STEP: delete the pod May 22 13:23:45.524: INFO: Waiting for pod pod-configmaps-cd446e0d-2303-47a2-8173-5543f9b08204 to disappear May 22 13:23:45.529: INFO: Pod pod-configmaps-cd446e0d-2303-47a2-8173-5543f9b08204 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:23:45.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3094" for this suite. 
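------------------------------
Each volume spec in this log prints the same 'Waiting up to 5m0s for pod ... to be "success or failure"' poll, ticking through Pending until the pod reports Succeeded. Below is a rough reconstruction of that wait loop using wait.PollImmediate; the pod name "pod-configmaps-demo" and the 2-second interval are illustrative assumptions, not the framework's exact values.

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll every 2s, up to the 5m0s budget the log reports.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("default").Get("pod-configmaps-demo", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // condition "success or failure" satisfied
		case corev1.PodFailed:
			return false, fmt.Errorf("pod failed")
		}
		return false, nil // still Pending/Running: keep polling
	})
	fmt.Println("wait result:", err)
}
------------------------------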
May 22 13:23:51.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:23:51.672: INFO: namespace configmap-3094 deletion completed in 6.139620444s • [SLOW TEST:10.391 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:23:51.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-493 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 22 13:23:51.767: INFO: Found 0 stateful pods, waiting for 3 May 22 13:24:01.773: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 22 13:24:01.773: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 22 13:24:01.773: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 22 13:24:11.774: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 22 13:24:11.774: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 22 13:24:11.774: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 22 13:24:11.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-493 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 13:24:12.045: INFO: stderr: "I0522 13:24:11.916748 1793 log.go:172] (0xc0003cc420) (0xc00061a820) Create stream\nI0522 13:24:11.916831 1793 log.go:172] (0xc0003cc420) (0xc00061a820) Stream added, broadcasting: 1\nI0522 13:24:11.919599 1793 log.go:172] (0xc0003cc420) Reply frame received for 1\nI0522 13:24:11.919645 1793 log.go:172] (0xc0003cc420) (0xc000900000) Create stream\nI0522 13:24:11.919659 1793 log.go:172] (0xc0003cc420) (0xc000900000) Stream added, broadcasting: 3\nI0522 13:24:11.920698 1793 log.go:172] (0xc0003cc420) Reply frame received for 3\nI0522 13:24:11.920744 1793 log.go:172] (0xc0003cc420) (0xc00075e000) Create stream\nI0522 13:24:11.920763 1793 log.go:172] (0xc0003cc420) (0xc00075e000) Stream added, broadcasting: 5\nI0522 13:24:11.922177 1793 log.go:172] (0xc0003cc420) Reply 
frame received for 5\nI0522 13:24:12.006208 1793 log.go:172] (0xc0003cc420) Data frame received for 5\nI0522 13:24:12.006249 1793 log.go:172] (0xc00075e000) (5) Data frame handling\nI0522 13:24:12.006272 1793 log.go:172] (0xc00075e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0522 13:24:12.036209 1793 log.go:172] (0xc0003cc420) Data frame received for 5\nI0522 13:24:12.036269 1793 log.go:172] (0xc00075e000) (5) Data frame handling\nI0522 13:24:12.036300 1793 log.go:172] (0xc0003cc420) Data frame received for 3\nI0522 13:24:12.036317 1793 log.go:172] (0xc000900000) (3) Data frame handling\nI0522 13:24:12.036335 1793 log.go:172] (0xc000900000) (3) Data frame sent\nI0522 13:24:12.036546 1793 log.go:172] (0xc0003cc420) Data frame received for 3\nI0522 13:24:12.036582 1793 log.go:172] (0xc000900000) (3) Data frame handling\nI0522 13:24:12.038441 1793 log.go:172] (0xc0003cc420) Data frame received for 1\nI0522 13:24:12.038471 1793 log.go:172] (0xc00061a820) (1) Data frame handling\nI0522 13:24:12.038487 1793 log.go:172] (0xc00061a820) (1) Data frame sent\nI0522 13:24:12.038503 1793 log.go:172] (0xc0003cc420) (0xc00061a820) Stream removed, broadcasting: 1\nI0522 13:24:12.038624 1793 log.go:172] (0xc0003cc420) Go away received\nI0522 13:24:12.038979 1793 log.go:172] (0xc0003cc420) (0xc00061a820) Stream removed, broadcasting: 1\nI0522 13:24:12.039004 1793 log.go:172] (0xc0003cc420) (0xc000900000) Stream removed, broadcasting: 3\nI0522 13:24:12.039015 1793 log.go:172] (0xc0003cc420) (0xc00075e000) Stream removed, broadcasting: 5\n" May 22 13:24:12.045: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 13:24:12.045: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 22 13:24:12.096: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 22 13:24:22.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-493 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:24:22.388: INFO: stderr: "I0522 13:24:22.280126 1813 log.go:172] (0xc000116790) (0xc0007e26e0) Create stream\nI0522 13:24:22.280195 1813 log.go:172] (0xc000116790) (0xc0007e26e0) Stream added, broadcasting: 1\nI0522 13:24:22.282447 1813 log.go:172] (0xc000116790) Reply frame received for 1\nI0522 13:24:22.282479 1813 log.go:172] (0xc000116790) (0xc0007481e0) Create stream\nI0522 13:24:22.282487 1813 log.go:172] (0xc000116790) (0xc0007481e0) Stream added, broadcasting: 3\nI0522 13:24:22.283525 1813 log.go:172] (0xc000116790) Reply frame received for 3\nI0522 13:24:22.283583 1813 log.go:172] (0xc000116790) (0xc0007e2780) Create stream\nI0522 13:24:22.283601 1813 log.go:172] (0xc000116790) (0xc0007e2780) Stream added, broadcasting: 5\nI0522 13:24:22.285923 1813 log.go:172] (0xc000116790) Reply frame received for 5\nI0522 13:24:22.381561 1813 log.go:172] (0xc000116790) Data frame received for 3\nI0522 13:24:22.381602 1813 log.go:172] (0xc0007481e0) (3) Data frame handling\nI0522 13:24:22.381624 1813 log.go:172] (0xc0007481e0) (3) Data frame sent\nI0522 13:24:22.381648 1813 log.go:172] (0xc000116790) Data frame received for 3\nI0522 13:24:22.381667 1813 log.go:172] (0xc0007481e0) (3) Data frame handling\nI0522 13:24:22.381714 1813 log.go:172] 
(0xc000116790) Data frame received for 5\nI0522 13:24:22.381743 1813 log.go:172] (0xc0007e2780) (5) Data frame handling\nI0522 13:24:22.381775 1813 log.go:172] (0xc0007e2780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0522 13:24:22.381792 1813 log.go:172] (0xc000116790) Data frame received for 5\nI0522 13:24:22.381819 1813 log.go:172] (0xc0007e2780) (5) Data frame handling\nI0522 13:24:22.383161 1813 log.go:172] (0xc000116790) Data frame received for 1\nI0522 13:24:22.383193 1813 log.go:172] (0xc0007e26e0) (1) Data frame handling\nI0522 13:24:22.383218 1813 log.go:172] (0xc0007e26e0) (1) Data frame sent\nI0522 13:24:22.383231 1813 log.go:172] (0xc000116790) (0xc0007e26e0) Stream removed, broadcasting: 1\nI0522 13:24:22.383380 1813 log.go:172] (0xc000116790) Go away received\nI0522 13:24:22.383658 1813 log.go:172] (0xc000116790) (0xc0007e26e0) Stream removed, broadcasting: 1\nI0522 13:24:22.383703 1813 log.go:172] (0xc000116790) (0xc0007481e0) Stream removed, broadcasting: 3\nI0522 13:24:22.383728 1813 log.go:172] (0xc000116790) (0xc0007e2780) Stream removed, broadcasting: 5\n" May 22 13:24:22.389: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 13:24:22.389: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 13:24:32.408: INFO: Waiting for StatefulSet statefulset-493/ss2 to complete update May 22 13:24:32.408: INFO: Waiting for Pod statefulset-493/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 22 13:24:32.408: INFO: Waiting for Pod statefulset-493/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 22 13:24:42.416: INFO: Waiting for StatefulSet statefulset-493/ss2 to complete update May 22 13:24:42.416: INFO: Waiting for Pod statefulset-493/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision May 22 13:24:52.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-493 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 22 13:24:52.675: INFO: stderr: "I0522 13:24:52.538138 1834 log.go:172] (0xc000116fd0) (0xc0003a08c0) Create stream\nI0522 13:24:52.538196 1834 log.go:172] (0xc000116fd0) (0xc0003a08c0) Stream added, broadcasting: 1\nI0522 13:24:52.542210 1834 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0522 13:24:52.542236 1834 log.go:172] (0xc000116fd0) (0xc0003a0140) Create stream\nI0522 13:24:52.542243 1834 log.go:172] (0xc000116fd0) (0xc0003a0140) Stream added, broadcasting: 3\nI0522 13:24:52.543028 1834 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0522 13:24:52.543066 1834 log.go:172] (0xc000116fd0) (0xc00033a000) Create stream\nI0522 13:24:52.543077 1834 log.go:172] (0xc000116fd0) (0xc00033a000) Stream added, broadcasting: 5\nI0522 13:24:52.543823 1834 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0522 13:24:52.638021 1834 log.go:172] (0xc000116fd0) Data frame received for 5\nI0522 13:24:52.638054 1834 log.go:172] (0xc00033a000) (5) Data frame handling\nI0522 13:24:52.638072 1834 log.go:172] (0xc00033a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0522 13:24:52.667175 1834 log.go:172] (0xc000116fd0) Data frame received for 3\nI0522 13:24:52.667218 1834 log.go:172] (0xc0003a0140) (3) Data frame handling\nI0522 13:24:52.667240 1834 log.go:172] (0xc0003a0140) (3) Data frame sent\nI0522 13:24:52.667262 
1834 log.go:172] (0xc000116fd0) Data frame received for 3\nI0522 13:24:52.667284 1834 log.go:172] (0xc0003a0140) (3) Data frame handling\nI0522 13:24:52.667477 1834 log.go:172] (0xc000116fd0) Data frame received for 5\nI0522 13:24:52.667507 1834 log.go:172] (0xc00033a000) (5) Data frame handling\nI0522 13:24:52.669389 1834 log.go:172] (0xc000116fd0) Data frame received for 1\nI0522 13:24:52.669407 1834 log.go:172] (0xc0003a08c0) (1) Data frame handling\nI0522 13:24:52.669421 1834 log.go:172] (0xc0003a08c0) (1) Data frame sent\nI0522 13:24:52.669566 1834 log.go:172] (0xc000116fd0) (0xc0003a08c0) Stream removed, broadcasting: 1\nI0522 13:24:52.669585 1834 log.go:172] (0xc000116fd0) Go away received\nI0522 13:24:52.669989 1834 log.go:172] (0xc000116fd0) (0xc0003a08c0) Stream removed, broadcasting: 1\nI0522 13:24:52.670013 1834 log.go:172] (0xc000116fd0) (0xc0003a0140) Stream removed, broadcasting: 3\nI0522 13:24:52.670025 1834 log.go:172] (0xc000116fd0) (0xc00033a000) Stream removed, broadcasting: 5\n" May 22 13:24:52.675: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 22 13:24:52.675: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 22 13:25:02.703: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 22 13:25:12.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-493 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 22 13:25:12.975: INFO: stderr: "I0522 13:25:12.878787 1853 log.go:172] (0xc000944370) (0xc000822780) Create stream\nI0522 13:25:12.878848 1853 log.go:172] (0xc000944370) (0xc000822780) Stream added, broadcasting: 1\nI0522 13:25:12.880935 1853 log.go:172] (0xc000944370) Reply frame received for 1\nI0522 13:25:12.880977 1853 log.go:172] (0xc000944370) (0xc0009ca000) Create stream\nI0522 13:25:12.880992 1853 log.go:172] (0xc000944370) (0xc0009ca000) Stream added, broadcasting: 3\nI0522 13:25:12.882281 1853 log.go:172] (0xc000944370) Reply frame received for 3\nI0522 13:25:12.882318 1853 log.go:172] (0xc000944370) (0xc000822820) Create stream\nI0522 13:25:12.882327 1853 log.go:172] (0xc000944370) (0xc000822820) Stream added, broadcasting: 5\nI0522 13:25:12.883065 1853 log.go:172] (0xc000944370) Reply frame received for 5\nI0522 13:25:12.969990 1853 log.go:172] (0xc000944370) Data frame received for 3\nI0522 13:25:12.970018 1853 log.go:172] (0xc0009ca000) (3) Data frame handling\nI0522 13:25:12.970027 1853 log.go:172] (0xc0009ca000) (3) Data frame sent\nI0522 13:25:12.970033 1853 log.go:172] (0xc000944370) Data frame received for 3\nI0522 13:25:12.970038 1853 log.go:172] (0xc0009ca000) (3) Data frame handling\nI0522 13:25:12.970064 1853 log.go:172] (0xc000944370) Data frame received for 5\nI0522 13:25:12.970070 1853 log.go:172] (0xc000822820) (5) Data frame handling\nI0522 13:25:12.970077 1853 log.go:172] (0xc000822820) (5) Data frame sent\nI0522 13:25:12.970082 1853 log.go:172] (0xc000944370) Data frame received for 5\nI0522 13:25:12.970088 1853 log.go:172] (0xc000822820) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0522 13:25:12.971561 1853 log.go:172] (0xc000944370) Data frame received for 1\nI0522 13:25:12.971583 1853 log.go:172] (0xc000822780) (1) Data frame handling\nI0522 13:25:12.971594 1853 log.go:172] (0xc000822780) (1) Data frame sent\nI0522 13:25:12.971606 1853 log.go:172] (0xc000944370) (0xc000822780) Stream 
removed, broadcasting: 1\nI0522 13:25:12.971626 1853 log.go:172] (0xc000944370) Go away received\nI0522 13:25:12.971963 1853 log.go:172] (0xc000944370) (0xc000822780) Stream removed, broadcasting: 1\nI0522 13:25:12.971986 1853 log.go:172] (0xc000944370) (0xc0009ca000) Stream removed, broadcasting: 3\nI0522 13:25:12.971994 1853 log.go:172] (0xc000944370) (0xc000822820) Stream removed, broadcasting: 5\n" May 22 13:25:12.975: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 22 13:25:12.975: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 22 13:25:23.116: INFO: Waiting for StatefulSet statefulset-493/ss2 to complete update May 22 13:25:23.116: INFO: Waiting for Pod statefulset-493/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 22 13:25:23.116: INFO: Waiting for Pod statefulset-493/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 22 13:25:33.169: INFO: Waiting for StatefulSet statefulset-493/ss2 to complete update May 22 13:25:33.169: INFO: Waiting for Pod statefulset-493/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 22 13:25:43.123: INFO: Deleting all statefulset in ns statefulset-493 May 22 13:25:43.126: INFO: Scaling statefulset ss2 to 0 May 22 13:26:03.152: INFO: Waiting for statefulset status.replicas updated to 0 May 22 13:26:03.155: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:26:03.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-493" for this suite. 
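------------------------------
The rolling-update spec above flips the template image from nginx:1.14-alpine to nginx:1.15-alpine, waits for the pods' revision hashes (ss2-6c5cd755cd / ss2-7c9b54fd4c) to converge, then repeats the edit with the old image to roll back. A plausible client-go sketch of that template update follows; the RetryOnConflict wrapper is an assumption about handling concurrent writers, not necessarily what the e2e helpers do, and rolling back is the same operation with the previous image string.

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// RetryOnConflict re-reads and re-applies the change if another writer
	// bumped resourceVersion between our Get and Update.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ss, err := client.AppsV1().StatefulSets("statefulset-493").Get("ss2", metav1.GetOptions{})
		if err != nil {
			return err
		}
		// Changing the pod template creates a new controller revision and
		// triggers the rolling update observed in the log.
		ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.15-alpine"
		_, err = client.AppsV1().StatefulSets("statefulset-493").Update(ss)
		return err
	})
	if err != nil {
		panic(err)
	}
}
------------------------------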
May 22 13:26:09.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:26:09.278: INFO: namespace statefulset-493 deletion completed in 6.107622518s • [SLOW TEST:137.606 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:26:09.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-ssds4 in namespace proxy-6190 I0522 13:26:09.408199 6 runners.go:180] Created replication controller with name: proxy-service-ssds4, namespace: proxy-6190, replica count: 1 I0522 13:26:10.458709 6 runners.go:180] proxy-service-ssds4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0522 13:26:11.458972 6 runners.go:180] proxy-service-ssds4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0522 13:26:12.459221 6 runners.go:180] proxy-service-ssds4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0522 13:26:13.459457 6 runners.go:180] proxy-service-ssds4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0522 13:26:14.459711 6 runners.go:180] proxy-service-ssds4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0522 13:26:15.459940 6 runners.go:180] proxy-service-ssds4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0522 13:26:16.460155 6 runners.go:180] proxy-service-ssds4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0522 13:26:17.460374 6 runners.go:180] proxy-service-ssds4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 22 13:26:17.464: INFO: setup took 8.107960856s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 22 13:26:17.471: INFO: (0) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 6.854172ms) May 22 13:26:17.471: INFO: (0) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 7.182304ms) May 22 
13:26:17.473: INFO: (0) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname2/proxy/: bar (200; 9.130758ms) May 22 13:26:17.473: INFO: (0) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 9.469181ms) May 22 13:26:17.473: INFO: (0) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname1/proxy/: foo (200; 9.64486ms) May 22 13:26:17.473: INFO: (0) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... (200; 9.487803ms) May 22 13:26:17.473: INFO: (0) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname2/proxy/: bar (200; 9.642703ms) May 22 13:26:17.474: INFO: (0) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 9.922954ms) May 22 13:26:17.474: INFO: (0) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:1080/proxy/: test<... (200; 9.899211ms) May 22 13:26:17.474: INFO: (0) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6/proxy/: test (200; 10.468782ms) May 22 13:26:17.474: INFO: (0) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname1/proxy/: foo (200; 10.497261ms) May 22 13:26:17.476: INFO: (0) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test (200; 4.115863ms) May 22 13:26:17.489: INFO: (1) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 4.301127ms) May 22 13:26:17.489: INFO: (1) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... (200; 4.701464ms) May 22 13:26:17.489: INFO: (1) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 4.891894ms) May 22 13:26:17.489: INFO: (1) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 5.218042ms) May 22 13:26:17.489: INFO: (1) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test<... 
(200; 6.964307ms) May 22 13:26:17.491: INFO: (1) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 6.948464ms) May 22 13:26:17.492: INFO: (1) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname1/proxy/: tls baz (200; 7.957943ms) May 22 13:26:17.493: INFO: (1) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname2/proxy/: tls qux (200; 8.58713ms) May 22 13:26:17.493: INFO: (1) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:462/proxy/: tls qux (200; 8.6181ms) May 22 13:26:17.496: INFO: (2) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 2.9071ms) May 22 13:26:17.496: INFO: (2) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname2/proxy/: bar (200; 3.318181ms) May 22 13:26:17.496: INFO: (2) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test (200; 3.561245ms) May 22 13:26:17.497: INFO: (2) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 3.871083ms) May 22 13:26:17.497: INFO: (2) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname1/proxy/: foo (200; 3.853122ms) May 22 13:26:17.497: INFO: (2) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname2/proxy/: tls qux (200; 4.506475ms) May 22 13:26:17.498: INFO: (2) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:462/proxy/: tls qux (200; 4.816789ms) May 22 13:26:17.498: INFO: (2) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname2/proxy/: bar (200; 4.798351ms) May 22 13:26:17.498: INFO: (2) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 4.889344ms) May 22 13:26:17.498: INFO: (2) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 4.830961ms) May 22 13:26:17.498: INFO: (2) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... (200; 4.938502ms) May 22 13:26:17.498: INFO: (2) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:1080/proxy/: test<... (200; 4.857892ms) May 22 13:26:17.500: INFO: (2) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 7.208412ms) May 22 13:26:17.500: INFO: (2) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname1/proxy/: tls baz (200; 7.237354ms) May 22 13:26:17.503: INFO: (3) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 2.585541ms) May 22 13:26:17.503: INFO: (3) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... (200; 3.063793ms) May 22 13:26:17.504: INFO: (3) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:462/proxy/: tls qux (200; 4.203743ms) May 22 13:26:17.505: INFO: (3) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 4.786446ms) May 22 13:26:17.505: INFO: (3) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 5.053353ms) May 22 13:26:17.505: INFO: (3) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 5.048392ms) May 22 13:26:17.505: INFO: (3) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 5.149849ms) May 22 13:26:17.506: INFO: (3) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:1080/proxy/: test<... 
(200; 5.280044ms) May 22 13:26:17.506: INFO: (3) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test (200; 6.369011ms) May 22 13:26:17.507: INFO: (3) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname1/proxy/: tls baz (200; 6.416536ms) May 22 13:26:17.507: INFO: (3) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname2/proxy/: bar (200; 6.453703ms) May 22 13:26:17.507: INFO: (3) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname2/proxy/: tls qux (200; 6.353772ms) May 22 13:26:17.507: INFO: (3) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname2/proxy/: bar (200; 6.4192ms) May 22 13:26:17.513: INFO: (4) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:1080/proxy/: test<... (200; 6.703366ms) May 22 13:26:17.513: INFO: (4) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname2/proxy/: bar (200; 6.588231ms) May 22 13:26:17.513: INFO: (4) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 6.598084ms) May 22 13:26:17.515: INFO: (4) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6/proxy/: test (200; 8.494498ms) May 22 13:26:17.515: INFO: (4) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 8.536389ms) May 22 13:26:17.515: INFO: (4) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... (200; 8.60022ms) May 22 13:26:17.516: INFO: (4) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test (200; 2.811527ms) May 22 13:26:17.532: INFO: (5) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 2.971716ms) May 22 13:26:17.533: INFO: (5) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 3.086694ms) May 22 13:26:17.533: INFO: (5) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:1080/proxy/: test<... (200; 3.139247ms) May 22 13:26:17.533: INFO: (5) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 3.172825ms) May 22 13:26:17.533: INFO: (5) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:462/proxy/: tls qux (200; 3.20846ms) May 22 13:26:17.533: INFO: (5) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... (200; 3.093873ms) May 22 13:26:17.533: INFO: (5) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 3.13842ms) May 22 13:26:17.533: INFO: (5) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 3.256243ms) May 22 13:26:17.534: INFO: (5) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: ... (200; 2.409047ms) May 22 13:26:17.539: INFO: (6) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 3.463048ms) May 22 13:26:17.540: INFO: (6) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 4.391084ms) May 22 13:26:17.540: INFO: (6) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 4.533302ms) May 22 13:26:17.540: INFO: (6) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 4.624311ms) May 22 13:26:17.540: INFO: (6) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:1080/proxy/: test<... 
(200; 4.742164ms) May 22 13:26:17.540: INFO: (6) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6/proxy/: test (200; 4.742323ms) May 22 13:26:17.540: INFO: (6) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test (200; 2.339949ms) May 22 13:26:17.544: INFO: (7) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:1080/proxy/: test<... (200; 2.33005ms) May 22 13:26:17.550: INFO: (7) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 7.580028ms) May 22 13:26:17.550: INFO: (7) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname1/proxy/: foo (200; 7.58825ms) May 22 13:26:17.550: INFO: (7) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: ... (200; 7.561499ms) May 22 13:26:17.550: INFO: (7) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 7.586255ms) May 22 13:26:17.550: INFO: (7) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 7.798251ms) May 22 13:26:17.550: INFO: (7) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:462/proxy/: tls qux (200; 7.738778ms) May 22 13:26:17.550: INFO: (7) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname2/proxy/: bar (200; 8.148284ms) May 22 13:26:17.550: INFO: (7) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname1/proxy/: foo (200; 8.356161ms) May 22 13:26:17.551: INFO: (7) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname2/proxy/: tls qux (200; 8.36286ms) May 22 13:26:17.551: INFO: (7) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname2/proxy/: bar (200; 8.363397ms) May 22 13:26:17.551: INFO: (7) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname1/proxy/: tls baz (200; 8.681301ms) May 22 13:26:17.551: INFO: (7) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 8.585253ms) May 22 13:26:17.553: INFO: (8) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test<... (200; 3.995774ms) May 22 13:26:17.555: INFO: (8) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 4.088991ms) May 22 13:26:17.555: INFO: (8) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 4.188158ms) May 22 13:26:17.555: INFO: (8) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 4.18596ms) May 22 13:26:17.555: INFO: (8) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6/proxy/: test (200; 4.424596ms) May 22 13:26:17.555: INFO: (8) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:462/proxy/: tls qux (200; 4.375064ms) May 22 13:26:17.555: INFO: (8) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 4.421605ms) May 22 13:26:17.555: INFO: (8) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 4.535091ms) May 22 13:26:17.555: INFO: (8) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... 
(200; 4.443751ms) May 22 13:26:17.556: INFO: (8) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname2/proxy/: bar (200; 4.88515ms) May 22 13:26:17.556: INFO: (8) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname2/proxy/: bar (200; 4.983001ms) May 22 13:26:17.556: INFO: (8) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname1/proxy/: foo (200; 4.952062ms) May 22 13:26:17.556: INFO: (8) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname2/proxy/: tls qux (200; 5.015739ms) May 22 13:26:17.556: INFO: (8) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname1/proxy/: foo (200; 5.055533ms) May 22 13:26:17.556: INFO: (8) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname1/proxy/: tls baz (200; 5.185982ms) May 22 13:26:17.559: INFO: (9) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 2.356534ms) May 22 13:26:17.559: INFO: (9) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... (200; 2.292752ms) May 22 13:26:17.560: INFO: (9) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 3.388733ms) May 22 13:26:17.560: INFO: (9) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6/proxy/: test (200; 3.400255ms) May 22 13:26:17.560: INFO: (9) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 3.737633ms) May 22 13:26:17.560: INFO: (9) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:462/proxy/: tls qux (200; 4.178611ms) May 22 13:26:17.560: INFO: (9) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test<... (200; 4.969411ms) May 22 13:26:17.561: INFO: (9) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname2/proxy/: tls qux (200; 5.017414ms) May 22 13:26:17.561: INFO: (9) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname2/proxy/: bar (200; 5.138935ms) May 22 13:26:17.561: INFO: (9) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname1/proxy/: tls baz (200; 5.157525ms) May 22 13:26:17.561: INFO: (9) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 5.11493ms) May 22 13:26:17.561: INFO: (9) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname1/proxy/: foo (200; 5.023393ms) May 22 13:26:17.561: INFO: (9) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname1/proxy/: foo (200; 5.242505ms) May 22 13:26:17.567: INFO: (10) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:462/proxy/: tls qux (200; 5.423372ms) May 22 13:26:17.568: INFO: (10) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname1/proxy/: foo (200; 6.286146ms) May 22 13:26:17.568: INFO: (10) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname1/proxy/: tls baz (200; 6.443615ms) May 22 13:26:17.568: INFO: (10) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test (200; 6.469785ms) May 22 13:26:17.568: INFO: (10) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname2/proxy/: bar (200; 6.471133ms) May 22 13:26:17.568: INFO: (10) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:1080/proxy/: test<... 
(200; 6.606693ms) May 22 13:26:17.568: INFO: (10) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 6.518748ms) May 22 13:26:17.568: INFO: (10) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname2/proxy/: bar (200; 6.783084ms) May 22 13:26:17.568: INFO: (10) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 6.733478ms) May 22 13:26:17.568: INFO: (10) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 6.773749ms) May 22 13:26:17.568: INFO: (10) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname2/proxy/: tls qux (200; 6.797358ms) May 22 13:26:17.568: INFO: (10) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... (200; 6.862597ms) May 22 13:26:17.568: INFO: (10) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 6.891673ms) May 22 13:26:17.569: INFO: (10) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 7.388286ms) May 22 13:26:17.573: INFO: (11) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 3.800959ms) May 22 13:26:17.573: INFO: (11) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 3.773362ms) May 22 13:26:17.573: INFO: (11) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:1080/proxy/: test<... (200; 3.797327ms) May 22 13:26:17.573: INFO: (11) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test (200; 4.994903ms) May 22 13:26:17.574: INFO: (11) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 5.066451ms) May 22 13:26:17.574: INFO: (11) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname2/proxy/: bar (200; 5.086296ms) May 22 13:26:17.574: INFO: (11) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... (200; 5.108205ms) May 22 13:26:17.574: INFO: (11) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname1/proxy/: tls baz (200; 5.085917ms) May 22 13:26:17.574: INFO: (11) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname1/proxy/: foo (200; 5.11521ms) May 22 13:26:17.574: INFO: (11) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 5.478661ms) May 22 13:26:17.575: INFO: (11) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname2/proxy/: bar (200; 5.632941ms) May 22 13:26:17.577: INFO: (12) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... (200; 2.344268ms) May 22 13:26:17.577: INFO: (12) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 2.297734ms) May 22 13:26:17.578: INFO: (12) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:1080/proxy/: test<... 
(200; 2.92365ms) May 22 13:26:17.579: INFO: (12) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 3.881509ms) May 22 13:26:17.579: INFO: (12) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 3.223226ms) May 22 13:26:17.579: INFO: (12) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:462/proxy/: tls qux (200; 3.156581ms) May 22 13:26:17.579: INFO: (12) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 3.223758ms) May 22 13:26:17.579: INFO: (12) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test (200; 3.78894ms) May 22 13:26:17.579: INFO: (12) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname2/proxy/: bar (200; 3.874152ms) May 22 13:26:17.579: INFO: (12) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 3.658895ms) May 22 13:26:17.579: INFO: (12) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname1/proxy/: foo (200; 3.534429ms) May 22 13:26:17.579: INFO: (12) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname2/proxy/: tls qux (200; 4.594163ms) May 22 13:26:17.579: INFO: (12) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname2/proxy/: bar (200; 4.374271ms) May 22 13:26:17.579: INFO: (12) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname1/proxy/: tls baz (200; 4.146836ms) May 22 13:26:17.580: INFO: (12) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname1/proxy/: foo (200; 3.979701ms) May 22 13:26:17.584: INFO: (13) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... (200; 3.776313ms) May 22 13:26:17.584: INFO: (13) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 3.822972ms) May 22 13:26:17.584: INFO: (13) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:462/proxy/: tls qux (200; 4.117995ms) May 22 13:26:17.584: INFO: (13) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:1080/proxy/: test<... (200; 4.15857ms) May 22 13:26:17.584: INFO: (13) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 4.076717ms) May 22 13:26:17.584: INFO: (13) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname1/proxy/: tls baz (200; 4.134643ms) May 22 13:26:17.584: INFO: (13) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6/proxy/: test (200; 4.253962ms) May 22 13:26:17.584: INFO: (13) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 4.109766ms) May 22 13:26:17.584: INFO: (13) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 4.204562ms) May 22 13:26:17.584: INFO: (13) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname1/proxy/: foo (200; 4.190194ms) May 22 13:26:17.584: INFO: (13) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 4.200067ms) May 22 13:26:17.584: INFO: (13) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: ... 
(200; 4.742528ms) May 22 13:26:17.590: INFO: (14) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 4.661554ms) May 22 13:26:17.590: INFO: (14) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 4.832805ms) May 22 13:26:17.590: INFO: (14) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 4.746849ms) May 22 13:26:17.590: INFO: (14) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test (200; 4.916827ms) May 22 13:26:17.590: INFO: (14) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:1080/proxy/: test<... (200; 4.930312ms) May 22 13:26:17.590: INFO: (14) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname2/proxy/: tls qux (200; 5.007768ms) May 22 13:26:17.593: INFO: (15) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 2.337553ms) May 22 13:26:17.593: INFO: (15) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:462/proxy/: tls qux (200; 2.98093ms) May 22 13:26:17.593: INFO: (15) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test<... (200; 3.155522ms) May 22 13:26:17.594: INFO: (15) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname1/proxy/: foo (200; 3.355949ms) May 22 13:26:17.594: INFO: (15) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname1/proxy/: tls baz (200; 3.966124ms) May 22 13:26:17.594: INFO: (15) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6/proxy/: test (200; 2.963458ms) May 22 13:26:17.595: INFO: (15) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... (200; 3.463738ms) May 22 13:26:17.595: INFO: (15) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname2/proxy/: tls qux (200; 3.69681ms) May 22 13:26:17.595: INFO: (15) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 3.903968ms) May 22 13:26:17.595: INFO: (15) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname2/proxy/: bar (200; 4.934609ms) May 22 13:26:17.595: INFO: (15) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname2/proxy/: bar (200; 3.676479ms) May 22 13:26:17.598: INFO: (16) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test (200; 3.589575ms) May 22 13:26:17.599: INFO: (16) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... (200; 3.633907ms) May 22 13:26:17.599: INFO: (16) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:1080/proxy/: test<... 
(200; 3.948101ms) May 22 13:26:17.599: INFO: (16) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:462/proxy/: tls qux (200; 3.893861ms) May 22 13:26:17.599: INFO: (16) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 4.010616ms) May 22 13:26:17.599: INFO: (16) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 4.056286ms) May 22 13:26:17.599: INFO: (16) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname2/proxy/: bar (200; 4.337254ms) May 22 13:26:17.599: INFO: (16) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname2/proxy/: tls qux (200; 4.26659ms) May 22 13:26:17.599: INFO: (16) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 4.37619ms) May 22 13:26:17.600: INFO: (16) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname1/proxy/: foo (200; 4.62518ms) May 22 13:26:17.600: INFO: (16) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname1/proxy/: tls baz (200; 4.722232ms) May 22 13:26:17.600: INFO: (16) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname2/proxy/: bar (200; 4.782431ms) May 22 13:26:17.603: INFO: (17) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6/proxy/: test (200; 2.835937ms) May 22 13:26:17.603: INFO: (17) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 3.10212ms) May 22 13:26:17.604: INFO: (17) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... (200; 3.922773ms) May 22 13:26:17.604: INFO: (17) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test<... (200; 5.412372ms) May 22 13:26:17.606: INFO: (17) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 5.619526ms) May 22 13:26:17.606: INFO: (17) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 5.450888ms) May 22 13:26:17.606: INFO: (17) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname1/proxy/: foo (200; 5.570808ms) May 22 13:26:17.606: INFO: (17) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname2/proxy/: bar (200; 5.591665ms) May 22 13:26:17.610: INFO: (18) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 3.66848ms) May 22 13:26:17.610: INFO: (18) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: ... (200; 3.672847ms) May 22 13:26:17.610: INFO: (18) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 3.800216ms) May 22 13:26:17.610: INFO: (18) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 3.804104ms) May 22 13:26:17.610: INFO: (18) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 3.795528ms) May 22 13:26:17.610: INFO: (18) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6/proxy/: test (200; 3.856198ms) May 22 13:26:17.610: INFO: (18) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:462/proxy/: tls qux (200; 3.822835ms) May 22 13:26:17.610: INFO: (18) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:1080/proxy/: test<... 
(200; 3.884122ms) May 22 13:26:17.610: INFO: (18) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname1/proxy/: tls baz (200; 4.160546ms) May 22 13:26:17.612: INFO: (18) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname1/proxy/: foo (200; 5.674116ms) May 22 13:26:17.612: INFO: (18) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname2/proxy/: bar (200; 5.802273ms) May 22 13:26:17.612: INFO: (18) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 5.839155ms) May 22 13:26:17.612: INFO: (18) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname2/proxy/: tls qux (200; 5.771274ms) May 22 13:26:17.612: INFO: (18) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname2/proxy/: bar (200; 5.825285ms) May 22 13:26:17.612: INFO: (18) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname1/proxy/: foo (200; 5.861335ms) May 22 13:26:17.615: INFO: (19) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 2.948615ms) May 22 13:26:17.615: INFO: (19) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:160/proxy/: foo (200; 2.922811ms) May 22 13:26:17.615: INFO: (19) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:1080/proxy/: ... (200; 2.952678ms) May 22 13:26:17.615: INFO: (19) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:462/proxy/: tls qux (200; 2.938115ms) May 22 13:26:17.615: INFO: (19) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:460/proxy/: tls baz (200; 3.065835ms) May 22 13:26:17.615: INFO: (19) /api/v1/namespaces/proxy-6190/pods/http:proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 3.197039ms) May 22 13:26:17.615: INFO: (19) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6:162/proxy/: bar (200; 3.232145ms) May 22 13:26:17.615: INFO: (19) /api/v1/namespaces/proxy-6190/pods/https:proxy-service-ssds4-nt7x6:443/proxy/: test<... 
(200; 3.446361ms) May 22 13:26:17.615: INFO: (19) /api/v1/namespaces/proxy-6190/pods/proxy-service-ssds4-nt7x6/proxy/: test (200; 3.520843ms) May 22 13:26:17.618: INFO: (19) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname1/proxy/: foo (200; 5.957213ms) May 22 13:26:17.618: INFO: (19) /api/v1/namespaces/proxy-6190/services/proxy-service-ssds4:portname2/proxy/: bar (200; 6.026261ms) May 22 13:26:17.618: INFO: (19) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname2/proxy/: bar (200; 6.017617ms) May 22 13:26:17.618: INFO: (19) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname1/proxy/: tls baz (200; 6.242747ms) May 22 13:26:17.618: INFO: (19) /api/v1/namespaces/proxy-6190/services/http:proxy-service-ssds4:portname1/proxy/: foo (200; 6.221186ms) May 22 13:26:17.618: INFO: (19) /api/v1/namespaces/proxy-6190/services/https:proxy-service-ssds4:tlsportname2/proxy/: tls qux (200; 6.165526ms) STEP: deleting ReplicationController proxy-service-ssds4 in namespace proxy-6190, will wait for the garbage collector to delete the pods May 22 13:26:17.677: INFO: Deleting ReplicationController proxy-service-ssds4 took: 7.580388ms May 22 13:26:17.778: INFO: Terminating ReplicationController proxy-service-ssds4 pods took: 100.223359ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:26:20.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6190" for this suite. May 22 13:26:26.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:26:26.583: INFO: namespace proxy-6190 deletion completed in 6.100162573s • [SLOW TEST:17.305 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:26:26.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 22 13:26:26.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 22 13:26:52.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3450" for this suite.
May 22 13:26:58.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 22 13:26:58.945: INFO: namespace namespaces-3450 deletion completed in 6.108728905s
STEP: Destroying namespace "nsdeletetest-1973" for this suite.
May 22 13:26:58.948: INFO: Namespace nsdeletetest-1973 was already deleted
STEP: Destroying namespace "nsdeletetest-6747" for this suite.
May 22 13:27:04.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 22 13:27:05.031: INFO: namespace nsdeletetest-6747 deletion completed in 6.083747031s
• [SLOW TEST:38.448 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
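The guarantee tested here is cascading namespace cleanup: deleting a Namespace moves it to Terminating, the namespace controller deletes every object inside it, and only then is the Namespace itself removed, so a recreated namespace of the same name must contain no pods. A sketch of that verification step with client-go (v1.15-era signatures without context arguments; the wait-and-recreate loop is elided):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func verifyNamespaceCleanup(c *kubernetes.Clientset, ns string) error {
	// Deleting the namespace triggers deletion of every pod inside it.
	if err := c.CoreV1().Namespaces().Delete(ns, &metav1.DeleteOptions{}); err != nil {
		return err
	}
	// ... poll until the Namespace object is gone, then recreate it ...
	pods, err := c.CoreV1().Pods(ns).List(metav1.ListOptions{})
	if err != nil {
		return err
	}
	if len(pods.Items) != 0 {
		return fmt.Errorf("expected no pods after recreation, found %d", len(pods.Items))
	}
	return nil
}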
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 22 13:27:05.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0522 13:27:35.646018 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 22 13:27:35.646: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 22 13:27:35.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7065" for this suite.
May 22 13:27:41.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 22 13:27:41.859: INFO: namespace gc-7065 deletion completed in 6.209867572s
• [SLOW TEST:36.827 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
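The behavior under test is the Orphan propagation policy: deleting a Deployment with deleteOptions.propagationPolicy=Orphan makes the garbage collector strip the ownerReference from the ReplicaSet instead of cascading the delete, which is why the test waits 30 seconds and then confirms the ReplicaSet still exists. A sketch with the v1.15-era Delete signature:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteDeploymentOrphaningChildren(c *kubernetes.Clientset, ns, name string) error {
	orphan := metav1.DeletePropagationOrphan
	// The ReplicaSet and its pods survive; the garbage collector only
	// removes the ownerReference that tied them to the deleted Deployment.
	return c.AppsV1().Deployments(ns).Delete(name, &metav1.DeleteOptions{PropagationPolicy: &orphan})
}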
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 22 13:27:41.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
May 22 13:27:41.963: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6327" to be "success or failure"
May 22 13:27:41.972: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.286754ms
May 22 13:27:43.976: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012205107s
May 22 13:27:46.024: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060431958s
May 22 13:27:48.046: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.082869743s
STEP: Saw pod success
May 22 13:27:48.046: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
May 22 13:27:48.049: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
May 22 13:27:48.078: INFO: Waiting for pod pod-host-path-test to disappear
May 22 13:27:48.090: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 22 13:27:48.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6327" for this suite.
May 22 13:27:54.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 22 13:27:54.197: INFO: namespace hostpath-6327 deletion completed in 6.103182369s
• [SLOW TEST:12.338 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
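The pod-host-path-test pod above mounts a hostPath volume and its test container reports the mode bits of the mount point. A pod of the same shape, with placeholder path, image and command rather than the suite's exact values:

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func hostPathModePod() *v1.Pod {
	dirOrCreate := v1.HostPathDirectoryOrCreate
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					HostPath: &v1.HostPathVolumeSource{
						Path: "/tmp/hostpath-e2e", // placeholder host directory
						Type: &dirOrCreate,
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "test-container-1",
				Image:   "busybox", // placeholder image
				Command: []string{"ls", "-ld", "/test-volume"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
}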
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 22 13:27:54.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 22 13:27:59.341: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 22 13:27:59.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7314" for this suite.
May 22 13:28:05.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 22 13:28:05.480: INFO: namespace container-runtime-7314 deletion completed in 6.106452979s
• [SLOW TEST:11.283 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
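FallbackToLogsOnError means that when a container fails without writing anything to its terminationMessagePath, the kubelet uses the tail of the container log as the termination message instead; that is why the test sees DONE after the container exits nonzero. A container fragment to that effect (the command is illustrative, not the suite's exact one):

package main

import (
	v1 "k8s.io/api/core/v1"
)

func fallbackToLogsContainer() v1.Container {
	return v1.Container{
		Name:  "termination-message-container",
		Image: "busybox", // placeholder image
		// Writes only to stdout and fails, so the log tail ("DONE")
		// is promoted to the termination message by the policy below.
		Command:                  []string{"/bin/sh", "-c", "echo -n DONE; exit 1"},
		TerminationMessagePolicy: v1.TerminationMessageFallbackToLogsOnError,
	}
}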
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 22 13:28:05.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
May 22 13:28:05.578: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1280,SelfLink:/api/v1/namespaces/watch-1280/configmaps/e2e-watch-test-watch-closed,UID:f25ad570-d3e2-48ee-ba34-fdf0ff71c4df,ResourceVersion:12295542,Generation:0,CreationTimestamp:2020-05-22 13:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 22 13:28:05.578: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1280,SelfLink:/api/v1/namespaces/watch-1280/configmaps/e2e-watch-test-watch-closed,UID:f25ad570-d3e2-48ee-ba34-fdf0ff71c4df,ResourceVersion:12295543,Generation:0,CreationTimestamp:2020-05-22 13:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
May 22 13:28:05.590: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1280,SelfLink:/api/v1/namespaces/watch-1280/configmaps/e2e-watch-test-watch-closed,UID:f25ad570-d3e2-48ee-ba34-fdf0ff71c4df,ResourceVersion:12295544,Generation:0,CreationTimestamp:2020-05-22 13:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 22 13:28:05.591: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1280,SelfLink:/api/v1/namespaces/watch-1280/configmaps/e2e-watch-test-watch-closed,UID:f25ad570-d3e2-48ee-ba34-fdf0ff71c4df,ResourceVersion:12295545,Generation:0,CreationTimestamp:2020-05-22 13:28:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 22 13:28:05.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1280" for this suite.
May 22 13:28:11.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 22 13:28:11.687: INFO: namespace watch-1280 deletion completed in 6.086829616s
• [SLOW TEST:6.206 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
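The property verified above is watch resumability: a watch opened with the resourceVersion last delivered to a closed watch replays every event that happened in between, which is why the restarted watch still receives the mutation-2 MODIFIED and the DELETED events. A sketch with the v1.15-era Watch signature (no context argument in that client-go generation):

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func resumeConfigMapWatch(c *kubernetes.Clientset, ns, lastSeenRV string) error {
	w, err := c.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
		ResourceVersion: lastSeenRV, // resume from the previous watch's last event
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		_ = ev // events missed while the first watch was closed arrive first
	}
	return nil
}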
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 22 13:28:11.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-f740ab07-61a9-47f5-ac9d-c422d96ff16f in namespace container-probe-6351
May 22 13:28:15.788: INFO: Started pod busybox-f740ab07-61a9-47f5-ac9d-c422d96ff16f in namespace container-probe-6351
STEP: checking the pod's current state and verifying that restartCount is present
May 22 13:28:15.791: INFO: Initial restart count of pod busybox-f740ab07-61a9-47f5-ac9d-c422d96ff16f is 0
May 22 13:29:05.908: INFO: Restart count of pod container-probe-6351/busybox-f740ab07-61a9-47f5-ac9d-c422d96ff16f is now 1 (50.117345805s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 22 13:29:05.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6351" for this suite.
May 22 13:29:11.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 22 13:29:12.065: INFO: namespace container-probe-6351 deletion completed in 6.109199238s
• [SLOW TEST:60.377 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
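The restart at the 50-second mark is driven by an exec liveness probe. A pod in the same spirit (the busybox command is the usual pattern for this conformance case, not necessarily the suite's exact values: create /tmp/health, stay healthy briefly, then remove it so `cat /tmp/health` starts failing and the kubelet restarts the container; in the v1.15 API the probe's action sits in the embedded Handler field, renamed ProbeHandler in later releases):

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func execLivenessPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-exec-liveness"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Healthy for ~10s, then the probe file disappears.
				Command: []string{"/bin/sh", "-c", "echo ok >/tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &v1.Probe{
					Handler: v1.Handler{
						Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
}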
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 22 13:29:12.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
May 22 13:29:12.179: INFO: Waiting up to 5m0s for pod "pod-dd3eae32-7d43-4ef6-bb15-8ffc50f19ea8" in namespace "emptydir-3352" to be "success or failure"
May 22 13:29:12.200: INFO: Pod "pod-dd3eae32-7d43-4ef6-bb15-8ffc50f19ea8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.36178ms
May 22 13:29:14.204: INFO: Pod "pod-dd3eae32-7d43-4ef6-bb15-8ffc50f19ea8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025173346s
May 22 13:29:16.227: INFO: Pod "pod-dd3eae32-7d43-4ef6-bb15-8ffc50f19ea8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048669s
STEP: Saw pod success
May 22 13:29:16.227: INFO: Pod "pod-dd3eae32-7d43-4ef6-bb15-8ffc50f19ea8" satisfied condition "success or failure"
May 22 13:29:16.231: INFO: Trying to get logs from node iruya-worker2 pod pod-dd3eae32-7d43-4ef6-bb15-8ffc50f19ea8 container test-container:
STEP: delete the pod
May 22 13:29:16.282: INFO: Waiting for pod pod-dd3eae32-7d43-4ef6-bb15-8ffc50f19ea8 to disappear
May 22 13:29:16.303: INFO: Pod pod-dd3eae32-7d43-4ef6-bb15-8ffc50f19ea8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 22 13:29:16.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3352" for this suite.
May 22 13:29:22.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 22 13:29:22.394: INFO: namespace emptydir-3352 deletion completed in 6.087333581s
• [SLOW TEST:10.329 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
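Setting medium "Memory" on an emptyDir makes the kubelet back it with tmpfs instead of node-local disk; the test container then stats the mount and asserts the expected mode bits. The relevant volume fragment (names are illustrative):

package main

import (
	v1 "k8s.io/api/core/v1"
)

func tmpfsEmptyDir() v1.Volume {
	return v1.Volume{
		Name: "test-volume",
		VolumeSource: v1.VolumeSource{
			// Medium "Memory" selects tmpfs rather than node disk.
			EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
		},
	}
}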
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 22 13:29:22.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
May 22 13:29:22.996: INFO: created pod pod-service-account-defaultsa
May 22 13:29:22.996: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 22 13:29:23.004: INFO: created pod pod-service-account-mountsa
May 22 13:29:23.004: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 22 13:29:23.033: INFO: created pod pod-service-account-nomountsa
May 22 13:29:23.033: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 22 13:29:23.047: INFO: created pod pod-service-account-defaultsa-mountspec
May 22 13:29:23.047: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 22 13:29:23.146: INFO: created pod pod-service-account-mountsa-mountspec
May 22 13:29:23.146: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 22 13:29:23.151: INFO: created pod pod-service-account-nomountsa-mountspec
May 22 13:29:23.151: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 22 13:29:23.160: INFO: created pod pod-service-account-defaultsa-nomountspec
May 22 13:29:23.160: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 22 13:29:23.220: INFO: created pod pod-service-account-mountsa-nomountspec
May 22 13:29:23.220: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 22 13:29:23.288: INFO: created pod pod-service-account-nomountsa-nomountspec
May 22 13:29:23.288: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 22 13:29:23.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5963" for this suite.
May 22 13:29:49.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 22 13:29:49.590: INFO: namespace svcaccounts-5963 deletion completed in 26.159509507s
• [SLOW TEST:27.195 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
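The nine pods above enumerate the precedence rule for token automounting: when pod.spec.automountServiceAccountToken is set it wins; only when it is nil does the ServiceAccount's automountServiceAccountToken (or the default, true) apply. That is why "mountsa-nomountspec" ends up false while "nomountsa-mountspec" ends up true in the log. Opting out at the pod level looks like this (names are illustrative):

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func podWithoutTokenAutomount() *v1.Pod {
	optOut := false
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-no-token-automount"},
		Spec: v1.PodSpec{
			ServiceAccountName: "default",
			// The pod-level setting overrides the ServiceAccount's choice.
			AutomountServiceAccountToken: &optOut,
			Containers: []v1.Container{{
				Name:    "main",
				Image:   "busybox", // placeholder image
				Command: []string{"sleep", "3600"},
			}},
		},
	}
}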
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 22 13:29:49.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-01e2827d-8bc6-4a10-8686-143ad7d22506
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-01e2827d-8bc6-4a10-8686-143ad7d22506
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 22 13:29:55.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9197" for this suite.
May 22 13:30:17.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 22 13:30:17.855: INFO: namespace projected-9197 deletion completed in 22.085329541s
• [SLOW TEST:28.265 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
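A projected configMap volume is refreshed in place by the kubelet when the underlying ConfigMap object changes, so the pod observes the new file contents without restarting; the "waiting to observe update in volume" step polls for exactly that. The volume shape (the ConfigMap name is a parameter here, not the suite's generated one):

package main

import (
	v1 "k8s.io/api/core/v1"
)

func projectedConfigMapVolume(configMapName string) v1.Volume {
	return v1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					ConfigMap: &v1.ConfigMapProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: configMapName},
					},
				}},
			},
		},
	}
}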
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 22 13:30:17.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 22 13:30:17.935: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbc6fb77-c9bb-4b38-98c3-37fe45c56564" in namespace "projected-5896" to be "success or failure"
May 22 13:30:17.960: INFO: Pod "downwardapi-volume-bbc6fb77-c9bb-4b38-98c3-37fe45c56564": Phase="Pending", Reason="", readiness=false. Elapsed: 25.536321ms
May 22 13:30:20.096: INFO: Pod "downwardapi-volume-bbc6fb77-c9bb-4b38-98c3-37fe45c56564": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161526058s
May 22 13:30:22.100: INFO: Pod "downwardapi-volume-bbc6fb77-c9bb-4b38-98c3-37fe45c56564": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.165038413s
STEP: Saw pod success
May 22 13:30:22.100: INFO: Pod "downwardapi-volume-bbc6fb77-c9bb-4b38-98c3-37fe45c56564" satisfied condition "success or failure"
May 22 13:30:22.102: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-bbc6fb77-c9bb-4b38-98c3-37fe45c56564 container client-container:
STEP: delete the pod
May 22 13:30:22.151: INFO: Waiting for pod downwardapi-volume-bbc6fb77-c9bb-4b38-98c3-37fe45c56564 to disappear
May 22 13:30:22.163: INFO: Pod downwardapi-volume-bbc6fb77-c9bb-4b38-98c3-37fe45c56564 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 22 13:30:22.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5896" for this suite.
May 22 13:30:28.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 22 13:30:28.248: INFO: namespace projected-5896 deletion completed in 6.081260129s
• [SLOW TEST:10.393 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 22 13:30:28.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 22 13:30:28.357: INFO: Creating deployment "nginx-deployment"
May 22 13:30:28.368: INFO: Waiting for observed generation 1
May 22 13:30:30.479: INFO: Waiting for all required pods to come up
May 22 13:30:30.484: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
May 22 13:30:40.686: INFO: Waiting for deployment "nginx-deployment" to complete
May 22 13:30:40.691: INFO: Updating deployment "nginx-deployment" with a non-existent image
May 22 13:30:40.698: INFO: Updating deployment nginx-deployment
May 22 13:30:40.698: INFO: Waiting for observed generation 2
May 22 13:30:42.740: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 22 13:30:42.743: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 22 13:30:42.807: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 22 13:30:42.880: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 22 13:30:42.880: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 22 13:30:42.882: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 22 13:30:42.885: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
May 22 13:30:42.885: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
May 22 13:30:42.891: INFO: Updating deployment nginx-deployment
May 22 13:30:42.891: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
May 22 13:30:42.904: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 22 13:30:42.928: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 22 13:30:43.126: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-4005,SelfLink:/apis/apps/v1/namespaces/deployment-4005/deployments/nginx-deployment,UID:61a14c25-cba3-44e3-9e73-f3c75c42b2c1,ResourceVersion:12296240,Generation:3,CreationTimestamp:2020-05-22 13:30:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-22 13:30:41 +0000 UTC 2020-05-22 13:30:28 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-05-22 13:30:42 +0000 UTC 2020-05-22 13:30:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 22 13:30:43.212: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-4005,SelfLink:/apis/apps/v1/namespaces/deployment-4005/replicasets/nginx-deployment-55fb7cb77f,UID:d697f437-fcfa-4b32-b4e8-601b08f66f3d,ResourceVersion:12296283,Generation:3,CreationTimestamp:2020-05-22 13:30:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 61a14c25-cba3-44e3-9e73-f3c75c42b2c1 0xc00243c9f7 0xc00243c9f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 22 13:30:43.212: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 22 13:30:43.212: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-4005,SelfLink:/apis/apps/v1/namespaces/deployment-4005/replicasets/nginx-deployment-7b8c6f4498,UID:dba69145-5cde-4247-ba42-402ee48d28b9,ResourceVersion:12296281,Generation:3,CreationTimestamp:2020-05-22 13:30:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 61a14c25-cba3-44e3-9e73-f3c75c42b2c1 0xc00243cac7 0xc00243cac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 22 13:30:43.277: INFO: Pod "nginx-deployment-55fb7cb77f-58rrr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-58rrr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-55fb7cb77f-58rrr,UID:b8e4e18b-a74e-4480-8cc2-14a1e4e0c154,ResourceVersion:12296197,Generation:0,CreationTimestamp:2020-05-22 13:30:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d697f437-fcfa-4b32-b4e8-601b08f66f3d 0xc00216ffe7 0xc00216ffe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb4160} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb4180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-22 13:30:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.277: INFO: Pod "nginx-deployment-55fb7cb77f-5g6kn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5g6kn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-55fb7cb77f-5g6kn,UID:955048de-ed74-49e0-85df-065b95f6a59e,ResourceVersion:12296275,Generation:0,CreationTimestamp:2020-05-22 13:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d697f437-fcfa-4b32-b4e8-601b08f66f3d 0xc002cb4317 0xc002cb4318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb44c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb44f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.277: INFO: Pod "nginx-deployment-55fb7cb77f-6tdvv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6tdvv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-55fb7cb77f-6tdvv,UID:4a4c2c0a-38f0-4814-97f7-d794eb17f135,ResourceVersion:12296289,Generation:0,CreationTimestamp:2020-05-22 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d697f437-fcfa-4b32-b4e8-601b08f66f3d 0xc002cb4577 0xc002cb4578}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb45f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb4610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-22 13:30:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.278: INFO: Pod "nginx-deployment-55fb7cb77f-7rq2z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7rq2z,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-55fb7cb77f-7rq2z,UID:3d304972-b7bf-47dc-b5a7-d1a5bbd76bc2,ResourceVersion:12296282,Generation:0,CreationTimestamp:2020-05-22 13:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d697f437-fcfa-4b32-b4e8-601b08f66f3d 0xc002cb46f7 0xc002cb46f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb4770} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb4790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.278: INFO: Pod "nginx-deployment-55fb7cb77f-f9r6l" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-f9r6l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-55fb7cb77f-f9r6l,UID:17a4b38e-6f08-4c3d-9a39-004e2453fe52,ResourceVersion:12296193,Generation:0,CreationTimestamp:2020-05-22 13:30:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d697f437-fcfa-4b32-b4e8-601b08f66f3d 0xc002cb4817 0xc002cb4818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb4890} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb48b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-22 13:30:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.278: INFO: Pod "nginx-deployment-55fb7cb77f-lprbj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lprbj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-55fb7cb77f-lprbj,UID:fe15452f-5770-4340-b5e7-9946c382a0c3,ResourceVersion:12296272,Generation:0,CreationTimestamp:2020-05-22 13:30:43 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d697f437-fcfa-4b32-b4e8-601b08f66f3d 0xc002cb4987 0xc002cb4988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb4a10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb4a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.278: INFO: Pod "nginx-deployment-55fb7cb77f-lqn49" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lqn49,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-55fb7cb77f-lqn49,UID:36313907-cb23-4fc9-8d06-b3f1b224bcb3,ResourceVersion:12296273,Generation:0,CreationTimestamp:2020-05-22 13:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d697f437-fcfa-4b32-b4e8-601b08f66f3d 0xc002cb4ab7 0xc002cb4ab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb4b30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb4b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.279: INFO: Pod "nginx-deployment-55fb7cb77f-nfqq6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nfqq6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-55fb7cb77f-nfqq6,UID:448fde9b-8993-40f8-bc95-59d28ad97f0f,ResourceVersion:12296215,Generation:0,CreationTimestamp:2020-05-22 13:30:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d697f437-fcfa-4b32-b4e8-601b08f66f3d 0xc002cb4bd7 0xc002cb4bd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb4c50} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002cb4c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-22 13:30:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.279: INFO: Pod "nginx-deployment-55fb7cb77f-rgh6c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rgh6c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-55fb7cb77f-rgh6c,UID:a381be3c-a1ea-470c-8a97-aa11f8b5dbf8,ResourceVersion:12296217,Generation:0,CreationTimestamp:2020-05-22 13:30:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d697f437-fcfa-4b32-b4e8-601b08f66f3d 0xc002cb4d47 0xc002cb4d48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb4dc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb4de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-22 13:30:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.279: INFO: Pod "nginx-deployment-55fb7cb77f-slqsk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-slqsk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-55fb7cb77f-slqsk,UID:e153ad5d-1137-4bcd-8455-86ed15074090,ResourceVersion:12296211,Generation:0,CreationTimestamp:2020-05-22 13:30:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d697f437-fcfa-4b32-b4e8-601b08f66f3d 0xc002cb4eb7 0xc002cb4eb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb4f30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb4f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-22 13:30:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.279: INFO: Pod "nginx-deployment-55fb7cb77f-sxq94" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sxq94,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-55fb7cb77f-sxq94,UID:67dc381d-78ab-4d38-8a61-0231d30f3f64,ResourceVersion:12296260,Generation:0,CreationTimestamp:2020-05-22 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d697f437-fcfa-4b32-b4e8-601b08f66f3d 0xc002cb5027 0xc002cb5028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb50a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb50c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.279: INFO: Pod "nginx-deployment-55fb7cb77f-v24hb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v24hb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-55fb7cb77f-v24hb,UID:3d66f4e6-f586-4dbf-964d-353720620f99,ResourceVersion:12296268,Generation:0,CreationTimestamp:2020-05-22 13:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d697f437-fcfa-4b32-b4e8-601b08f66f3d 0xc002cb5147 0xc002cb5148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb51c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb51e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.280: INFO: Pod "nginx-deployment-55fb7cb77f-wnccc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wnccc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-55fb7cb77f-wnccc,UID:ba2715c2-21ee-4ca3-b283-b0715fcbc44d,ResourceVersion:12296254,Generation:0,CreationTimestamp:2020-05-22 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d697f437-fcfa-4b32-b4e8-601b08f66f3d 0xc002cb5267 0xc002cb5268}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb52e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb5300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.280: INFO: Pod "nginx-deployment-7b8c6f4498-27b82" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-27b82,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-27b82,UID:25b3b286-a428-4bcf-8736-02721a86f4a5,ResourceVersion:12296266,Generation:0,CreationTimestamp:2020-05-22 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002cb5397 0xc002cb5398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb5410} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002cb5430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.280: INFO: Pod "nginx-deployment-7b8c6f4498-2dznd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2dznd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-2dznd,UID:4e8b6073-2596-4697-8a0e-c78a1a320e0a,ResourceVersion:12296263,Generation:0,CreationTimestamp:2020-05-22 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002cb54b7 0xc002cb54b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb5530} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb5550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.280: INFO: Pod "nginx-deployment-7b8c6f4498-75rwk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-75rwk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-75rwk,UID:f633eae7-8094-4c4b-a51e-2095c0ab38e0,ResourceVersion:12296265,Generation:0,CreationTimestamp:2020-05-22 13:30:42 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002cb55d7 0xc002cb55d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb5650} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb5670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.280: INFO: Pod "nginx-deployment-7b8c6f4498-9bsqk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9bsqk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-9bsqk,UID:6690783c-99f8-4c83-926a-3d170709c7fa,ResourceVersion:12296155,Generation:0,CreationTimestamp:2020-05-22 13:30:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002cb56f7 0xc002cb56f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb5780} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb57a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.134,StartTime:2020-05-22 13:30:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 13:30:38 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://abc1f97e418a85b8c47f110639f93c2587dac2a1454b3907eb6bf9747822ef34}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.281: INFO: Pod "nginx-deployment-7b8c6f4498-9sgfc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9sgfc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-9sgfc,UID:4b70f64f-18c7-4a8a-af90-8e561e62d4c8,ResourceVersion:12296108,Generation:0,CreationTimestamp:2020-05-22 13:30:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002cb5877 0xc002cb5878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb5900} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb5920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.186,StartTime:2020-05-22 13:30:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 13:30:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b282792ef3820424e8be0e6fc1d5f8b957ab4896565c1ba04e0c2d8562d30419}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.281: INFO: Pod "nginx-deployment-7b8c6f4498-g6vc6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g6vc6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-g6vc6,UID:6b5325cb-5cb5-4e43-8676-ebdb9f01d9f9,ResourceVersion:12296271,Generation:0,CreationTimestamp:2020-05-22 13:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002cb59f7 0xc002cb59f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb5a70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb5a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.281: INFO: Pod "nginx-deployment-7b8c6f4498-h9fwq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h9fwq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-h9fwq,UID:b5e6a571-9412-44d1-a8c6-4448658b0bed,ResourceVersion:12296278,Generation:0,CreationTimestamp:2020-05-22 13:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002cb5b17 0xc002cb5b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb5b90} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002cb5bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.281: INFO: Pod "nginx-deployment-7b8c6f4498-hfc82" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hfc82,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-hfc82,UID:3d5173c8-85fe-4f6a-af95-6581e3431b72,ResourceVersion:12296117,Generation:0,CreationTimestamp:2020-05-22 13:30:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002cb5c37 0xc002cb5c38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb5cb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb5cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.187,StartTime:2020-05-22 13:30:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 13:30:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
containerd://58b5449f4ded9b03589420765a4281e40f023c3ef1fcec28b0f85126f6116f2e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.281: INFO: Pod "nginx-deployment-7b8c6f4498-jnzzr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jnzzr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-jnzzr,UID:cfbc6ca0-1f7a-4c25-8f2e-c96ac4ef561a,ResourceVersion:12296279,Generation:0,CreationTimestamp:2020-05-22 13:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002cb5da7 0xc002cb5da8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb5e20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb5e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.281: INFO: Pod "nginx-deployment-7b8c6f4498-jwfg6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jwfg6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-jwfg6,UID:99d4b5f3-a1d7-4c62-84be-616c649dbea5,ResourceVersion:12296245,Generation:0,CreationTimestamp:2020-05-22 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002cb5ec7 
0xc002cb5ec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb5f40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb5f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.282: INFO: Pod "nginx-deployment-7b8c6f4498-mc228" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mc228,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-mc228,UID:e53d6a48-d838-4c11-ab93-187db6e442ee,ResourceVersion:12296146,Generation:0,CreationTimestamp:2020-05-22 13:30:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002cb5fe7 0xc002cb5fe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fb2090} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fb20e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.133,StartTime:2020-05-22 13:30:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 13:30:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3415fb353417f00603860d591a95e27fc07727930ebaa2c0bc17d00c5ace84af}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.282: INFO: Pod "nginx-deployment-7b8c6f4498-mfvrx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mfvrx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-mfvrx,UID:e5c7b5b8-5452-4474-abf1-5d047839b895,ResourceVersion:12296135,Generation:0,CreationTimestamp:2020-05-22 13:30:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002fb21d7 0xc002fb21d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fb2250} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fb2280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.188,StartTime:2020-05-22 13:30:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 13:30:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6bbc4a816837334d01f9960270b5fe43283f8d703d6afea16ae173b354ed11b3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.282: INFO: Pod "nginx-deployment-7b8c6f4498-nhjl9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nhjl9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-nhjl9,UID:c99cc8f7-9548-4d25-a782-13d38baaf4ec,ResourceVersion:12296139,Generation:0,CreationTimestamp:2020-05-22 13:30:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002fb2357 0xc002fb2358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fb23e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fb2400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.132,StartTime:2020-05-22 13:30:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 13:30:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://907e29d911adc6af441a1eae9c624aae51f1f0aa8d75ccdc49389bd2206e453d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.282: INFO: Pod "nginx-deployment-7b8c6f4498-nwqm8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nwqm8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-nwqm8,UID:d318f080-0ab0-4403-b942-ea982115407b,ResourceVersion:12296270,Generation:0,CreationTimestamp:2020-05-22 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002fb24d7 0xc002fb24d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fb2550} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fb2570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-22 13:30:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.282: INFO: Pod "nginx-deployment-7b8c6f4498-pp9x9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pp9x9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-pp9x9,UID:79e9db28-9e83-4673-a8b5-57f5b4d60088,ResourceVersion:12296264,Generation:0,CreationTimestamp:2020-05-22 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002fb2637 0xc002fb2638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fb26c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fb26e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.283: INFO: Pod "nginx-deployment-7b8c6f4498-qwvkn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qwvkn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-qwvkn,UID:a1c28dcc-06b4-499d-9beb-d8307099d3cf,ResourceVersion:12296276,Generation:0,CreationTimestamp:2020-05-22 13:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002fb2767 0xc002fb2768}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fb27e0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002fb2800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.283: INFO: Pod "nginx-deployment-7b8c6f4498-vwtbb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vwtbb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-vwtbb,UID:6ee82ab9-f249-4128-9065-b1575cc74cf2,ResourceVersion:12296290,Generation:0,CreationTimestamp:2020-05-22 13:30:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002fb2887 0xc002fb2888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fb2900} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fb2920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-22 13:30:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.283: INFO: Pod "nginx-deployment-7b8c6f4498-wk8vt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wk8vt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-wk8vt,UID:b748ff7a-8349-4721-9802-df96d8beefe6,ResourceVersion:12296269,Generation:0,CreationTimestamp:2020-05-22 13:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002fb29e7 0xc002fb29e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fb2a60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fb2a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.283: INFO: Pod "nginx-deployment-7b8c6f4498-xf2b5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xf2b5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-xf2b5,UID:2fa00d94-2163-447b-aa9a-45043db9d6a1,ResourceVersion:12296127,Generation:0,CreationTimestamp:2020-05-22 13:30:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002fb2b17 0xc002fb2b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fb2b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fb2bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.131,StartTime:2020-05-22 13:30:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 13:30:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://eb8036b5bc903736123ba390c96c0644a735d934a2762fa63c2bce90c0c874a9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 22 13:30:43.283: INFO: Pod "nginx-deployment-7b8c6f4498-xgd5b" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xgd5b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4005,SelfLink:/api/v1/namespaces/deployment-4005/pods/nginx-deployment-7b8c6f4498-xgd5b,UID:0822ca81-2161-4112-b48f-3eec190d8406,ResourceVersion:12296151,Generation:0,CreationTimestamp:2020-05-22 13:30:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 dba69145-5cde-4247-ba42-402ee48d28b9 0xc002fb2c87 0xc002fb2c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxw7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxw7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hxw7s true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002fb2d00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002fb2d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:30:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.189,StartTime:2020-05-22 13:30:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-22 13:30:38 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e2827b5642f01a45f4d97eb6b17283409e286008d7fde300a882762bf4cf16a1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:30:43.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4005" for this suite. 
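For reference, the Deployment driving the proportional-scaling run above can be sketched from the pod dumps: nginx:1.14-alpine pods, label name=nginx, RollingUpdate strategy. A minimal reconstruction with the k8s.io/api types this suite builds against follows; the replica count and the maxSurge/maxUnavailable values are illustrative assumptions, not read from the log. Scaling such a Deployment while a rollout is still in flight makes the controller split the added replicas between the old and new ReplicaSets in proportion to their current sizes, which is why the dump above mixes available and not-yet-available pods.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Illustrative surge/unavailability budget; proportional scaling only
	// matters when a RollingUpdate is in progress.
	maxSurge := intstr.FromInt(3)
	maxUnavailable := intstr.FromInt(2)
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment", Namespace: "deployment-4005"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10), // assumed starting size
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "nginx",
					Image: "docker.io/library/nginx:1.14-alpine",
				}}},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}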
May 22 13:31:07.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:31:07.540: INFO: namespace deployment-4005 deletion completed in 24.196842132s • [SLOW TEST:39.292 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:31:07.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-aea793b2-d014-4c34-8cb4-3659c53ee2e8 STEP: Creating a pod to test consume secrets May 22 13:31:07.645: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a56f539b-a90d-4a76-95e1-7af9aab27a52" in namespace "projected-5739" to be "success or failure" May 22 13:31:07.654: INFO: Pod "pod-projected-secrets-a56f539b-a90d-4a76-95e1-7af9aab27a52": Phase="Pending", Reason="", readiness=false. Elapsed: 8.612339ms May 22 13:31:09.658: INFO: Pod "pod-projected-secrets-a56f539b-a90d-4a76-95e1-7af9aab27a52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013126767s May 22 13:31:11.661: INFO: Pod "pod-projected-secrets-a56f539b-a90d-4a76-95e1-7af9aab27a52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016183327s STEP: Saw pod success May 22 13:31:11.662: INFO: Pod "pod-projected-secrets-a56f539b-a90d-4a76-95e1-7af9aab27a52" satisfied condition "success or failure" May 22 13:31:11.664: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-a56f539b-a90d-4a76-95e1-7af9aab27a52 container projected-secret-volume-test: STEP: delete the pod May 22 13:31:11.694: INFO: Waiting for pod pod-projected-secrets-a56f539b-a90d-4a76-95e1-7af9aab27a52 to disappear May 22 13:31:11.702: INFO: Pod pod-projected-secrets-a56f539b-a90d-4a76-95e1-7af9aab27a52 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:31:11.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5739" for this suite. 
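The projected-secret test above mounts secret projected-secret-test-map-aea793b2-d014-4c34-8cb4-3659c53ee2e8 through a projected volume with a key-to-path mapping and a per-item file mode (the "Item Mode" the test name refers to). A minimal sketch of that volume definition; the key, path, and mode are illustrative assumptions, since the log does not print them:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-secret-test-map-aea793b2-d014-4c34-8cb4-3659c53ee2e8",
						},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // illustrative secret key
							Path: "new-path-data-1", // illustrative remapped file name
							Mode: int32Ptr(0400),    // per-item mode overrides the volume default
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}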
May 22 13:31:17.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:31:17.811: INFO: namespace projected-5739 deletion completed in 6.105959414s • [SLOW TEST:10.270 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:31:17.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 13:31:17.959: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 22 13:31:17.968: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:17.974: INFO: Number of nodes with available pods: 0 May 22 13:31:17.974: INFO: Node iruya-worker is running more than one daemon pod May 22 13:31:18.990: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:18.993: INFO: Number of nodes with available pods: 0 May 22 13:31:18.993: INFO: Node iruya-worker is running more than one daemon pod May 22 13:31:19.978: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:19.980: INFO: Number of nodes with available pods: 0 May 22 13:31:19.980: INFO: Node iruya-worker is running more than one daemon pod May 22 13:31:20.996: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:21.016: INFO: Number of nodes with available pods: 0 May 22 13:31:21.016: INFO: Node iruya-worker is running more than one daemon pod May 22 13:31:21.997: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:22.001: INFO: Number of nodes with available pods: 1 May 22 13:31:22.001: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:31:22.979: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:22.983: INFO: Number of nodes with available pods: 2 May 22 13:31:22.983: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 22 13:31:23.052: INFO: Wrong image for pod: daemon-set-fs625. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:23.052: INFO: Wrong image for pod: daemon-set-qmtvj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:23.059: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:24.064: INFO: Wrong image for pod: daemon-set-fs625. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:24.065: INFO: Wrong image for pod: daemon-set-qmtvj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:24.068: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:25.063: INFO: Wrong image for pod: daemon-set-fs625. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:25.063: INFO: Wrong image for pod: daemon-set-qmtvj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:25.063: INFO: Pod daemon-set-qmtvj is not available May 22 13:31:25.066: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:26.065: INFO: Wrong image for pod: daemon-set-fs625. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:26.065: INFO: Wrong image for pod: daemon-set-qmtvj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:26.065: INFO: Pod daemon-set-qmtvj is not available May 22 13:31:26.069: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:27.065: INFO: Wrong image for pod: daemon-set-fs625. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:27.065: INFO: Wrong image for pod: daemon-set-qmtvj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:27.065: INFO: Pod daemon-set-qmtvj is not available May 22 13:31:27.069: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:28.065: INFO: Wrong image for pod: daemon-set-fs625. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:28.065: INFO: Wrong image for pod: daemon-set-qmtvj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 22 13:31:28.065: INFO: Pod daemon-set-qmtvj is not available May 22 13:31:28.070: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:29.064: INFO: Wrong image for pod: daemon-set-fs625. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:29.064: INFO: Wrong image for pod: daemon-set-qmtvj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:29.064: INFO: Pod daemon-set-qmtvj is not available May 22 13:31:29.068: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:30.064: INFO: Wrong image for pod: daemon-set-fs625. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:30.064: INFO: Wrong image for pod: daemon-set-qmtvj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:30.064: INFO: Pod daemon-set-qmtvj is not available May 22 13:31:30.068: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:31.066: INFO: Wrong image for pod: daemon-set-fs625. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:31.066: INFO: Wrong image for pod: daemon-set-qmtvj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:31.066: INFO: Pod daemon-set-qmtvj is not available May 22 13:31:31.069: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:32.065: INFO: Wrong image for pod: daemon-set-fs625. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:32.065: INFO: Wrong image for pod: daemon-set-qmtvj. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:32.065: INFO: Pod daemon-set-qmtvj is not available May 22 13:31:32.069: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:33.064: INFO: Wrong image for pod: daemon-set-fs625. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:33.064: INFO: Pod daemon-set-pcmwq is not available May 22 13:31:33.068: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:34.064: INFO: Wrong image for pod: daemon-set-fs625. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:34.064: INFO: Pod daemon-set-pcmwq is not available May 22 13:31:34.067: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:35.068: INFO: Wrong image for pod: daemon-set-fs625. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:35.069: INFO: Pod daemon-set-pcmwq is not available May 22 13:31:35.072: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:36.066: INFO: Wrong image for pod: daemon-set-fs625. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:36.069: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:37.064: INFO: Wrong image for pod: daemon-set-fs625. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:37.067: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:38.065: INFO: Wrong image for pod: daemon-set-fs625. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 22 13:31:38.065: INFO: Pod daemon-set-fs625 is not available May 22 13:31:38.069: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:39.064: INFO: Pod daemon-set-cbq76 is not available May 22 13:31:39.069: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 22 13:31:39.072: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:39.075: INFO: Number of nodes with available pods: 1 May 22 13:31:39.075: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:31:40.081: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:40.084: INFO: Number of nodes with available pods: 1 May 22 13:31:40.084: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:31:41.086: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:41.090: INFO: Number of nodes with available pods: 1 May 22 13:31:41.090: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:31:42.080: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:31:42.083: INFO: Number of nodes with available pods: 2 May 22 13:31:42.083: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4592, will wait for the garbage collector to delete the pods May 22 13:31:42.158: INFO: Deleting DaemonSet.extensions daemon-set took: 7.080428ms May 22 13:31:42.459: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.701517ms May 22 13:31:52.271: INFO: Number of nodes with available pods: 0 May 22 13:31:52.271: INFO: Number of running nodes: 0, number of available pods: 0 May 22 13:31:52.274: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4592/daemonsets","resourceVersion":"12296722"},"items":null} May 22 13:31:52.277: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4592/pods","resourceVersion":"12296722"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:31:52.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4592" for this suite. 
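The update sequence above is a DaemonSet with a RollingUpdate strategy: once the pod template's image changes, the controller deletes and recreates pods one node at a time, which is what the alternating "Wrong image for pod" and "is not available" lines trace. A sketch of such a DaemonSet, with illustrative labels and container name:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative selector
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "daemonsets-4592"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate (rather than OnDelete) is what makes the
			// controller replace pods automatically after a template change.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/nginx:1.14-alpine", // image before the update
				}}},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}

Updating spec.template.spec.containers[0].image to gcr.io/kubernetes-e2e-test-images/redis:1.0 is the update step; pods created from the old template keep reporting the nginx image until each one is replaced, and pods that cannot tolerate the control-plane taint are skipped, as the log repeats for iruya-control-plane.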
May 22 13:31:58.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:31:58.381: INFO: namespace daemonsets-4592 deletion completed in 6.091668955s • [SLOW TEST:40.570 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:31:58.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all May 22 13:31:58.419: INFO: Waiting up to 5m0s for pod "client-containers-1d995ba3-af0e-440e-a559-a31c82be9a24" in namespace "containers-592" to be "success or failure" May 22 13:31:58.449: INFO: Pod "client-containers-1d995ba3-af0e-440e-a559-a31c82be9a24": Phase="Pending", Reason="", readiness=false. Elapsed: 30.046916ms May 22 13:32:00.454: INFO: Pod "client-containers-1d995ba3-af0e-440e-a559-a31c82be9a24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0348926s May 22 13:32:02.458: INFO: Pod "client-containers-1d995ba3-af0e-440e-a559-a31c82be9a24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038688627s STEP: Saw pod success May 22 13:32:02.458: INFO: Pod "client-containers-1d995ba3-af0e-440e-a559-a31c82be9a24" satisfied condition "success or failure" May 22 13:32:02.461: INFO: Trying to get logs from node iruya-worker2 pod client-containers-1d995ba3-af0e-440e-a559-a31c82be9a24 container test-container: STEP: delete the pod May 22 13:32:02.608: INFO: Waiting for pod client-containers-1d995ba3-af0e-440e-a559-a31c82be9a24 to disappear May 22 13:32:02.640: INFO: Pod client-containers-1d995ba3-af0e-440e-a559-a31c82be9a24 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:32:02.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-592" for this suite. 
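The "override all" pod above sets both command and args on the container: Command replaces the image's ENTRYPOINT, Args replaces its CMD. A sketch of the container spec, with an illustrative image and values, since the log does not show the pod spec:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "test-container",
		Image:   "docker.io/library/busybox:1.29", // illustrative image
		Command: []string{"/bin/echo"},            // overrides the image ENTRYPOINT
		Args:    []string{"override", "arguments"}, // overrides the image CMD
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}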
May 22 13:32:08.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:32:08.779: INFO: namespace containers-592 deletion completed in 6.136051665s • [SLOW TEST:10.398 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:32:08.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 13:32:08.879: INFO: Waiting up to 5m0s for pod "downwardapi-volume-168e3696-040c-486f-ba40-f4cc41a1b638" in namespace "projected-4516" to be "success or failure" May 22 13:32:08.885: INFO: Pod "downwardapi-volume-168e3696-040c-486f-ba40-f4cc41a1b638": Phase="Pending", Reason="", readiness=false. Elapsed: 5.723155ms May 22 13:32:10.889: INFO: Pod "downwardapi-volume-168e3696-040c-486f-ba40-f4cc41a1b638": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009670854s May 22 13:32:12.892: INFO: Pod "downwardapi-volume-168e3696-040c-486f-ba40-f4cc41a1b638": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012691371s STEP: Saw pod success May 22 13:32:12.892: INFO: Pod "downwardapi-volume-168e3696-040c-486f-ba40-f4cc41a1b638" satisfied condition "success or failure" May 22 13:32:12.894: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-168e3696-040c-486f-ba40-f4cc41a1b638 container client-container: STEP: delete the pod May 22 13:32:12.925: INFO: Waiting for pod downwardapi-volume-168e3696-040c-486f-ba40-f4cc41a1b638 to disappear May 22 13:32:12.949: INFO: Pod downwardapi-volume-168e3696-040c-486f-ba40-f4cc41a1b638 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:32:12.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4516" for this suite. 
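The projected downward API test above asserts the DefaultMode applied to the projected files: DefaultMode sets the permissions of every file the projection writes unless an item carries its own Mode. A sketch of the volume, assuming a 0400 mode and a metadata.name item; both are illustrative, not read from the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: int32Ptr(0400), // applies to all projected files
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}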
May 22 13:32:18.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:32:19.059: INFO: namespace projected-4516 deletion completed in 6.106775124s • [SLOW TEST:10.280 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:32:19.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 22 13:32:26.792: INFO: 5 pods remaining May 22 13:32:26.792: INFO: 0 pods has nil DeletionTimestamp May 22 13:32:26.792: INFO: May 22 13:32:27.703: INFO: 0 pods remaining May 22 13:32:27.703: INFO: 0 pods has nil DeletionTimestamp May 22 13:32:27.703: INFO: STEP: Gathering metrics W0522 13:32:28.842590 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 22 13:32:28.842: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:32:28.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9544" for this suite. 
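"Keep the rc around until all its pods are deleted" is foreground cascading deletion: the owner object gets a foregroundDeletion finalizer and is only removed after the garbage collector has deleted its dependents, which matches the "5 pods remaining" to "0 pods remaining" countdown above. A sketch using the pre-1.18 client-go call signatures that match this v1.15-era suite (newer releases add a context.Context argument and value-typed options); the RC name is an assumption:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster; the kubeconfig path mirrors the run above.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Foreground propagation: the RC stays (with a foregroundDeletion
	// finalizer) until the garbage collector has removed all of its pods.
	policy := metav1.DeletePropagationForeground
	err = client.CoreV1().ReplicationControllers("gc-9544").
		Delete("simpletest.rc", &metav1.DeleteOptions{PropagationPolicy: &policy}) // RC name is illustrative
	fmt.Println(err)
}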
May 22 13:32:35.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:32:35.156: INFO: namespace gc-9544 deletion completed in 6.250149939s • [SLOW TEST:16.096 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:32:35.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-0d30c0af-5bb6-4920-a20c-0d4f652b2e46 STEP: Creating a pod to test consume secrets May 22 13:32:35.263: INFO: Waiting up to 5m0s for pod "pod-secrets-25d2f9da-849e-4803-b5bd-f00f795e6cd3" in namespace "secrets-1087" to be "success or failure" May 22 13:32:35.283: INFO: Pod "pod-secrets-25d2f9da-849e-4803-b5bd-f00f795e6cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 20.013806ms May 22 13:32:37.287: INFO: Pod "pod-secrets-25d2f9da-849e-4803-b5bd-f00f795e6cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023612135s May 22 13:32:39.292: INFO: Pod "pod-secrets-25d2f9da-849e-4803-b5bd-f00f795e6cd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028820096s STEP: Saw pod success May 22 13:32:39.292: INFO: Pod "pod-secrets-25d2f9da-849e-4803-b5bd-f00f795e6cd3" satisfied condition "success or failure" May 22 13:32:39.295: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-25d2f9da-849e-4803-b5bd-f00f795e6cd3 container secret-volume-test: STEP: delete the pod May 22 13:32:39.489: INFO: Waiting for pod pod-secrets-25d2f9da-849e-4803-b5bd-f00f795e6cd3 to disappear May 22 13:32:39.500: INFO: Pod pod-secrets-25d2f9da-849e-4803-b5bd-f00f795e6cd3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:32:39.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1087" for this suite. 
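The non-root secret-volume test above combines runAsUser, fsGroup, and a restrictive defaultMode: the kubelet applies the fsGroup to the secret files, so a group-readable mode such as 0440 stays readable for the non-root user. A sketch of the relevant parts of the pod spec; the UID, group, mode, and image are illustrative assumptions, while the secret name is taken from the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // illustrative non-root UID
				FSGroup:   int64Ptr(1001), // group applied to the volume's files
			},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-0d30c0af-5bb6-4920-a20c-0d4f652b2e46",
						DefaultMode: int32Ptr(0440), // illustrative restrictive mode
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "docker.io/library/busybox:1.29", // illustrative image
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}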
May 22 13:32:45.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:32:45.776: INFO: namespace secrets-1087 deletion completed in 6.250367386s • [SLOW TEST:10.620 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:32:45.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-1131/secret-test-fc9d40c3-22ad-4c43-8446-ef8f983222c5 STEP: Creating a pod to test consume secrets May 22 13:32:46.012: INFO: Waiting up to 5m0s for pod "pod-configmaps-428dda6c-853d-4458-9163-c0c74fbe5a05" in namespace "secrets-1131" to be "success or failure" May 22 13:32:46.015: INFO: Pod "pod-configmaps-428dda6c-853d-4458-9163-c0c74fbe5a05": Phase="Pending", Reason="", readiness=false. Elapsed: 3.208898ms May 22 13:32:48.484: INFO: Pod "pod-configmaps-428dda6c-853d-4458-9163-c0c74fbe5a05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.472149169s May 22 13:32:50.488: INFO: Pod "pod-configmaps-428dda6c-853d-4458-9163-c0c74fbe5a05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.476269237s STEP: Saw pod success May 22 13:32:50.489: INFO: Pod "pod-configmaps-428dda6c-853d-4458-9163-c0c74fbe5a05" satisfied condition "success or failure" May 22 13:32:50.491: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-428dda6c-853d-4458-9163-c0c74fbe5a05 container env-test: STEP: delete the pod May 22 13:32:50.531: INFO: Waiting for pod pod-configmaps-428dda6c-853d-4458-9163-c0c74fbe5a05 to disappear May 22 13:32:50.550: INFO: Pod pod-configmaps-428dda6c-853d-4458-9163-c0c74fbe5a05 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:32:50.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1131" for this suite. 
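The environment-variable variant above injects a single secret key through secretKeyRef instead of a volume; the env-test container then just prints the variable. A sketch of the container, with an illustrative key and image; the secret name is from the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "env-test",
		Image: "docker.io/library/busybox:1.29", // illustrative image
		Env: []corev1.EnvVar{{
			Name: "SECRET_DATA",
			ValueFrom: &corev1.EnvVarSource{
				SecretKeyRef: &corev1.SecretKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{
						Name: "secret-test-fc9d40c3-22ad-4c43-8446-ef8f983222c5",
					},
					Key: "data-1", // illustrative key within the secret
				},
			},
		}},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}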
May 22 13:32:56.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:32:56.642: INFO: namespace secrets-1131 deletion completed in 6.087021058s • [SLOW TEST:10.865 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:32:56.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-d419757e-2e05-4bd3-945c-fb71861a7a8e [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:32:56.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3871" for this suite. May 22 13:33:02.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:33:02.848: INFO: namespace configmap-3871 deletion completed in 6.084876595s • [SLOW TEST:6.206 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:33:02.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:33:10.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
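The empty-key ConfigMap test above never creates a pod at all: a Data map containing an empty key fails server-side validation, so the Create call itself returns an Invalid error and the object is never stored. A sketch with the pre-1.18 client-go signatures used by this suite; the object name here is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"}, // illustrative name
		Data:       map[string]string{"": "value-1"},                   // empty key: rejected by validation
	}
	// Expect a validation (Invalid) error from the apiserver.
	_, err = client.CoreV1().ConfigMaps("configmap-3871").Create(cm)
	fmt.Println(err)
}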
STEP: Destroying namespace "kubelet-test-3436" for this suite. May 22 13:33:17.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:33:17.087: INFO: namespace kubelet-test-3436 deletion completed in 6.089472551s • [SLOW TEST:14.238 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:33:17.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium May 22 13:33:17.143: INFO: Waiting up to 5m0s for pod "pod-24d90b4e-b96d-452a-a6af-796d90242857" in namespace "emptydir-6370" to be "success or failure" May 22 13:33:17.190: INFO: Pod "pod-24d90b4e-b96d-452a-a6af-796d90242857": Phase="Pending", Reason="", readiness=false. Elapsed: 46.742219ms May 22 13:33:19.194: INFO: Pod "pod-24d90b4e-b96d-452a-a6af-796d90242857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05134638s May 22 13:33:21.198: INFO: Pod "pod-24d90b4e-b96d-452a-a6af-796d90242857": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0552032s STEP: Saw pod success May 22 13:33:21.198: INFO: Pod "pod-24d90b4e-b96d-452a-a6af-796d90242857" satisfied condition "success or failure" May 22 13:33:21.201: INFO: Trying to get logs from node iruya-worker pod pod-24d90b4e-b96d-452a-a6af-796d90242857 container test-container: STEP: delete the pod May 22 13:33:21.245: INFO: Waiting for pod pod-24d90b4e-b96d-452a-a6af-796d90242857 to disappear May 22 13:33:21.262: INFO: Pod pod-24d90b4e-b96d-452a-a6af-796d90242857 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:33:21.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6370" for this suite. 
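The emptyDir test above relies on an empty EmptyDirVolumeSource selecting the default medium (node-local disk, as opposed to Medium: "Memory"); the test pod then stats the mount point, expecting the default directory mode, conventionally 0777. A sketch of the volume definition:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Zero-valued EmptyDirVolumeSource == default medium (backed by the
	// node's filesystem rather than tmpfs).
	vol := corev1.Volume{
		Name:         "test-volume",
		VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}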
May 22 13:33:27.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:33:27.371: INFO: namespace emptydir-6370 deletion completed in 6.105564737s • [SLOW TEST:10.283 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:33:27.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5822 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-5822 STEP: Creating statefulset with conflicting port in namespace statefulset-5822 STEP: Waiting until pod test-pod will start running in namespace statefulset-5822 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5822 May 22 13:33:31.543: INFO: Observed stateful pod in namespace: statefulset-5822, name: ss-0, uid: b9219957-0cc6-4acb-9b0a-4edfa3aa417d, status phase: Pending. Waiting for statefulset controller to delete. May 22 13:33:32.123: INFO: Observed stateful pod in namespace: statefulset-5822, name: ss-0, uid: b9219957-0cc6-4acb-9b0a-4edfa3aa417d, status phase: Failed. Waiting for statefulset controller to delete. May 22 13:33:32.191: INFO: Observed stateful pod in namespace: statefulset-5822, name: ss-0, uid: b9219957-0cc6-4acb-9b0a-4edfa3aa417d, status phase: Failed. Waiting for statefulset controller to delete. 
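The wait in the StatefulSet step above amounts to polling: fetch ss-0 until it reappears under a new UID and reaches Running, proving the controller replaced the failed pod. A hedged client-go sketch of such a wait; the namespace, timeout, and the previously recorded UID are placeholders:

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForRecreated polls until the named pod exists with a UID different from
    // oldUID and is Running, i.e. the controller has replaced the earlier pod.
    func waitForRecreated(clientset *kubernetes.Clientset, ns, name string, oldUID types.UID, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil && pod.UID != oldUID && pod.Status.Phase == corev1.PodRunning {
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("pod %s/%s was not recreated within %v", ns, name, timeout)
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // "old-uid" stands in for the UID observed before the pod failed.
        if err := waitForRecreated(clientset, "default", "ss-0", types.UID("old-uid"), 3*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("ss-0 was recreated and is running")
    }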
May 22 13:33:32.194: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5822 STEP: Removing pod with conflicting port in namespace statefulset-5822 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5822 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 22 13:33:36.279: INFO: Deleting all statefulset in ns statefulset-5822 May 22 13:33:36.283: INFO: Scaling statefulset ss to 0 May 22 13:33:56.326: INFO: Waiting for statefulset status.replicas updated to 0 May 22 13:33:56.329: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:33:56.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5822" for this suite. May 22 13:34:02.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:34:02.447: INFO: namespace statefulset-5822 deletion completed in 6.083398061s • [SLOW TEST:35.076 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:34:02.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-a70c53dd-d75f-4eaa-9b34-744c96b2b4f8 STEP: Creating a pod to test consume configMaps May 22 13:34:02.554: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c9361fe9-2290-411b-b87c-994824b49be9" in namespace "projected-3188" to be "success or failure" May 22 13:34:02.564: INFO: Pod "pod-projected-configmaps-c9361fe9-2290-411b-b87c-994824b49be9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.10478ms May 22 13:34:04.699: INFO: Pod "pod-projected-configmaps-c9361fe9-2290-411b-b87c-994824b49be9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144647581s May 22 13:34:06.704: INFO: Pod "pod-projected-configmaps-c9361fe9-2290-411b-b87c-994824b49be9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.149127861s STEP: Saw pod success May 22 13:34:06.704: INFO: Pod "pod-projected-configmaps-c9361fe9-2290-411b-b87c-994824b49be9" satisfied condition "success or failure" May 22 13:34:06.707: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-c9361fe9-2290-411b-b87c-994824b49be9 container projected-configmap-volume-test: STEP: delete the pod May 22 13:34:06.732: INFO: Waiting for pod pod-projected-configmaps-c9361fe9-2290-411b-b87c-994824b49be9 to disappear May 22 13:34:06.743: INFO: Pod pod-projected-configmaps-c9361fe9-2290-411b-b87c-994824b49be9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:34:06.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3188" for this suite. May 22 13:34:12.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:34:12.861: INFO: namespace projected-3188 deletion completed in 6.114766329s • [SLOW TEST:10.412 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:34:12.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 13:34:12.976: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6034834-c8e1-4f88-864f-3056853cb5be" in namespace "downward-api-8606" to be "success or failure" May 22 13:34:13.007: INFO: Pod "downwardapi-volume-f6034834-c8e1-4f88-864f-3056853cb5be": Phase="Pending", Reason="", readiness=false. Elapsed: 31.15345ms May 22 13:34:15.011: INFO: Pod "downwardapi-volume-f6034834-c8e1-4f88-864f-3056853cb5be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034321536s May 22 13:34:17.015: INFO: Pod "downwardapi-volume-f6034834-c8e1-4f88-864f-3056853cb5be": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.039191717s STEP: Saw pod success May 22 13:34:17.016: INFO: Pod "downwardapi-volume-f6034834-c8e1-4f88-864f-3056853cb5be" satisfied condition "success or failure" May 22 13:34:17.019: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f6034834-c8e1-4f88-864f-3056853cb5be container client-container: STEP: delete the pod May 22 13:34:17.058: INFO: Waiting for pod downwardapi-volume-f6034834-c8e1-4f88-864f-3056853cb5be to disappear May 22 13:34:17.073: INFO: Pod downwardapi-volume-f6034834-c8e1-4f88-864f-3056853cb5be no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:34:17.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8606" for this suite. May 22 13:34:23.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:34:23.158: INFO: namespace downward-api-8606 deletion completed in 6.081142239s • [SLOW TEST:10.297 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:34:23.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 13:34:23.262: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.010456ms)
May 22 13:34:23.264: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.317151ms)
May 22 13:34:23.267: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.372749ms)
May 22 13:34:23.269: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.32452ms)
May 22 13:34:23.271: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.389406ms)
May 22 13:34:23.274: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.283642ms)
May 22 13:34:23.276: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.2955ms)
May 22 13:34:23.279: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.639776ms)
May 22 13:34:23.281: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.445281ms)
May 22 13:34:23.284: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.50445ms)
May 22 13:34:23.286: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.769971ms)
May 22 13:34:23.289: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.917285ms)
May 22 13:34:23.292: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.026261ms)
May 22 13:34:23.296: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.059406ms)
May 22 13:34:23.298: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.857426ms)
May 22 13:34:23.302: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.322767ms)
May 22 13:34:23.306: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.903421ms)
May 22 13:34:23.309: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.444231ms)
May 22 13:34:23.314: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 4.440539ms)
May 22 13:34:23.317: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 3.191273ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:34:23.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9" for this suite. May 22 13:34:29.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:34:29.410: INFO: namespace proxy-9 deletion completed in 6.090091004s • [SLOW TEST:6.251 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:34:29.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 13:34:51.504: INFO: Container started at 2020-05-22 13:34:32 +0000 UTC, pod became ready at 2020-05-22 13:34:49 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:34:51.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9505" for this suite. 
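The Proxy case above issues twenty GETs against the node's proxy subresource, with the explicit kubelet port spelled into the node name. A client-go sketch of one such request, assuming a reachable cluster via KUBECONFIG; the node name iruya-worker:10250 is taken from the log:

    package main

    import (
        "context"
        "fmt"
        "os"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // GET /api/v1/nodes/<node>:<kubelet-port>/proxy/logs/ — the same path
        // the test above hits; the apiserver proxies the request to the kubelet.
        raw, err := clientset.CoreV1().RESTClient().Get().
            Resource("nodes").
            Name("iruya-worker:10250").
            SubResource("proxy").
            Suffix("logs/").
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Println(string(raw)) // directory listing, e.g. containers/ and pods/
    }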
May 22 13:35:13.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:35:13.602: INFO: namespace container-probe-9505 deletion completed in 22.095306369s • [SLOW TEST:44.192 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:35:13.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition May 22 13:35:13.687: INFO: Waiting up to 5m0s for pod "var-expansion-a435db43-0865-4e6e-b81c-cf72cb49787b" in namespace "var-expansion-7858" to be "success or failure" May 22 13:35:13.691: INFO: Pod "var-expansion-a435db43-0865-4e6e-b81c-cf72cb49787b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.67779ms May 22 13:35:15.695: INFO: Pod "var-expansion-a435db43-0865-4e6e-b81c-cf72cb49787b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007453198s May 22 13:35:17.700: INFO: Pod "var-expansion-a435db43-0865-4e6e-b81c-cf72cb49787b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012602505s STEP: Saw pod success May 22 13:35:17.700: INFO: Pod "var-expansion-a435db43-0865-4e6e-b81c-cf72cb49787b" satisfied condition "success or failure" May 22 13:35:17.703: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-a435db43-0865-4e6e-b81c-cf72cb49787b container dapi-container: STEP: delete the pod May 22 13:35:17.733: INFO: Waiting for pod var-expansion-a435db43-0865-4e6e-b81c-cf72cb49787b to disappear May 22 13:35:17.749: INFO: Pod var-expansion-a435db43-0865-4e6e-b81c-cf72cb49787b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:35:17.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7858" for this suite. 
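The readiness-probe case above checks that the pod does not report Ready before the probe's initial delay has elapsed and that the container never restarts. A sketch of a pod with an exec readiness probe; the delay, period, file path, and names are illustrative, not the suite's values. The probe handler is set via its promoted field because the embedded struct's name differs across client library versions:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        probe := &corev1.Probe{InitialDelaySeconds: 30, PeriodSeconds: 5}
        probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}}

        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "readiness-delay-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "probe-test",
                    Image: "busybox",
                    // Create the readiness file at startup; the pod still only
                    // turns Ready once the probe runs and passes.
                    Command:        []string{"sh", "-c", "touch /tmp/ready && sleep 3600"},
                    ReadinessProbe: probe,
                }},
            },
        }
        out, err := yaml.Marshal(pod)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }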
May 22 13:35:23.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:35:23.920: INFO: namespace var-expansion-7858 deletion completed in 6.166684229s • [SLOW TEST:10.317 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:35:23.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 22 13:35:28.554: INFO: Successfully updated pod "labelsupdatecdf3e834-b9ea-4e61-8396-7a92437464e5" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:35:30.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5456" for this suite. 
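The Variable Expansion case above relies on the kubelet expanding $(VAR) references in env values at container start. A sketch of a pod composing two env vars into a third; the container name dapi-container mirrors the log, while the variable names and values are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "env-composition-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{
                        {Name: "FOO", Value: "foo-value"},
                        {Name: "BAR", Value: "bar-value"},
                        // $(VAR) references to earlier entries are expanded by
                        // the kubelet, composing existing vars into a new one.
                        {Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
                    },
                }},
            },
        }
        out, err := yaml.Marshal(pod)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }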
May 22 13:35:52.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:35:52.742: INFO: namespace downward-api-5456 deletion completed in 22.122512452s • [SLOW TEST:28.822 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:35:52.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 22 13:35:52.784: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1139,SelfLink:/api/v1/namespaces/watch-1139/configmaps/e2e-watch-test-configmap-a,UID:db8ee039-a4b3-4713-a46a-d3d00c7d7fb2,ResourceVersion:12297799,Generation:0,CreationTimestamp:2020-05-22 13:35:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 22 13:35:52.784: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1139,SelfLink:/api/v1/namespaces/watch-1139/configmaps/e2e-watch-test-configmap-a,UID:db8ee039-a4b3-4713-a46a-d3d00c7d7fb2,ResourceVersion:12297799,Generation:0,CreationTimestamp:2020-05-22 13:35:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 22 13:36:02.792: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1139,SelfLink:/api/v1/namespaces/watch-1139/configmaps/e2e-watch-test-configmap-a,UID:db8ee039-a4b3-4713-a46a-d3d00c7d7fb2,ResourceVersion:12297820,Generation:0,CreationTimestamp:2020-05-22 13:35:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 22 13:36:02.792: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1139,SelfLink:/api/v1/namespaces/watch-1139/configmaps/e2e-watch-test-configmap-a,UID:db8ee039-a4b3-4713-a46a-d3d00c7d7fb2,ResourceVersion:12297820,Generation:0,CreationTimestamp:2020-05-22 13:35:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 22 13:36:12.803: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1139,SelfLink:/api/v1/namespaces/watch-1139/configmaps/e2e-watch-test-configmap-a,UID:db8ee039-a4b3-4713-a46a-d3d00c7d7fb2,ResourceVersion:12297841,Generation:0,CreationTimestamp:2020-05-22 13:35:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 22 13:36:12.803: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1139,SelfLink:/api/v1/namespaces/watch-1139/configmaps/e2e-watch-test-configmap-a,UID:db8ee039-a4b3-4713-a46a-d3d00c7d7fb2,ResourceVersion:12297841,Generation:0,CreationTimestamp:2020-05-22 13:35:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 22 13:36:22.811: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1139,SelfLink:/api/v1/namespaces/watch-1139/configmaps/e2e-watch-test-configmap-a,UID:db8ee039-a4b3-4713-a46a-d3d00c7d7fb2,ResourceVersion:12297862,Generation:0,CreationTimestamp:2020-05-22 13:35:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 22 13:36:22.811: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1139,SelfLink:/api/v1/namespaces/watch-1139/configmaps/e2e-watch-test-configmap-a,UID:db8ee039-a4b3-4713-a46a-d3d00c7d7fb2,ResourceVersion:12297862,Generation:0,CreationTimestamp:2020-05-22 13:35:52 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 22 13:36:32.816: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1139,SelfLink:/api/v1/namespaces/watch-1139/configmaps/e2e-watch-test-configmap-b,UID:a1f3836b-4466-4108-9644-ff91db6336ac,ResourceVersion:12297882,Generation:0,CreationTimestamp:2020-05-22 13:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 22 13:36:32.816: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1139,SelfLink:/api/v1/namespaces/watch-1139/configmaps/e2e-watch-test-configmap-b,UID:a1f3836b-4466-4108-9644-ff91db6336ac,ResourceVersion:12297882,Generation:0,CreationTimestamp:2020-05-22 13:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 22 13:36:42.824: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1139,SelfLink:/api/v1/namespaces/watch-1139/configmaps/e2e-watch-test-configmap-b,UID:a1f3836b-4466-4108-9644-ff91db6336ac,ResourceVersion:12297901,Generation:0,CreationTimestamp:2020-05-22 13:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 22 13:36:42.824: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1139,SelfLink:/api/v1/namespaces/watch-1139/configmaps/e2e-watch-test-configmap-b,UID:a1f3836b-4466-4108-9644-ff91db6336ac,ResourceVersion:12297901,Generation:0,CreationTimestamp:2020-05-22 13:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:36:52.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1139" for this suite. 
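The three watchers in this case are plain label-selector watches on configmaps. A client-go sketch of the "label A" watcher, assuming KUBECONFIG and the default namespace; the selector string mirrors the labels shown in the log:

    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Watch only configmaps labeled like the "label A" group above.
        w, err := clientset.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
            LabelSelector: "watch-this-configmap=multiple-watchers-A",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        // Each event is ADDED, MODIFIED, or DELETED, mirroring the log entries.
        for ev := range w.ResultChan() {
            cm, ok := ev.Object.(*corev1.ConfigMap)
            if !ok {
                continue
            }
            fmt.Printf("Got : %s %s (resourceVersion %s)\n", ev.Type, cm.Name, cm.ResourceVersion)
        }
    }

Run against the sequence above, this would print the same ADDED/MODIFIED/DELETED progression, once per event rather than once per watcher.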
May 22 13:36:58.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:36:58.978: INFO: namespace watch-1139 deletion completed in 6.147182248s • [SLOW TEST:66.235 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:36:58.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 22 13:36:59.033: INFO: Waiting up to 5m0s for pod "pod-8c070afb-8730-4ca0-afcb-797e0bc05fb3" in namespace "emptydir-9722" to be "success or failure" May 22 13:36:59.045: INFO: Pod "pod-8c070afb-8730-4ca0-afcb-797e0bc05fb3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.65867ms May 22 13:37:01.049: INFO: Pod "pod-8c070afb-8730-4ca0-afcb-797e0bc05fb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016153813s May 22 13:37:03.053: INFO: Pod "pod-8c070afb-8730-4ca0-afcb-797e0bc05fb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020109197s STEP: Saw pod success May 22 13:37:03.053: INFO: Pod "pod-8c070afb-8730-4ca0-afcb-797e0bc05fb3" satisfied condition "success or failure" May 22 13:37:03.056: INFO: Trying to get logs from node iruya-worker pod pod-8c070afb-8730-4ca0-afcb-797e0bc05fb3 container test-container: STEP: delete the pod May 22 13:37:03.089: INFO: Waiting for pod pod-8c070afb-8730-4ca0-afcb-797e0bc05fb3 to disappear May 22 13:37:03.107: INFO: Pod pod-8c070afb-8730-4ca0-afcb-797e0bc05fb3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:37:03.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9722" for this suite. 
May 22 13:37:09.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:37:09.201: INFO: namespace emptydir-9722 deletion completed in 6.09080388s • [SLOW TEST:10.223 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:37:09.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:37:13.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7092" for this suite. 
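The hostAliases case above verifies that entries from pod.spec.hostAliases are written into the container's /etc/hosts. A sketch of such a pod; the IP address and hostnames are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "hostaliases-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                // Each HostAlias becomes a line in the container's /etc/hosts.
                HostAliases: []corev1.HostAlias{{
                    IP:        "123.45.67.89",
                    Hostnames: []string{"foo.local", "bar.local"},
                }},
                Containers: []corev1.Container{{
                    Name:    "busybox-host-aliases",
                    Image:   "busybox",
                    Command: []string{"cat", "/etc/hosts"},
                }},
            },
        }
        out, err := yaml.Marshal(pod)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }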
May 22 13:38:03.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:38:03.378: INFO: namespace kubelet-test-7092 deletion completed in 50.093076687s • [SLOW TEST:54.177 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:38:03.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 22 13:38:03.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1293' May 22 13:38:06.219: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 22 13:38:06.219: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 22 13:38:06.264: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-zm5vk] May 22 13:38:06.264: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-zm5vk" in namespace "kubectl-1293" to be "running and ready" May 22 13:38:06.288: INFO: Pod "e2e-test-nginx-rc-zm5vk": Phase="Pending", Reason="", readiness=false. Elapsed: 23.627751ms May 22 13:38:08.293: INFO: Pod "e2e-test-nginx-rc-zm5vk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028756529s May 22 13:38:10.297: INFO: Pod "e2e-test-nginx-rc-zm5vk": Phase="Running", Reason="", readiness=true. Elapsed: 4.032854189s May 22 13:38:10.297: INFO: Pod "e2e-test-nginx-rc-zm5vk" satisfied condition "running and ready" May 22 13:38:10.297: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-zm5vk] May 22 13:38:10.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-1293' May 22 13:38:10.411: INFO: stderr: "" May 22 13:38:10.411: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 May 22 13:38:10.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1293' May 22 13:38:10.518: INFO: stderr: "" May 22 13:38:10.518: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:38:10.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1293" for this suite. May 22 13:38:32.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:38:32.622: INFO: namespace kubectl-1293 deletion completed in 22.098559305s • [SLOW TEST:29.243 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:38:32.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
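As the stderr warning in the kubectl case above notes, kubectl run --generator=run/v1 is deprecated. One replacement is to create the equivalent ReplicationController directly. A hedged client-go sketch of what that generator produced; the namespace and the run label key are assumptions, while the names and image come from the log:

    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        one := int32(1)
        labels := map[string]string{"run": "e2e-test-nginx-rc"}
        rc := &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-rc"},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &one,
                Selector: labels,
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "e2e-test-nginx-rc",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        created, err := clientset.CoreV1().ReplicationControllers("default").Create(context.TODO(), rc, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("created replicationcontroller", created.Name)
    }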
May 22 13:38:32.730: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:32.735: INFO: Number of nodes with available pods: 0 May 22 13:38:32.735: INFO: Node iruya-worker is running more than one daemon pod May 22 13:38:33.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:33.742: INFO: Number of nodes with available pods: 0 May 22 13:38:33.742: INFO: Node iruya-worker is running more than one daemon pod May 22 13:38:34.840: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:34.842: INFO: Number of nodes with available pods: 0 May 22 13:38:34.842: INFO: Node iruya-worker is running more than one daemon pod May 22 13:38:35.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:35.742: INFO: Number of nodes with available pods: 0 May 22 13:38:35.742: INFO: Node iruya-worker is running more than one daemon pod May 22 13:38:36.749: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:36.752: INFO: Number of nodes with available pods: 1 May 22 13:38:36.752: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:37.758: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:37.760: INFO: Number of nodes with available pods: 1 May 22 13:38:37.760: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:38.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:38.743: INFO: Number of nodes with available pods: 2 May 22 13:38:38.743: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 22 13:38:38.799: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:38.802: INFO: Number of nodes with available pods: 1 May 22 13:38:38.802: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:39.809: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:39.813: INFO: Number of nodes with available pods: 1 May 22 13:38:39.813: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:40.809: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:40.813: INFO: Number of nodes with available pods: 1 May 22 13:38:40.813: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:41.810: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:41.817: INFO: Number of nodes with available pods: 1 May 22 13:38:41.817: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:42.808: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:42.811: INFO: Number of nodes with available pods: 1 May 22 13:38:42.811: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:43.811: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:43.814: INFO: Number of nodes with available pods: 1 May 22 13:38:43.814: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:44.807: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:44.811: INFO: Number of nodes with available pods: 1 May 22 13:38:44.811: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:45.823: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:45.826: INFO: Number of nodes with available pods: 1 May 22 13:38:45.826: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:46.807: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:46.811: INFO: Number of nodes with available pods: 1 May 22 13:38:46.811: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:47.807: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:47.810: INFO: Number of nodes with available pods: 1 May 22 13:38:47.810: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:48.808: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:48.812: INFO: Number of nodes with available pods: 1 May 22 13:38:48.812: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:49.807: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:49.811: INFO: Number of nodes with available pods: 1 May 22 13:38:49.811: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:50.808: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:50.812: INFO: Number of nodes with available pods: 1 May 22 13:38:50.812: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:51.807: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:51.811: INFO: Number of nodes with available pods: 1 May 22 13:38:51.811: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:52.807: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:52.811: INFO: Number of nodes with available pods: 1 May 22 13:38:52.811: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:53.807: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:53.811: INFO: Number of nodes with available pods: 1 May 22 13:38:53.811: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:54.807: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:54.810: INFO: Number of nodes with available pods: 1 May 22 13:38:54.810: INFO: Node iruya-worker2 is running more than one daemon pod May 22 13:38:55.807: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:38:55.810: INFO: Number of nodes with available pods: 2 May 22 13:38:55.810: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3351, will wait for the garbage collector to delete the pods May 22 13:38:55.890: INFO: Deleting DaemonSet.extensions daemon-set took: 23.939774ms May 22 13:38:56.191: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.29818ms May 22 13:39:01.903: INFO: Number of nodes with available pods: 0 May 22 13:39:01.903: INFO: Number of running nodes: 0, number of available pods: 0 May 22 13:39:01.906: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3351/daemonsets","resourceVersion":"12298312"},"items":null} May 22 13:39:01.908: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3351/pods","resourceVersion":"12298312"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:39:01.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3351" for this suite. May 22 13:39:07.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:39:08.010: INFO: namespace daemonsets-3351 deletion completed in 6.089776559s • [SLOW TEST:35.388 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:39:08.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-90dbffb8-9d06-4159-8475-bed5f913c405 STEP: Creating a pod to test consume configMaps May 22 13:39:08.177: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec975e6c-7a3a-48e3-80c6-12d2e9127c6e" in namespace "configmap-7851" to be "success or failure" May 22 13:39:08.295: INFO: Pod "pod-configmaps-ec975e6c-7a3a-48e3-80c6-12d2e9127c6e": Phase="Pending", Reason="", readiness=false. Elapsed: 118.324533ms May 22 13:39:10.367: INFO: Pod "pod-configmaps-ec975e6c-7a3a-48e3-80c6-12d2e9127c6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190034168s May 22 13:39:12.372: INFO: Pod "pod-configmaps-ec975e6c-7a3a-48e3-80c6-12d2e9127c6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.194875962s STEP: Saw pod success May 22 13:39:12.372: INFO: Pod "pod-configmaps-ec975e6c-7a3a-48e3-80c6-12d2e9127c6e" satisfied condition "success or failure" May 22 13:39:12.375: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-ec975e6c-7a3a-48e3-80c6-12d2e9127c6e container configmap-volume-test: STEP: delete the pod May 22 13:39:12.436: INFO: Waiting for pod pod-configmaps-ec975e6c-7a3a-48e3-80c6-12d2e9127c6e to disappear May 22 13:39:12.462: INFO: Pod pod-configmaps-ec975e6c-7a3a-48e3-80c6-12d2e9127c6e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:39:12.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7851" for this suite. 
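The repeated "can't tolerate" lines in the DaemonSet run above come from the NoSchedule taint on the control-plane node. A sketch of a DaemonSet whose pods would also land on that node by carrying a matching toleration; the names, image, and command are illustrative, and the suite's own DaemonSet deliberately omits this toleration:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        labels := map[string]string{"daemonset-name": "daemon-set"}
        ds := appsv1.DaemonSet{
            TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "DaemonSet"},
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        // Without this toleration the pods skip the tainted
                        // control-plane node, exactly as the log reports.
                        Tolerations: []corev1.Toleration{{
                            Key:      "node-role.kubernetes.io/master",
                            Operator: corev1.TolerationOpExists,
                            Effect:   corev1.TaintEffectNoSchedule,
                        }},
                        Containers: []corev1.Container{{
                            Name:    "app",
                            Image:   "busybox",
                            Command: []string{"sleep", "3600"},
                        }},
                    },
                },
            },
        }
        out, err := yaml.Marshal(ds)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }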
May 22 13:39:18.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:39:18.582: INFO: namespace configmap-7851 deletion completed in 6.116216966s • [SLOW TEST:10.572 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:39:18.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 13:39:18.665: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 22 13:39:18.671: INFO: Number of nodes with available pods: 0 May 22 13:39:18.671: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
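A sketch of the label flip behind the "Change node label to blue" step: write a label onto the node so that a DaemonSet whose pod template sets a matching nodeSelector is scheduled there, and unscheduled again when the label changes. The label key color is illustrative; the suite uses its own key, and the node name is taken from the log:

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Label the node; a DaemonSet with nodeSelector {"color": "blue"} in its
        // pod template then schedules onto it.
        node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "iruya-worker", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if node.Labels == nil {
            node.Labels = map[string]string{}
        }
        node.Labels["color"] = "blue"
        if _, err := clientset.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("labeled node", node.Name)
    }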
May 22 13:39:18.742: INFO: Number of nodes with available pods: 0 May 22 13:39:18.742: INFO: Node iruya-worker is running more than one daemon pod May 22 13:39:19.747: INFO: Number of nodes with available pods: 0 May 22 13:39:19.747: INFO: Node iruya-worker is running more than one daemon pod May 22 13:39:20.746: INFO: Number of nodes with available pods: 0 May 22 13:39:20.746: INFO: Node iruya-worker is running more than one daemon pod May 22 13:39:21.750: INFO: Number of nodes with available pods: 1 May 22 13:39:21.750: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 22 13:39:21.784: INFO: Number of nodes with available pods: 1 May 22 13:39:21.784: INFO: Number of running nodes: 0, number of available pods: 1 May 22 13:39:22.788: INFO: Number of nodes with available pods: 0 May 22 13:39:22.788: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 22 13:39:22.808: INFO: Number of nodes with available pods: 0 May 22 13:39:22.808: INFO: Node iruya-worker is running more than one daemon pod May 22 13:39:23.813: INFO: Number of nodes with available pods: 0 May 22 13:39:23.813: INFO: Node iruya-worker is running more than one daemon pod May 22 13:39:24.812: INFO: Number of nodes with available pods: 0 May 22 13:39:24.812: INFO: Node iruya-worker is running more than one daemon pod May 22 13:39:25.812: INFO: Number of nodes with available pods: 0 May 22 13:39:25.812: INFO: Node iruya-worker is running more than one daemon pod May 22 13:39:26.812: INFO: Number of nodes with available pods: 0 May 22 13:39:26.812: INFO: Node iruya-worker is running more than one daemon pod May 22 13:39:27.813: INFO: Number of nodes with available pods: 0 May 22 13:39:27.813: INFO: Node iruya-worker is running more than one daemon pod May 22 13:39:28.820: INFO: Number of nodes with available pods: 0 May 22 13:39:28.820: INFO: Node iruya-worker is running more than one daemon pod May 22 13:39:29.813: INFO: Number of nodes with available pods: 0 May 22 13:39:29.813: INFO: Node iruya-worker is running more than one daemon pod May 22 13:39:30.812: INFO: Number of nodes with available pods: 0 May 22 13:39:30.812: INFO: Node iruya-worker is running more than one daemon pod May 22 13:39:31.813: INFO: Number of nodes with available pods: 0 May 22 13:39:31.813: INFO: Node iruya-worker is running more than one daemon pod May 22 13:39:32.813: INFO: Number of nodes with available pods: 0 May 22 13:39:32.813: INFO: Node iruya-worker is running more than one daemon pod May 22 13:39:33.813: INFO: Number of nodes with available pods: 0 May 22 13:39:33.813: INFO: Node iruya-worker is running more than one daemon pod May 22 13:39:34.812: INFO: Number of nodes with available pods: 1 May 22 13:39:34.812: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-264, will wait for the garbage collector to delete the pods May 22 13:39:34.875: INFO: Deleting DaemonSet.extensions daemon-set took: 4.823077ms May 22 13:39:35.175: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.337237ms May 22 13:39:42.178: INFO: Number of nodes with available pods: 0 May 22 13:39:42.178: INFO: 
Number of running nodes: 0, number of available pods: 0 May 22 13:39:42.181: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-264/daemonsets","resourceVersion":"12298489"},"items":null} May 22 13:39:42.183: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-264/pods","resourceVersion":"12298489"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:39:42.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-264" for this suite. May 22 13:39:48.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:39:48.382: INFO: namespace daemonsets-264 deletion completed in 6.099219217s • [SLOW TEST:29.800 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:39:48.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-6989/configmap-test-39e0f94d-7f5b-4be5-be28-59c0d9ec1396 STEP: Creating a pod to test consume configMaps May 22 13:39:48.469: INFO: Waiting up to 5m0s for pod "pod-configmaps-24ac1481-7411-4ae9-af02-1e76e8a06a24" in namespace "configmap-6989" to be "success or failure" May 22 13:39:48.473: INFO: Pod "pod-configmaps-24ac1481-7411-4ae9-af02-1e76e8a06a24": Phase="Pending", Reason="", readiness=false. Elapsed: 3.633217ms May 22 13:39:50.476: INFO: Pod "pod-configmaps-24ac1481-7411-4ae9-af02-1e76e8a06a24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006894651s May 22 13:39:52.481: INFO: Pod "pod-configmaps-24ac1481-7411-4ae9-af02-1e76e8a06a24": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01187198s STEP: Saw pod success May 22 13:39:52.481: INFO: Pod "pod-configmaps-24ac1481-7411-4ae9-af02-1e76e8a06a24" satisfied condition "success or failure" May 22 13:39:52.485: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-24ac1481-7411-4ae9-af02-1e76e8a06a24 container env-test: STEP: delete the pod May 22 13:39:52.537: INFO: Waiting for pod pod-configmaps-24ac1481-7411-4ae9-af02-1e76e8a06a24 to disappear May 22 13:39:52.539: INFO: Pod pod-configmaps-24ac1481-7411-4ae9-af02-1e76e8a06a24 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:39:52.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6989" for this suite. May 22 13:39:58.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:39:58.643: INFO: namespace configmap-6989 deletion completed in 6.100519331s • [SLOW TEST:10.261 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:39:58.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 22 13:40:02.768: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:40:02.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4520" for this suite. 
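The 'Expected: &{OK} to match Container's Termination Message: OK' check above is driven by a pod along these lines; a hedged sketch, assuming a busybox image and the default /dev/termination-log path (names are illustrative, not from this run):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-container"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:  "termination-message-container",
    				Image: "busybox",
    				// Write the expected message to the termination-message file.
    				Command:                []string{"/bin/sh", "-c", "echo -n OK > /dev/termination-log"},
    				TerminationMessagePath: "/dev/termination-log",
    				// Read the file; fall back to container logs only when the
    				// container fails and the file is empty.
    				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    			}},
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    }

With FallbackToLogsOnError the kubelet still prefers the file when it is non-empty, which is why writing OK there is sufficient even though this pod succeeds.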
May 22 13:40:08.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:40:08.889: INFO: namespace container-runtime-4520 deletion completed in 6.088250688s • [SLOW TEST:10.247 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:40:08.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod May 22 13:40:08.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2338' May 22 13:40:09.241: INFO: stderr: "" May 22 13:40:09.241: INFO: stdout: "pod/pause created\n" May 22 13:40:09.241: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 22 13:40:09.241: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2338" to be "running and ready" May 22 13:40:09.266: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 24.572476ms May 22 13:40:11.278: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037199404s May 22 13:40:13.282: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.041029073s May 22 13:40:13.282: INFO: Pod "pause" satisfied condition "running and ready" May 22 13:40:13.282: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod May 22 13:40:13.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2338' May 22 13:40:13.384: INFO: stderr: "" May 22 13:40:13.384: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 22 13:40:13.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2338' May 22 13:40:13.480: INFO: stderr: "" May 22 13:40:13.481: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 22 13:40:13.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2338' May 22 13:40:13.586: INFO: stderr: "" May 22 13:40:13.586: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 22 13:40:13.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2338' May 22 13:40:13.684: INFO: stderr: "" May 22 13:40:13.684: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources May 22 13:40:13.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2338' May 22 13:40:13.837: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 13:40:13.837: INFO: stdout: "pod \"pause\" force deleted\n" May 22 13:40:13.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2338' May 22 13:40:13.944: INFO: stderr: "No resources found.\n" May 22 13:40:13.944: INFO: stdout: "" May 22 13:40:13.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2338 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 22 13:40:14.046: INFO: stderr: "" May 22 13:40:14.046: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:40:14.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2338" for this suite. 
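The three kubectl invocations above (label, get -L, then label with the trailing dash to remove) can be reproduced outside the suite. A minimal sketch using os/exec, assuming kubectl is on PATH and the default kubeconfig points at the cluster; the namespace value is the one from this run and will not exist elsewhere:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes kubectl with the given arguments and returns combined output.
    func run(args ...string) string {
    	out, err := exec.Command("kubectl", args...).CombinedOutput()
    	if err != nil {
    		fmt.Println("kubectl error:", err)
    	}
    	return string(out)
    }

    func main() {
    	ns := "kubectl-2338" // namespace from the run above; illustrative only
    	// Add the label, show it, then remove it with the trailing-dash syntax.
    	fmt.Print(run("label", "pods", "pause", "testing-label=testing-label-value", "--namespace", ns))
    	fmt.Print(run("get", "pod", "pause", "-L", "testing-label", "--namespace", ns))
    	fmt.Print(run("label", "pods", "pause", "testing-label-", "--namespace", ns))
    }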
May 22 13:40:20.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:40:20.192: INFO: namespace kubectl-2338 deletion completed in 6.14282055s • [SLOW TEST:11.302 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:40:20.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-f4d1c9d6-543f-4c2f-b7c8-664f30751a2d STEP: Creating a pod to test consume configMaps May 22 13:40:20.272: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-970be394-3795-4c03-bb5c-d63cc876910c" in namespace "projected-448" to be "success or failure" May 22 13:40:20.276: INFO: Pod "pod-projected-configmaps-970be394-3795-4c03-bb5c-d63cc876910c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.533447ms May 22 13:40:22.280: INFO: Pod "pod-projected-configmaps-970be394-3795-4c03-bb5c-d63cc876910c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007801301s May 22 13:40:24.344: INFO: Pod "pod-projected-configmaps-970be394-3795-4c03-bb5c-d63cc876910c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072074952s STEP: Saw pod success May 22 13:40:24.344: INFO: Pod "pod-projected-configmaps-970be394-3795-4c03-bb5c-d63cc876910c" satisfied condition "success or failure" May 22 13:40:24.347: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-970be394-3795-4c03-bb5c-d63cc876910c container projected-configmap-volume-test: STEP: delete the pod May 22 13:40:24.508: INFO: Waiting for pod pod-projected-configmaps-970be394-3795-4c03-bb5c-d63cc876910c to disappear May 22 13:40:24.528: INFO: Pod pod-projected-configmaps-970be394-3795-4c03-bb5c-d63cc876910c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:40:24.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-448" for this suite. 
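For reference, a hedged sketch of the pod shape this test exercises: a projected volume sourcing a ConfigMap, read by a pod running as a non-root UID. The UID 1000, mount path, image, and names are assumptions:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	uid := int64(1000) // any non-root UID satisfies the non-root requirement
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
    		Spec: corev1.PodSpec{
    			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
    			Volumes: []corev1.Volume{{
    				Name: "projected-configmap-volume",
    				VolumeSource: corev1.VolumeSource{
    					Projected: &corev1.ProjectedVolumeSource{
    						Sources: []corev1.VolumeProjection{{
    							ConfigMap: &corev1.ConfigMapProjection{
    								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
    							},
    						}},
    					},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:         "projected-configmap-volume-test",
    				Image:        "busybox",
    				Command:      []string{"/bin/sh", "-c", "cat /etc/projected-configmap-volume/*"},
    				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
    			}},
    			RestartPolicy: corev1.RestartPolicyNever,
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    }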
May 22 13:40:30.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:40:30.650: INFO: namespace projected-448 deletion completed in 6.11829241s • [SLOW TEST:10.458 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:40:30.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 22 13:40:30.763: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 22 13:40:30.771: INFO: Waiting for terminating namespaces to be deleted... May 22 13:40:30.774: INFO: Logging pods the kubelet thinks are on node iruya-worker before test May 22 13:40:30.779: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 22 13:40:30.779: INFO: Container kube-proxy ready: true, restart count 0 May 22 13:40:30.779: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 22 13:40:30.779: INFO: Container kindnet-cni ready: true, restart count 0 May 22 13:40:30.779: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test May 22 13:40:30.785: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container status recorded) May 22 13:40:30.785: INFO: Container coredns ready: true, restart count 0 May 22 13:40:30.785: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container status recorded) May 22 13:40:30.785: INFO: Container coredns ready: true, restart count 0 May 22 13:40:30.785: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container status recorded) May 22 13:40:30.785: INFO: Container kube-proxy ready: true, restart count 0 May 22 13:40:30.785: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container status recorded) May 22 13:40:30.785: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16115d64c3c44324], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
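The FailedScheduling event above is provoked by a pod whose nodeSelector matches no node label. A minimal sketch of such a pod; the selector key and value are assumptions, and any pair absent from every node behaves the same:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
    		Spec: corev1.PodSpec{
    			// No node carries this label, so the scheduler reports
    			// "0/3 nodes are available: 3 node(s) didn't match node selector."
    			NodeSelector: map[string]string{"label": "nonempty"},
    			Containers:   []corev1.Container{{Name: "restricted", Image: "k8s.gcr.io/pause:3.1"}},
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    }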
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:40:31.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6287" for this suite. May 22 13:40:37.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:40:37.908: INFO: namespace sched-pred-6287 deletion completed in 6.098054062s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.258 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:40:37.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 13:40:42.097: INFO: Waiting up to 5m0s for pod "client-envvars-c53ee266-d8f3-495d-9435-43d197189f43" in namespace "pods-2551" to be "success or failure" May 22 13:40:42.106: INFO: Pod "client-envvars-c53ee266-d8f3-495d-9435-43d197189f43": Phase="Pending", Reason="", readiness=false. Elapsed: 8.884652ms May 22 13:40:44.110: INFO: Pod "client-envvars-c53ee266-d8f3-495d-9435-43d197189f43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012906633s May 22 13:40:46.114: INFO: Pod "client-envvars-c53ee266-d8f3-495d-9435-43d197189f43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0173491s STEP: Saw pod success May 22 13:40:46.115: INFO: Pod "client-envvars-c53ee266-d8f3-495d-9435-43d197189f43" satisfied condition "success or failure" May 22 13:40:46.118: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-c53ee266-d8f3-495d-9435-43d197189f43 container env3cont: STEP: delete the pod May 22 13:40:46.141: INFO: Waiting for pod client-envvars-c53ee266-d8f3-495d-9435-43d197189f43 to disappear May 22 13:40:46.145: INFO: Pod client-envvars-c53ee266-d8f3-495d-9435-43d197189f43 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:40:46.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2551" for this suite. 
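This test depends on the kubelet injecting Docker-link-style service environment variables into containers started after the service exists. A small sketch of the naming convention it checks, with a hypothetical service name and port (not taken from this run):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // serviceEnvNames returns the conventional kubelet-injected variable names
    // for a service: hyphens become underscores and the name is upper-cased.
    func serviceEnvNames(service string, port int) []string {
    	s := strings.ToUpper(strings.ReplaceAll(service, "-", "_"))
    	return []string{
    		fmt.Sprintf("%s_SERVICE_HOST", s),
    		fmt.Sprintf("%s_SERVICE_PORT", s),
    		fmt.Sprintf("%s_PORT", s),
    		fmt.Sprintf("%s_PORT_%d_TCP", s, port),
    	}
    }

    func main() {
    	// "fooservice" and 8765 are illustrative assumptions.
    	for _, name := range serviceEnvNames("fooservice", 8765) {
    		fmt.Println(name)
    	}
    }

This is also why the client pod above is created only after the service: variables are captured at container start and are not updated afterwards.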
May 22 13:41:36.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:41:36.237: INFO: namespace pods-2551 deletion completed in 50.089470761s • [SLOW TEST:58.329 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:41:36.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 13:41:36.317: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:41:40.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8601" for this suite. 
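Exec traffic goes through the pod's exec subresource; a websocket client upgrades a request to the same URL that kubectl exec drives over SPDY. A sketch of how such a URL is assembled, with a placeholder API server address, container name, and command:

    package main

    import (
    	"fmt"
    	"net/url"
    )

    func main() {
    	u := url.URL{
    		Scheme: "wss",            // websocket upgrade instead of SPDY
    		Host:   "127.0.0.1:6443", // placeholder API server address
    		Path:   "/api/v1/namespaces/pods-8601/pods/pod-exec-websocket/exec",
    	}
    	q := u.Query()
    	// Repeated "command" parameters form the argv of the remote process.
    	q.Add("command", "cat")
    	q.Add("command", "/etc/resolv.conf")
    	q.Set("container", "main")
    	q.Set("stdout", "true")
    	q.Set("stderr", "true")
    	u.RawQuery = q.Encode()
    	fmt.Println(u.String())
    }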
May 22 13:42:18.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:42:18.553: INFO: namespace pods-8601 deletion completed in 38.107702539s • [SLOW TEST:42.315 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:42:18.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-701c42f7-3a4a-468f-bef4-2fc8d40ac30f STEP: Creating a pod to test consume configMaps May 22 13:42:18.687: INFO: Waiting up to 5m0s for pod "pod-configmaps-bf825fbf-aa4d-4784-8498-bc23257f92c6" in namespace "configmap-3751" to be "success or failure" May 22 13:42:18.701: INFO: Pod "pod-configmaps-bf825fbf-aa4d-4784-8498-bc23257f92c6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.432679ms May 22 13:42:20.705: INFO: Pod "pod-configmaps-bf825fbf-aa4d-4784-8498-bc23257f92c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017463897s May 22 13:42:22.709: INFO: Pod "pod-configmaps-bf825fbf-aa4d-4784-8498-bc23257f92c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021682201s STEP: Saw pod success May 22 13:42:22.709: INFO: Pod "pod-configmaps-bf825fbf-aa4d-4784-8498-bc23257f92c6" satisfied condition "success or failure" May 22 13:42:22.712: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-bf825fbf-aa4d-4784-8498-bc23257f92c6 container configmap-volume-test: STEP: delete the pod May 22 13:42:22.775: INFO: Waiting for pod pod-configmaps-bf825fbf-aa4d-4784-8498-bc23257f92c6 to disappear May 22 13:42:22.793: INFO: Pod pod-configmaps-bf825fbf-aa4d-4784-8498-bc23257f92c6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:42:22.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3751" for this suite. 
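A hedged sketch of the pod shape here: one ConfigMap consumed through two separate volumes in the same pod (volume names, image, and mount paths are assumptions):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// Both volumes reference the same ConfigMap by name.
    	src := corev1.VolumeSource{
    		ConfigMap: &corev1.ConfigMapVolumeSource{
    			LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
    		},
    	}
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
    		Spec: corev1.PodSpec{
    			Volumes: []corev1.Volume{
    				{Name: "configmap-volume-1", VolumeSource: src},
    				{Name: "configmap-volume-2", VolumeSource: src},
    			},
    			Containers: []corev1.Container{{
    				Name:    "configmap-volume-test",
    				Image:   "busybox",
    				Command: []string{"/bin/sh", "-c", "cat /etc/configmap-volume-1/* /etc/configmap-volume-2/*"},
    				VolumeMounts: []corev1.VolumeMount{
    					{Name: "configmap-volume-1", MountPath: "/etc/configmap-volume-1"},
    					{Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2"},
    				},
    			}},
    			RestartPolicy: corev1.RestartPolicyNever,
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    }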
May 22 13:42:28.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:42:28.938: INFO: namespace configmap-3751 deletion completed in 6.141329925s • [SLOW TEST:10.384 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:42:28.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-65204ca7-f22e-40b5-91e2-df07b4ee2764 STEP: Creating a pod to test consume secrets May 22 13:42:29.018: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-91f28f86-0a8e-4013-bd82-2c9dded91b33" in namespace "projected-2234" to be "success or failure" May 22 13:42:29.021: INFO: Pod "pod-projected-secrets-91f28f86-0a8e-4013-bd82-2c9dded91b33": Phase="Pending", Reason="", readiness=false. Elapsed: 3.782714ms May 22 13:42:31.026: INFO: Pod "pod-projected-secrets-91f28f86-0a8e-4013-bd82-2c9dded91b33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008707826s May 22 13:42:33.030: INFO: Pod "pod-projected-secrets-91f28f86-0a8e-4013-bd82-2c9dded91b33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012338922s STEP: Saw pod success May 22 13:42:33.030: INFO: Pod "pod-projected-secrets-91f28f86-0a8e-4013-bd82-2c9dded91b33" satisfied condition "success or failure" May 22 13:42:33.033: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-91f28f86-0a8e-4013-bd82-2c9dded91b33 container secret-volume-test: STEP: delete the pod May 22 13:42:33.055: INFO: Waiting for pod pod-projected-secrets-91f28f86-0a8e-4013-bd82-2c9dded91b33 to disappear May 22 13:42:33.063: INFO: Pod pod-projected-secrets-91f28f86-0a8e-4013-bd82-2c9dded91b33 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:42:33.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2234" for this suite. 
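The projected-secret variant follows the same multi-volume pattern as the ConfigMap case above. A sketch of the Secret and one projected volume referencing it (key, value, and names are assumptions):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	secret := corev1.Secret{
    		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-test"},
    		Type:       corev1.SecretTypeOpaque,
    		Data:       map[string][]byte{"data-1": []byte("value-1")},
    	}
    	// A projected volume can bundle this secret (optionally alongside
    	// configmaps, downward API fields, or a service account token)
    	// under a single mount point.
    	vol := corev1.Volume{
    		Name: "secret-volume",
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					Secret: &corev1.SecretProjection{
    						LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
    					},
    				}},
    			},
    		},
    	}
    	for _, v := range []interface{}{secret, vol} {
    		out, _ := json.MarshalIndent(v, "", "  ")
    		fmt.Println(string(out))
    	}
    }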
May 22 13:42:39.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:42:39.154: INFO: namespace projected-2234 deletion completed in 6.086894211s • [SLOW TEST:10.215 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:42:39.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e9b248b6-18a6-4bbb-bb85-263e19db6550 STEP: Creating a pod to test consume secrets May 22 13:42:39.264: INFO: Waiting up to 5m0s for pod "pod-secrets-cc7e54e7-6eb8-47f8-922e-89742c93fd2f" in namespace "secrets-2955" to be "success or failure" May 22 13:42:39.328: INFO: Pod "pod-secrets-cc7e54e7-6eb8-47f8-922e-89742c93fd2f": Phase="Pending", Reason="", readiness=false. Elapsed: 64.673204ms May 22 13:42:41.333: INFO: Pod "pod-secrets-cc7e54e7-6eb8-47f8-922e-89742c93fd2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069559706s May 22 13:42:43.337: INFO: Pod "pod-secrets-cc7e54e7-6eb8-47f8-922e-89742c93fd2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073876825s STEP: Saw pod success May 22 13:42:43.337: INFO: Pod "pod-secrets-cc7e54e7-6eb8-47f8-922e-89742c93fd2f" satisfied condition "success or failure" May 22 13:42:43.341: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-cc7e54e7-6eb8-47f8-922e-89742c93fd2f container secret-env-test: STEP: delete the pod May 22 13:42:43.365: INFO: Waiting for pod pod-secrets-cc7e54e7-6eb8-47f8-922e-89742c93fd2f to disappear May 22 13:42:43.370: INFO: Pod pod-secrets-cc7e54e7-6eb8-47f8-922e-89742c93fd2f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:42:43.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2955" for this suite. 
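A hedged sketch of the env-var plumbing this test checks: a container environment variable whose value comes from one key of a Secret (the secret name, key, and variable name are assumptions):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	env := corev1.EnvVar{
    		Name: "SECRET_DATA", // variable name the test container echoes
    		ValueFrom: &corev1.EnvVarSource{
    			SecretKeyRef: &corev1.SecretKeySelector{
    				LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
    				Key:                  "data-1", // key inside the Secret's Data map
    			},
    		},
    	}
    	out, _ := json.MarshalIndent(env, "", "  ")
    	fmt.Println(string(out))
    }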
May 22 13:42:49.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:42:49.477: INFO: namespace secrets-2955 deletion completed in 6.105098532s • [SLOW TEST:10.323 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:42:49.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod May 22 13:42:53.553: INFO: Pod pod-hostip-7070a224-8475-48ba-87e7-ae88b4bb5eeb has hostIP: 172.17.0.6 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:42:53.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4287" for this suite. May 22 13:43:15.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:43:15.643: INFO: namespace pods-4287 deletion completed in 22.086648472s • [SLOW TEST:26.165 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:43:15.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults May 22 13:43:15.781: INFO: Waiting up to 5m0s for pod "client-containers-6f2e5920-fd49-49bf-9f0e-0b592eea97b6" in namespace "containers-8249" to be "success or failure" May 22 13:43:15.811: INFO: Pod "client-containers-6f2e5920-fd49-49bf-9f0e-0b592eea97b6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.288474ms May 22 13:43:17.815: INFO: Pod "client-containers-6f2e5920-fd49-49bf-9f0e-0b592eea97b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03398151s May 22 13:43:19.819: INFO: Pod "client-containers-6f2e5920-fd49-49bf-9f0e-0b592eea97b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037659408s STEP: Saw pod success May 22 13:43:19.819: INFO: Pod "client-containers-6f2e5920-fd49-49bf-9f0e-0b592eea97b6" satisfied condition "success or failure" May 22 13:43:19.822: INFO: Trying to get logs from node iruya-worker2 pod client-containers-6f2e5920-fd49-49bf-9f0e-0b592eea97b6 container test-container: STEP: delete the pod May 22 13:43:19.898: INFO: Waiting for pod client-containers-6f2e5920-fd49-49bf-9f0e-0b592eea97b6 to disappear May 22 13:43:19.901: INFO: Pod client-containers-6f2e5920-fd49-49bf-9f0e-0b592eea97b6 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:43:19.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8249" for this suite. May 22 13:43:25.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:43:26.016: INFO: namespace containers-8249 deletion completed in 6.112014102s • [SLOW TEST:10.373 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:43:26.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-6647 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6647 to expose endpoints map[] May 22 13:43:26.113: INFO: Get endpoints failed (12.196155ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 22 13:43:27.130: INFO: successfully validated that service endpoint-test2 in namespace services-6647 exposes endpoints map[] (1.029224467s elapsed) STEP: Creating pod pod1 in namespace services-6647 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6647 to expose endpoints map[pod1:[80]] May 22 13:43:30.379: INFO: successfully validated that service endpoint-test2 in namespace services-6647 exposes endpoints map[pod1:[80]] (3.243129645s elapsed) STEP: Creating pod pod2 in namespace services-6647 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6647 to expose endpoints map[pod1:[80] pod2:[80]] May 22 
13:43:33.484: INFO: successfully validated that service endpoint-test2 in namespace services-6647 exposes endpoints map[pod1:[80] pod2:[80]] (3.100721034s elapsed) STEP: Deleting pod pod1 in namespace services-6647 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6647 to expose endpoints map[pod2:[80]] May 22 13:43:33.522: INFO: successfully validated that service endpoint-test2 in namespace services-6647 exposes endpoints map[pod2:[80]] (25.881489ms elapsed) STEP: Deleting pod pod2 in namespace services-6647 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6647 to expose endpoints map[] May 22 13:43:34.543: INFO: successfully validated that service endpoint-test2 in namespace services-6647 exposes endpoints map[] (1.016687527s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:43:34.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6647" for this suite. May 22 13:43:56.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:43:56.721: INFO: namespace services-6647 deletion completed in 22.086550981s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:30.705 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:43:56.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 22 13:44:06.974: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8196 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 13:44:06.974: INFO: >>> kubeConfig: /root/.kube/config I0522 13:44:07.009296 6 log.go:172] (0xc001d32210) (0xc001202460) Create stream I0522 13:44:07.009333 6 log.go:172] (0xc001d32210) (0xc001202460) Stream added, broadcasting: 1 I0522 13:44:07.011868 6 log.go:172] (0xc001d32210) Reply frame received for 1 I0522 13:44:07.011921 6 log.go:172] (0xc001d32210) (0xc001a059a0) Create stream I0522 13:44:07.011933 6 log.go:172] (0xc001d32210) (0xc001a059a0) Stream added, broadcasting: 3 I0522 13:44:07.012821 6 log.go:172] (0xc001d32210) 
Reply frame received for 3 I0522 13:44:07.012856 6 log.go:172] (0xc001d32210) (0xc0025e2000) Create stream I0522 13:44:07.012866 6 log.go:172] (0xc001d32210) (0xc0025e2000) Stream added, broadcasting: 5 I0522 13:44:07.013973 6 log.go:172] (0xc001d32210) Reply frame received for 5 I0522 13:44:07.101585 6 log.go:172] (0xc001d32210) Data frame received for 5 I0522 13:44:07.101626 6 log.go:172] (0xc0025e2000) (5) Data frame handling I0522 13:44:07.101661 6 log.go:172] (0xc001d32210) Data frame received for 3 I0522 13:44:07.101686 6 log.go:172] (0xc001a059a0) (3) Data frame handling I0522 13:44:07.101705 6 log.go:172] (0xc001a059a0) (3) Data frame sent I0522 13:44:07.101723 6 log.go:172] (0xc001d32210) Data frame received for 3 I0522 13:44:07.101735 6 log.go:172] (0xc001a059a0) (3) Data frame handling I0522 13:44:07.103219 6 log.go:172] (0xc001d32210) Data frame received for 1 I0522 13:44:07.103250 6 log.go:172] (0xc001202460) (1) Data frame handling I0522 13:44:07.103282 6 log.go:172] (0xc001202460) (1) Data frame sent I0522 13:44:07.103301 6 log.go:172] (0xc001d32210) (0xc001202460) Stream removed, broadcasting: 1 I0522 13:44:07.103317 6 log.go:172] (0xc001d32210) Go away received I0522 13:44:07.103436 6 log.go:172] (0xc001d32210) (0xc001202460) Stream removed, broadcasting: 1 I0522 13:44:07.103452 6 log.go:172] (0xc001d32210) (0xc001a059a0) Stream removed, broadcasting: 3 I0522 13:44:07.103463 6 log.go:172] (0xc001d32210) (0xc0025e2000) Stream removed, broadcasting: 5 May 22 13:44:07.103: INFO: Exec stderr: "" May 22 13:44:07.103: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8196 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 13:44:07.103: INFO: >>> kubeConfig: /root/.kube/config I0522 13:44:07.128138 6 log.go:172] (0xc0014e06e0) (0xc0025e23c0) Create stream I0522 13:44:07.128169 6 log.go:172] (0xc0014e06e0) (0xc0025e23c0) Stream added, broadcasting: 1 I0522 13:44:07.130533 6 log.go:172] (0xc0014e06e0) Reply frame received for 1 I0522 13:44:07.130582 6 log.go:172] (0xc0014e06e0) (0xc0025e2460) Create stream I0522 13:44:07.130599 6 log.go:172] (0xc0014e06e0) (0xc0025e2460) Stream added, broadcasting: 3 I0522 13:44:07.131374 6 log.go:172] (0xc0014e06e0) Reply frame received for 3 I0522 13:44:07.131408 6 log.go:172] (0xc0014e06e0) (0xc0025e2500) Create stream I0522 13:44:07.131418 6 log.go:172] (0xc0014e06e0) (0xc0025e2500) Stream added, broadcasting: 5 I0522 13:44:07.132104 6 log.go:172] (0xc0014e06e0) Reply frame received for 5 I0522 13:44:07.189031 6 log.go:172] (0xc0014e06e0) Data frame received for 5 I0522 13:44:07.189057 6 log.go:172] (0xc0025e2500) (5) Data frame handling I0522 13:44:07.189084 6 log.go:172] (0xc0014e06e0) Data frame received for 3 I0522 13:44:07.189095 6 log.go:172] (0xc0025e2460) (3) Data frame handling I0522 13:44:07.189246 6 log.go:172] (0xc0025e2460) (3) Data frame sent I0522 13:44:07.189264 6 log.go:172] (0xc0014e06e0) Data frame received for 3 I0522 13:44:07.189271 6 log.go:172] (0xc0025e2460) (3) Data frame handling I0522 13:44:07.191065 6 log.go:172] (0xc0014e06e0) Data frame received for 1 I0522 13:44:07.191106 6 log.go:172] (0xc0025e23c0) (1) Data frame handling I0522 13:44:07.191132 6 log.go:172] (0xc0025e23c0) (1) Data frame sent I0522 13:44:07.191162 6 log.go:172] (0xc0014e06e0) (0xc0025e23c0) Stream removed, broadcasting: 1 I0522 13:44:07.191254 6 log.go:172] (0xc0014e06e0) (0xc0025e23c0) Stream removed, broadcasting: 1 I0522 13:44:07.191268 6 
log.go:172] (0xc0014e06e0) (0xc0025e2460) Stream removed, broadcasting: 3 I0522 13:44:07.191364 6 log.go:172] (0xc0014e06e0) Go away received I0522 13:44:07.191477 6 log.go:172] (0xc0014e06e0) (0xc0025e2500) Stream removed, broadcasting: 5 May 22 13:44:07.191: INFO: Exec stderr: "" May 22 13:44:07.191: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8196 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 13:44:07.191: INFO: >>> kubeConfig: /root/.kube/config I0522 13:44:07.216034 6 log.go:172] (0xc001194e70) (0xc001a05d60) Create stream I0522 13:44:07.216061 6 log.go:172] (0xc001194e70) (0xc001a05d60) Stream added, broadcasting: 1 I0522 13:44:07.218842 6 log.go:172] (0xc001194e70) Reply frame received for 1 I0522 13:44:07.218900 6 log.go:172] (0xc001194e70) (0xc001202640) Create stream I0522 13:44:07.218920 6 log.go:172] (0xc001194e70) (0xc001202640) Stream added, broadcasting: 3 I0522 13:44:07.219934 6 log.go:172] (0xc001194e70) Reply frame received for 3 I0522 13:44:07.219985 6 log.go:172] (0xc001194e70) (0xc001202780) Create stream I0522 13:44:07.220008 6 log.go:172] (0xc001194e70) (0xc001202780) Stream added, broadcasting: 5 I0522 13:44:07.220925 6 log.go:172] (0xc001194e70) Reply frame received for 5 I0522 13:44:07.292876 6 log.go:172] (0xc001194e70) Data frame received for 3 I0522 13:44:07.292917 6 log.go:172] (0xc001202640) (3) Data frame handling I0522 13:44:07.292977 6 log.go:172] (0xc001194e70) Data frame received for 5 I0522 13:44:07.293026 6 log.go:172] (0xc001202780) (5) Data frame handling I0522 13:44:07.293056 6 log.go:172] (0xc001202640) (3) Data frame sent I0522 13:44:07.293078 6 log.go:172] (0xc001194e70) Data frame received for 3 I0522 13:44:07.293086 6 log.go:172] (0xc001202640) (3) Data frame handling I0522 13:44:07.294972 6 log.go:172] (0xc001194e70) Data frame received for 1 I0522 13:44:07.294993 6 log.go:172] (0xc001a05d60) (1) Data frame handling I0522 13:44:07.295010 6 log.go:172] (0xc001a05d60) (1) Data frame sent I0522 13:44:07.295023 6 log.go:172] (0xc001194e70) (0xc001a05d60) Stream removed, broadcasting: 1 I0522 13:44:07.295051 6 log.go:172] (0xc001194e70) Go away received I0522 13:44:07.295166 6 log.go:172] (0xc001194e70) (0xc001a05d60) Stream removed, broadcasting: 1 I0522 13:44:07.295186 6 log.go:172] (0xc001194e70) (0xc001202640) Stream removed, broadcasting: 3 I0522 13:44:07.295206 6 log.go:172] (0xc001194e70) (0xc001202780) Stream removed, broadcasting: 5 May 22 13:44:07.295: INFO: Exec stderr: "" May 22 13:44:07.295: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8196 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 13:44:07.295: INFO: >>> kubeConfig: /root/.kube/config I0522 13:44:07.346208 6 log.go:172] (0xc001195ad0) (0xc002b88140) Create stream I0522 13:44:07.346237 6 log.go:172] (0xc001195ad0) (0xc002b88140) Stream added, broadcasting: 1 I0522 13:44:07.349104 6 log.go:172] (0xc001195ad0) Reply frame received for 1 I0522 13:44:07.349364 6 log.go:172] (0xc001195ad0) (0xc001202820) Create stream I0522 13:44:07.349378 6 log.go:172] (0xc001195ad0) (0xc001202820) Stream added, broadcasting: 3 I0522 13:44:07.350733 6 log.go:172] (0xc001195ad0) Reply frame received for 3 I0522 13:44:07.350789 6 log.go:172] (0xc001195ad0) (0xc0012028c0) Create stream I0522 13:44:07.350810 6 log.go:172] (0xc001195ad0) (0xc0012028c0) Stream added, broadcasting: 5 
I0522 13:44:07.352401 6 log.go:172] (0xc001195ad0) Reply frame received for 5 I0522 13:44:07.503323 6 log.go:172] (0xc001195ad0) Data frame received for 5 I0522 13:44:07.503349 6 log.go:172] (0xc0012028c0) (5) Data frame handling I0522 13:44:07.503364 6 log.go:172] (0xc001195ad0) Data frame received for 3 I0522 13:44:07.503371 6 log.go:172] (0xc001202820) (3) Data frame handling I0522 13:44:07.503382 6 log.go:172] (0xc001202820) (3) Data frame sent I0522 13:44:07.503389 6 log.go:172] (0xc001195ad0) Data frame received for 3 I0522 13:44:07.503398 6 log.go:172] (0xc001202820) (3) Data frame handling I0522 13:44:07.504381 6 log.go:172] (0xc001195ad0) Data frame received for 1 I0522 13:44:07.504406 6 log.go:172] (0xc002b88140) (1) Data frame handling I0522 13:44:07.504430 6 log.go:172] (0xc002b88140) (1) Data frame sent I0522 13:44:07.504446 6 log.go:172] (0xc001195ad0) (0xc002b88140) Stream removed, broadcasting: 1 I0522 13:44:07.504499 6 log.go:172] (0xc001195ad0) Go away received I0522 13:44:07.504521 6 log.go:172] (0xc001195ad0) (0xc002b88140) Stream removed, broadcasting: 1 I0522 13:44:07.504544 6 log.go:172] (0xc001195ad0) (0xc001202820) Stream removed, broadcasting: 3 I0522 13:44:07.504569 6 log.go:172] (0xc001195ad0) (0xc0012028c0) Stream removed, broadcasting: 5 May 22 13:44:07.504: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 22 13:44:07.504: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8196 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 13:44:07.504: INFO: >>> kubeConfig: /root/.kube/config I0522 13:44:07.528278 6 log.go:172] (0xc0021e8210) (0xc000217220) Create stream I0522 13:44:07.528302 6 log.go:172] (0xc0021e8210) (0xc000217220) Stream added, broadcasting: 1 I0522 13:44:07.531056 6 log.go:172] (0xc0021e8210) Reply frame received for 1 I0522 13:44:07.531094 6 log.go:172] (0xc0021e8210) (0xc001202960) Create stream I0522 13:44:07.531108 6 log.go:172] (0xc0021e8210) (0xc001202960) Stream added, broadcasting: 3 I0522 13:44:07.532157 6 log.go:172] (0xc0021e8210) Reply frame received for 3 I0522 13:44:07.532200 6 log.go:172] (0xc0021e8210) (0xc000217360) Create stream I0522 13:44:07.532217 6 log.go:172] (0xc0021e8210) (0xc000217360) Stream added, broadcasting: 5 I0522 13:44:07.533024 6 log.go:172] (0xc0021e8210) Reply frame received for 5 I0522 13:44:07.603257 6 log.go:172] (0xc0021e8210) Data frame received for 5 I0522 13:44:07.603303 6 log.go:172] (0xc000217360) (5) Data frame handling I0522 13:44:07.603337 6 log.go:172] (0xc0021e8210) Data frame received for 3 I0522 13:44:07.603354 6 log.go:172] (0xc001202960) (3) Data frame handling I0522 13:44:07.603367 6 log.go:172] (0xc001202960) (3) Data frame sent I0522 13:44:07.603377 6 log.go:172] (0xc0021e8210) Data frame received for 3 I0522 13:44:07.603386 6 log.go:172] (0xc001202960) (3) Data frame handling I0522 13:44:07.604742 6 log.go:172] (0xc0021e8210) Data frame received for 1 I0522 13:44:07.604776 6 log.go:172] (0xc000217220) (1) Data frame handling I0522 13:44:07.604799 6 log.go:172] (0xc000217220) (1) Data frame sent I0522 13:44:07.604821 6 log.go:172] (0xc0021e8210) (0xc000217220) Stream removed, broadcasting: 1 I0522 13:44:07.604851 6 log.go:172] (0xc0021e8210) Go away received I0522 13:44:07.604959 6 log.go:172] (0xc0021e8210) (0xc000217220) Stream removed, broadcasting: 1 I0522 13:44:07.604983 6 log.go:172] (0xc0021e8210) 
(0xc001202960) Stream removed, broadcasting: 3 I0522 13:44:07.605004 6 log.go:172] (0xc0021e8210) (0xc000217360) Stream removed, broadcasting: 5 May 22 13:44:07.605: INFO: Exec stderr: "" May 22 13:44:07.605: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8196 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 13:44:07.605: INFO: >>> kubeConfig: /root/.kube/config I0522 13:44:07.637517 6 log.go:172] (0xc0024b6420) (0xc002b88460) Create stream I0522 13:44:07.637555 6 log.go:172] (0xc0024b6420) (0xc002b88460) Stream added, broadcasting: 1 I0522 13:44:07.639706 6 log.go:172] (0xc0024b6420) Reply frame received for 1 I0522 13:44:07.639741 6 log.go:172] (0xc0024b6420) (0xc0025bf180) Create stream I0522 13:44:07.639751 6 log.go:172] (0xc0024b6420) (0xc0025bf180) Stream added, broadcasting: 3 I0522 13:44:07.640713 6 log.go:172] (0xc0024b6420) Reply frame received for 3 I0522 13:44:07.640746 6 log.go:172] (0xc0024b6420) (0xc000217400) Create stream I0522 13:44:07.640759 6 log.go:172] (0xc0024b6420) (0xc000217400) Stream added, broadcasting: 5 I0522 13:44:07.641601 6 log.go:172] (0xc0024b6420) Reply frame received for 5 I0522 13:44:07.715092 6 log.go:172] (0xc0024b6420) Data frame received for 5 I0522 13:44:07.715148 6 log.go:172] (0xc000217400) (5) Data frame handling I0522 13:44:07.715194 6 log.go:172] (0xc0024b6420) Data frame received for 3 I0522 13:44:07.715226 6 log.go:172] (0xc0025bf180) (3) Data frame handling I0522 13:44:07.715273 6 log.go:172] (0xc0025bf180) (3) Data frame sent I0522 13:44:07.715294 6 log.go:172] (0xc0024b6420) Data frame received for 3 I0522 13:44:07.715306 6 log.go:172] (0xc0025bf180) (3) Data frame handling I0522 13:44:07.716534 6 log.go:172] (0xc0024b6420) Data frame received for 1 I0522 13:44:07.716560 6 log.go:172] (0xc002b88460) (1) Data frame handling I0522 13:44:07.716575 6 log.go:172] (0xc002b88460) (1) Data frame sent I0522 13:44:07.716590 6 log.go:172] (0xc0024b6420) (0xc002b88460) Stream removed, broadcasting: 1 I0522 13:44:07.716610 6 log.go:172] (0xc0024b6420) Go away received I0522 13:44:07.716753 6 log.go:172] (0xc0024b6420) (0xc002b88460) Stream removed, broadcasting: 1 I0522 13:44:07.716805 6 log.go:172] (0xc0024b6420) (0xc0025bf180) Stream removed, broadcasting: 3 I0522 13:44:07.716827 6 log.go:172] (0xc0024b6420) (0xc000217400) Stream removed, broadcasting: 5 May 22 13:44:07.716: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 22 13:44:07.716: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8196 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 13:44:07.716: INFO: >>> kubeConfig: /root/.kube/config I0522 13:44:07.748077 6 log.go:172] (0xc0014e18c0) (0xc0025e2820) Create stream I0522 13:44:07.748115 6 log.go:172] (0xc0014e18c0) (0xc0025e2820) Stream added, broadcasting: 1 I0522 13:44:07.750053 6 log.go:172] (0xc0014e18c0) Reply frame received for 1 I0522 13:44:07.750080 6 log.go:172] (0xc0014e18c0) (0xc002b88500) Create stream I0522 13:44:07.750089 6 log.go:172] (0xc0014e18c0) (0xc002b88500) Stream added, broadcasting: 3 I0522 13:44:07.750856 6 log.go:172] (0xc0014e18c0) Reply frame received for 3 I0522 13:44:07.750887 6 log.go:172] (0xc0014e18c0) (0xc0002174a0) Create stream I0522 13:44:07.750895 6 log.go:172] (0xc0014e18c0) (0xc0002174a0) Stream added, 
broadcasting: 5 I0522 13:44:07.751532 6 log.go:172] (0xc0014e18c0) Reply frame received for 5 I0522 13:44:07.802780 6 log.go:172] (0xc0014e18c0) Data frame received for 5 I0522 13:44:07.802822 6 log.go:172] (0xc0002174a0) (5) Data frame handling I0522 13:44:07.802844 6 log.go:172] (0xc0014e18c0) Data frame received for 3 I0522 13:44:07.802854 6 log.go:172] (0xc002b88500) (3) Data frame handling I0522 13:44:07.802870 6 log.go:172] (0xc002b88500) (3) Data frame sent I0522 13:44:07.803282 6 log.go:172] (0xc0014e18c0) Data frame received for 3 I0522 13:44:07.803306 6 log.go:172] (0xc002b88500) (3) Data frame handling I0522 13:44:07.804830 6 log.go:172] (0xc0014e18c0) Data frame received for 1 I0522 13:44:07.804857 6 log.go:172] (0xc0025e2820) (1) Data frame handling I0522 13:44:07.804878 6 log.go:172] (0xc0025e2820) (1) Data frame sent I0522 13:44:07.804900 6 log.go:172] (0xc0014e18c0) (0xc0025e2820) Stream removed, broadcasting: 1 I0522 13:44:07.804945 6 log.go:172] (0xc0014e18c0) Go away received I0522 13:44:07.805006 6 log.go:172] (0xc0014e18c0) (0xc0025e2820) Stream removed, broadcasting: 1 I0522 13:44:07.805028 6 log.go:172] (0xc0014e18c0) (0xc002b88500) Stream removed, broadcasting: 3 I0522 13:44:07.805051 6 log.go:172] (0xc0014e18c0) (0xc0002174a0) Stream removed, broadcasting: 5 May 22 13:44:07.805: INFO: Exec stderr: "" May 22 13:44:07.805: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8196 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 13:44:07.805: INFO: >>> kubeConfig: /root/.kube/config I0522 13:44:07.833033 6 log.go:172] (0xc0021e9e40) (0xc0002179a0) Create stream I0522 13:44:07.833061 6 log.go:172] (0xc0021e9e40) (0xc0002179a0) Stream added, broadcasting: 1 I0522 13:44:07.834701 6 log.go:172] (0xc0021e9e40) Reply frame received for 1 I0522 13:44:07.834731 6 log.go:172] (0xc0021e9e40) (0xc001202be0) Create stream I0522 13:44:07.834747 6 log.go:172] (0xc0021e9e40) (0xc001202be0) Stream added, broadcasting: 3 I0522 13:44:07.835346 6 log.go:172] (0xc0021e9e40) Reply frame received for 3 I0522 13:44:07.835365 6 log.go:172] (0xc0021e9e40) (0xc000217a40) Create stream I0522 13:44:07.835376 6 log.go:172] (0xc0021e9e40) (0xc000217a40) Stream added, broadcasting: 5 I0522 13:44:07.836116 6 log.go:172] (0xc0021e9e40) Reply frame received for 5 I0522 13:44:07.899214 6 log.go:172] (0xc0021e9e40) Data frame received for 5 I0522 13:44:07.899246 6 log.go:172] (0xc000217a40) (5) Data frame handling I0522 13:44:07.899269 6 log.go:172] (0xc0021e9e40) Data frame received for 3 I0522 13:44:07.899281 6 log.go:172] (0xc001202be0) (3) Data frame handling I0522 13:44:07.899292 6 log.go:172] (0xc001202be0) (3) Data frame sent I0522 13:44:07.899303 6 log.go:172] (0xc0021e9e40) Data frame received for 3 I0522 13:44:07.899314 6 log.go:172] (0xc001202be0) (3) Data frame handling I0522 13:44:07.900686 6 log.go:172] (0xc0021e9e40) Data frame received for 1 I0522 13:44:07.900721 6 log.go:172] (0xc0002179a0) (1) Data frame handling I0522 13:44:07.900734 6 log.go:172] (0xc0002179a0) (1) Data frame sent I0522 13:44:07.900746 6 log.go:172] (0xc0021e9e40) (0xc0002179a0) Stream removed, broadcasting: 1 I0522 13:44:07.900768 6 log.go:172] (0xc0021e9e40) Go away received I0522 13:44:07.900921 6 log.go:172] (0xc0021e9e40) (0xc0002179a0) Stream removed, broadcasting: 1 I0522 13:44:07.900945 6 log.go:172] (0xc0021e9e40) (0xc001202be0) Stream removed, broadcasting: 3 I0522 13:44:07.900956 6 
log.go:172] (0xc0021e9e40) (0xc000217a40) Stream removed, broadcasting: 5 May 22 13:44:07.900: INFO: Exec stderr: "" May 22 13:44:07.900: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8196 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 13:44:07.901: INFO: >>> kubeConfig: /root/.kube/config I0522 13:44:07.933592 6 log.go:172] (0xc0024b7340) (0xc002b88820) Create stream I0522 13:44:07.933617 6 log.go:172] (0xc0024b7340) (0xc002b88820) Stream added, broadcasting: 1 I0522 13:44:07.935963 6 log.go:172] (0xc0024b7340) Reply frame received for 1 I0522 13:44:07.936014 6 log.go:172] (0xc0024b7340) (0xc002b888c0) Create stream I0522 13:44:07.936031 6 log.go:172] (0xc0024b7340) (0xc002b888c0) Stream added, broadcasting: 3 I0522 13:44:07.937041 6 log.go:172] (0xc0024b7340) Reply frame received for 3 I0522 13:44:07.937088 6 log.go:172] (0xc0024b7340) (0xc000217f40) Create stream I0522 13:44:07.937103 6 log.go:172] (0xc0024b7340) (0xc000217f40) Stream added, broadcasting: 5 I0522 13:44:07.938275 6 log.go:172] (0xc0024b7340) Reply frame received for 5 I0522 13:44:08.019585 6 log.go:172] (0xc0024b7340) Data frame received for 5 I0522 13:44:08.019651 6 log.go:172] (0xc000217f40) (5) Data frame handling I0522 13:44:08.019690 6 log.go:172] (0xc0024b7340) Data frame received for 3 I0522 13:44:08.019712 6 log.go:172] (0xc002b888c0) (3) Data frame handling I0522 13:44:08.019745 6 log.go:172] (0xc002b888c0) (3) Data frame sent I0522 13:44:08.019762 6 log.go:172] (0xc0024b7340) Data frame received for 3 I0522 13:44:08.019775 6 log.go:172] (0xc002b888c0) (3) Data frame handling I0522 13:44:08.021821 6 log.go:172] (0xc0024b7340) Data frame received for 1 I0522 13:44:08.021850 6 log.go:172] (0xc002b88820) (1) Data frame handling I0522 13:44:08.021862 6 log.go:172] (0xc002b88820) (1) Data frame sent I0522 13:44:08.021874 6 log.go:172] (0xc0024b7340) (0xc002b88820) Stream removed, broadcasting: 1 I0522 13:44:08.021892 6 log.go:172] (0xc0024b7340) Go away received I0522 13:44:08.022101 6 log.go:172] (0xc0024b7340) (0xc002b88820) Stream removed, broadcasting: 1 I0522 13:44:08.022121 6 log.go:172] (0xc0024b7340) (0xc002b888c0) Stream removed, broadcasting: 3 I0522 13:44:08.022131 6 log.go:172] (0xc0024b7340) (0xc000217f40) Stream removed, broadcasting: 5 May 22 13:44:08.022: INFO: Exec stderr: "" May 22 13:44:08.022: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8196 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 13:44:08.022: INFO: >>> kubeConfig: /root/.kube/config I0522 13:44:08.055471 6 log.go:172] (0xc0024b7d90) (0xc002b88be0) Create stream I0522 13:44:08.055494 6 log.go:172] (0xc0024b7d90) (0xc002b88be0) Stream added, broadcasting: 1 I0522 13:44:08.057737 6 log.go:172] (0xc0024b7d90) Reply frame received for 1 I0522 13:44:08.057781 6 log.go:172] (0xc0024b7d90) (0xc0025e28c0) Create stream I0522 13:44:08.057794 6 log.go:172] (0xc0024b7d90) (0xc0025e28c0) Stream added, broadcasting: 3 I0522 13:44:08.058800 6 log.go:172] (0xc0024b7d90) Reply frame received for 3 I0522 13:44:08.058844 6 log.go:172] (0xc0024b7d90) (0xc0025e2960) Create stream I0522 13:44:08.058854 6 log.go:172] (0xc0024b7d90) (0xc0025e2960) Stream added, broadcasting: 5 I0522 13:44:08.059926 6 log.go:172] (0xc0024b7d90) Reply frame received for 5 I0522 13:44:08.117255 6 log.go:172] (0xc0024b7d90) Data frame 
received for 5 I0522 13:44:08.117292 6 log.go:172] (0xc0025e2960) (5) Data frame handling I0522 13:44:08.117336 6 log.go:172] (0xc0024b7d90) Data frame received for 3 I0522 13:44:08.117367 6 log.go:172] (0xc0025e28c0) (3) Data frame handling I0522 13:44:08.117383 6 log.go:172] (0xc0025e28c0) (3) Data frame sent I0522 13:44:08.117394 6 log.go:172] (0xc0024b7d90) Data frame received for 3 I0522 13:44:08.117403 6 log.go:172] (0xc0025e28c0) (3) Data frame handling I0522 13:44:08.118671 6 log.go:172] (0xc0024b7d90) Data frame received for 1 I0522 13:44:08.118694 6 log.go:172] (0xc002b88be0) (1) Data frame handling I0522 13:44:08.118713 6 log.go:172] (0xc002b88be0) (1) Data frame sent I0522 13:44:08.118728 6 log.go:172] (0xc0024b7d90) (0xc002b88be0) Stream removed, broadcasting: 1 I0522 13:44:08.118745 6 log.go:172] (0xc0024b7d90) Go away received I0522 13:44:08.118911 6 log.go:172] (0xc0024b7d90) (0xc002b88be0) Stream removed, broadcasting: 1 I0522 13:44:08.118933 6 log.go:172] (0xc0024b7d90) (0xc0025e28c0) Stream removed, broadcasting: 3 I0522 13:44:08.118942 6 log.go:172] (0xc0024b7d90) (0xc0025e2960) Stream removed, broadcasting: 5 May 22 13:44:08.118: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:44:08.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8196" for this suite. May 22 13:44:54.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:44:54.229: INFO: namespace e2e-kubelet-etc-hosts-8196 deletion completed in 46.106549757s • [SLOW TEST:57.507 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:44:54.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 13:44:54.348: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f68861f7-6641-4eb8-b613-cb5393c60c4f", Controller:(*bool)(0xc0030875e2), BlockOwnerDeletion:(*bool)(0xc0030875e3)}} May 22 13:44:54.437: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6491f7e0-8d3a-4bdf-b2f9-81a5099d344d", Controller:(*bool)(0xc002bb1e62), BlockOwnerDeletion:(*bool)(0xc002bb1e63)}} May 22 13:44:54.440: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c57d65ab-5028-4e56-bb36-8f2d55aa1878", 
Controller:(*bool)(0xc00308778a), BlockOwnerDeletion:(*bool)(0xc00308778b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:44:59.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6684" for this suite. May 22 13:45:05.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:45:05.579: INFO: namespace gc-6684 deletion completed in 6.09210965s • [SLOW TEST:11.350 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:45:05.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-d885dc91-b545-49bb-91df-cdfed3e5cacb in namespace container-probe-6480 May 22 13:45:09.688: INFO: Started pod test-webserver-d885dc91-b545-49bb-91df-cdfed3e5cacb in namespace container-probe-6480 STEP: checking the pod's current state and verifying that restartCount is present May 22 13:45:09.691: INFO: Initial restart count of pod test-webserver-d885dc91-b545-49bb-91df-cdfed3e5cacb is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:49:10.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6480" for this suite. 
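The pod spec under test is never echoed into the log above, but the behavior it verifies (a container whose HTTP liveness probe keeps passing is never restarted, so restartCount stays 0 for the whole four-minute watch) corresponds to a manifest along these lines. This is a minimal sketch: the image, port, and probe timings are assumptions chosen to match the test's name, not values read from this run.

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
  namespace: container-probe-6480
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0  # assumed image
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz        # probe path from the test name
        port: 80
      initialDelaySeconds: 15  # assumed timings
      failureThreshold: 3

As long as /healthz keeps answering 200, the kubelet leaves the container alone, which is exactly what the unchanged restart count above demonstrates.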
May 22 13:49:16.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:49:16.820: INFO: namespace container-probe-6480 deletion completed in 6.097893478s • [SLOW TEST:251.240 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:49:16.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-435ca8fb-917d-4fcb-8f2b-2753eb55dbf1 STEP: Creating a pod to test consume secrets May 22 13:49:16.920: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bb59f84e-936b-4b7e-b7ac-4531e4c3523b" in namespace "projected-5213" to be "success or failure" May 22 13:49:16.939: INFO: Pod "pod-projected-secrets-bb59f84e-936b-4b7e-b7ac-4531e4c3523b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.112636ms May 22 13:49:18.974: INFO: Pod "pod-projected-secrets-bb59f84e-936b-4b7e-b7ac-4531e4c3523b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054335259s May 22 13:49:20.979: INFO: Pod "pod-projected-secrets-bb59f84e-936b-4b7e-b7ac-4531e4c3523b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058699003s STEP: Saw pod success May 22 13:49:20.979: INFO: Pod "pod-projected-secrets-bb59f84e-936b-4b7e-b7ac-4531e4c3523b" satisfied condition "success or failure" May 22 13:49:20.981: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-bb59f84e-936b-4b7e-b7ac-4531e4c3523b container projected-secret-volume-test: STEP: delete the pod May 22 13:49:21.120: INFO: Waiting for pod pod-projected-secrets-bb59f84e-936b-4b7e-b7ac-4531e4c3523b to disappear May 22 13:49:21.144: INFO: Pod pod-projected-secrets-bb59f84e-936b-4b7e-b7ac-4531e4c3523b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:49:21.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5213" for this suite. 
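For reference, consuming a secret through a projected volume, as this spec does, looks roughly like the following. Only the secret name appears in the log above; the mount path, test image, and args are assumptions for illustration.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
  namespace: projected-5213
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0        # assumed image
    args: ["--file_content=/etc/projected-secret-volume/data-1"]  # assumed args
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-435ca8fb-917d-4fcb-8f2b-2753eb55dbf1

The pod reaches "Succeeded" above because the test container just prints the mounted file and exits; the framework then checks the logs and treats "success or failure" as the terminal condition.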
May 22 13:49:27.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:49:27.261: INFO: namespace projected-5213 deletion completed in 6.113416991s • [SLOW TEST:10.441 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:49:27.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:49:31.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5019" for this suite. 
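The spec body is not printed above, but a read-only-root-filesystem pod of the kind this test schedules can be sketched as below. The image and command are assumptions; the securityContext field is the mechanism actually under test.

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs
  namespace: kubelet-test-5019
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly-fs
    image: docker.io/library/busybox:1.29  # assumed image
    # Any write to the root filesystem should fail with a read-only error,
    # which is what the test asserts.
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true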
May 22 13:50:13.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:50:13.468: INFO: namespace kubelet-test-5019 deletion completed in 42.089989541s • [SLOW TEST:46.207 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:50:13.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 22 13:50:18.112: INFO: Successfully updated pod "pod-update-c3ffed66-9a2f-4469-952d-582e9c66400e" STEP: verifying the updated pod is in kubernetes May 22 13:50:18.173: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:50:18.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1081" for this suite. 
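Only a handful of pod fields may change after creation (labels and annotations, container images, activeDeadlineSeconds, added tolerations), so "updating the pod" above amounts to patching one of those. A label change equivalent to what the test verifies could be applied as below; the label key and value are hypothetical.

# kubectl patch pod pod-update-c3ffed66-9a2f-4469-952d-582e9c66400e \
#   -n pods-1081 -p '{"metadata":{"labels":{"time":"modified"}}}'
# The inline patch above corresponds to this strategic-merge document:
metadata:
  labels:
    time: modified  # hypothetical label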
May 22 13:50:40.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:50:40.272: INFO: namespace pods-1081 deletion completed in 22.095756874s • [SLOW TEST:26.803 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:50:40.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 13:50:40.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5254' May 22 13:50:43.389: INFO: stderr: "" May 22 13:50:43.389: INFO: stdout: "replicationcontroller/redis-master created\n" May 22 13:50:43.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5254' May 22 13:50:43.677: INFO: stderr: "" May 22 13:50:43.677: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. May 22 13:50:44.682: INFO: Selector matched 1 pods for map[app:redis] May 22 13:50:44.683: INFO: Found 0 / 1 May 22 13:50:45.681: INFO: Selector matched 1 pods for map[app:redis] May 22 13:50:45.681: INFO: Found 0 / 1 May 22 13:50:46.683: INFO: Selector matched 1 pods for map[app:redis] May 22 13:50:46.683: INFO: Found 0 / 1 May 22 13:50:47.682: INFO: Selector matched 1 pods for map[app:redis] May 22 13:50:47.682: INFO: Found 1 / 1 May 22 13:50:47.682: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 22 13:50:47.685: INFO: Selector matched 1 pods for map[app:redis] May 22 13:50:47.685: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
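The manifest is piped to kubectl on stdin ("create -f -"), so it never appears in the log. Reconstructed from the describe output that follows, the replication controller is roughly this; the named container port is an assumption inferred from the service's targetPort.

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  namespace: kubectl-5254
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - name: redis-server  # assumed; the service below targets this name
          containerPort: 6379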
May 22 13:50:47.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-w5hf6 --namespace=kubectl-5254' May 22 13:50:47.791: INFO: stderr: "" May 22 13:50:47.791: INFO: stdout: "Name: redis-master-w5hf6\nNamespace: kubectl-5254\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Fri, 22 May 2020 13:50:43 +0000\nLabels: app=redis\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.178\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://30e56542013c5303cb0fc1143b6c8a7b937f1c2e917e8b3d5b8a0be399d1ea5a\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 22 May 2020 13:50:46 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-s8xgq (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-s8xgq:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-s8xgq\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-5254/redis-master-w5hf6 to iruya-worker\n Normal Pulled 3s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n" May 22 13:50:47.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-5254' May 22 13:50:47.933: INFO: stderr: "" May 22 13:50:47.933: INFO: stdout: "Name: redis-master\nNamespace: kubectl-5254\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-w5hf6\n" May 22 13:50:47.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-5254' May 22 13:50:48.042: INFO: stderr: "" May 22 13:50:48.042: INFO: stdout: "Name: redis-master\nNamespace: kubectl-5254\nLabels: app=redis\n role=master\nAnnotations: <none>\nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.102.171.71\nPort: <unset> 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.178:6379\nSession Affinity: None\nEvents: <none>\n" May 22 13:50:48.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' May 22 13:50:48.172: INFO: stderr: "" May 22 13:50:48.172: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 22 May 2020 13:50:01 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 22 May 2020 13:50:01 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 22 May 2020 13:50:01 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 22 May 2020 13:50:01 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 67d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 67d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 67d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 67d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 67d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 67d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 67d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" May 22 13:50:48.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5254' May 22 13:50:48.282: INFO: stderr: "" May 22 13:50:48.282: INFO: stdout: "Name: kubectl-5254\nLabels: e2e-framework=kubectl\n e2e-run=b25b5038-1534-4f18-a180-9bd1f494280e\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:50:48.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5254" for this suite. 
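Likewise, the redis-master service that was described above can be reconstructed from its describe output; everything here except the cluster-assigned IP comes straight from that output.

apiVersion: v1
kind: Service
metadata:
  name: redis-master
  namespace: kubectl-5254
  labels:
    app: redis
    role: master
spec:
  type: ClusterIP
  selector:
    app: redis
    role: master
  ports:
  - port: 6379
    targetPort: redis-server  # named container port on the redis pod
# The clusterIP 10.102.171.71 was assigned by the API server, not requested.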
May 22 13:51:10.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:51:10.391: INFO: namespace kubectl-5254 deletion completed in 22.105367578s • [SLOW TEST:30.118 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:51:10.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:51:16.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4131" for this suite. May 22 13:51:22.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:51:22.789: INFO: namespace namespaces-4131 deletion completed in 6.085418463s STEP: Destroying namespace "nsdeletetest-2386" for this suite. May 22 13:51:22.791: INFO: Namespace nsdeletetest-2386 was already deleted STEP: Destroying namespace "nsdeletetest-966" for this suite. 
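Deleting a namespace cascades to every namespaced object inside it, which is the property this spec checks: create a service, delete the namespace, recreate the namespace, observe that the service is gone. A minimal sketch of the setup, with hypothetical names since the test's own manifests are not logged:

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest  # hypothetical
---
apiVersion: v1
kind: Service
metadata:
  name: test-service  # hypothetical
  namespace: nsdeletetest
spec:
  selector:
    app: test
  ports:
  - port: 80
# kubectl delete namespace nsdeletetest
# removes test-service along with everything else in the namespace.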
May 22 13:51:28.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:51:28.924: INFO: namespace nsdeletetest-966 deletion completed in 6.132962253s • [SLOW TEST:18.533 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:51:28.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 13:51:29.040: INFO: Create a RollingUpdate DaemonSet May 22 13:51:29.044: INFO: Check that daemon pods launch on every node of the cluster May 22 13:51:29.053: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:51:29.058: INFO: Number of nodes with available pods: 0 May 22 13:51:29.058: INFO: Node iruya-worker is running more than one daemon pod May 22 13:51:30.071: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:51:30.074: INFO: Number of nodes with available pods: 0 May 22 13:51:30.074: INFO: Node iruya-worker is running more than one daemon pod May 22 13:51:31.064: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:51:31.067: INFO: Number of nodes with available pods: 0 May 22 13:51:31.067: INFO: Node iruya-worker is running more than one daemon pod May 22 13:51:32.085: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:51:32.204: INFO: Number of nodes with available pods: 0 May 22 13:51:32.204: INFO: Node iruya-worker is running more than one daemon pod May 22 13:51:33.064: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:51:33.068: INFO: Number of nodes with available pods: 0 May 22 13:51:33.068: INFO: Node iruya-worker is running more than one daemon pod May 22 13:51:34.064: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 22 13:51:34.067: INFO: Number of nodes with available pods: 2 May 22 13:51:34.067: INFO: Number of running nodes: 2, number of available pods: 2 May 22 13:51:34.067: INFO: Update the DaemonSet to trigger a rollout May 22 13:51:34.073: INFO: Updating DaemonSet daemon-set May 22 13:51:42.088: INFO: Roll back the DaemonSet before rollout is complete May 22 13:51:42.094: INFO: Updating DaemonSet daemon-set May 22 13:51:42.094: INFO: Make sure DaemonSet rollback is complete May 22 13:51:42.113: INFO: Wrong image for pod: daemon-set-t2q6q. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 22 13:51:42.113: INFO: Pod daemon-set-t2q6q is not available May 22 13:51:42.124: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:51:43.129: INFO: Wrong image for pod: daemon-set-t2q6q. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 22 13:51:43.129: INFO: Pod daemon-set-t2q6q is not available May 22 13:51:43.133: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:51:44.205: INFO: Wrong image for pod: daemon-set-t2q6q. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 22 13:51:44.205: INFO: Pod daemon-set-t2q6q is not available May 22 13:51:44.210: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:51:45.129: INFO: Wrong image for pod: daemon-set-t2q6q. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
May 22 13:51:45.129: INFO: Pod daemon-set-t2q6q is not available May 22 13:51:45.132: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 13:51:46.128: INFO: Pod daemon-set-drz2c is not available May 22 13:51:46.132: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6793, will wait for the garbage collector to delete the pods May 22 13:51:46.199: INFO: Deleting DaemonSet.extensions daemon-set took: 6.472215ms May 22 13:51:46.499: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.205127ms May 22 13:51:49.103: INFO: Number of nodes with available pods: 0 May 22 13:51:49.103: INFO: Number of running nodes: 0, number of available pods: 0 May 22 13:51:49.105: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6793/daemonsets","resourceVersion":"12300634"},"items":null} May 22 13:51:49.108: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6793/pods","resourceVersion":"12300634"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:51:49.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6793" for this suite. 
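The RollingUpdate DaemonSet this spec creates, updates to a bad image (foo:non-existent above), and then rolls back can be sketched as follows. The selector labels and container name are assumptions; the images come from the log.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-6793
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set  # assumed label
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app  # assumed container name
        image: docker.io/library/nginx:1.14-alpine
# Updating the template image to foo:non-existent starts a rollout that can
# never finish; rolling back mid-rollout, e.g.
#   kubectl rollout undo daemonset/daemon-set -n daemonsets-6793
# restores the working image without restarting the pods that never left it,
# which is the "without unnecessary restarts" assertion above.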
May 22 13:51:55.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:51:55.217: INFO: namespace daemonsets-6793 deletion completed in 6.097892241s • [SLOW TEST:26.292 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:51:55.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller May 22 13:51:55.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1633' May 22 13:51:55.613: INFO: stderr: "" May 22 13:51:55.613: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 22 13:51:55.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1633' May 22 13:51:55.762: INFO: stderr: "" May 22 13:51:55.762: INFO: stdout: "update-demo-nautilus-fzk46 update-demo-nautilus-lbxnw " May 22 13:51:55.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fzk46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1633' May 22 13:51:55.855: INFO: stderr: "" May 22 13:51:55.855: INFO: stdout: "" May 22 13:51:55.855: INFO: update-demo-nautilus-fzk46 is created but not running May 22 13:52:00.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1633' May 22 13:52:00.960: INFO: stderr: "" May 22 13:52:00.960: INFO: stdout: "update-demo-nautilus-fzk46 update-demo-nautilus-lbxnw " May 22 13:52:00.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fzk46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1633' May 22 13:52:01.049: INFO: stderr: "" May 22 13:52:01.049: INFO: stdout: "true" May 22 13:52:01.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fzk46 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1633' May 22 13:52:01.155: INFO: stderr: "" May 22 13:52:01.155: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 13:52:01.155: INFO: validating pod update-demo-nautilus-fzk46 May 22 13:52:01.167: INFO: got data: { "image": "nautilus.jpg" } May 22 13:52:01.167: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 13:52:01.167: INFO: update-demo-nautilus-fzk46 is verified up and running May 22 13:52:01.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbxnw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1633' May 22 13:52:01.269: INFO: stderr: "" May 22 13:52:01.269: INFO: stdout: "true" May 22 13:52:01.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lbxnw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1633' May 22 13:52:01.382: INFO: stderr: "" May 22 13:52:01.382: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 13:52:01.382: INFO: validating pod update-demo-nautilus-lbxnw May 22 13:52:01.417: INFO: got data: { "image": "nautilus.jpg" } May 22 13:52:01.417: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 13:52:01.418: INFO: update-demo-nautilus-lbxnw is verified up and running STEP: rolling-update to new replication controller May 22 13:52:01.420: INFO: scanned /root for discovery docs: May 22 13:52:01.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1633' May 22 13:52:24.025: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 22 13:52:24.025: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 22 13:52:24.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1633' May 22 13:52:24.145: INFO: stderr: "" May 22 13:52:24.145: INFO: stdout: "update-demo-kitten-l4zmm update-demo-kitten-xs8m9 " May 22 13:52:24.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l4zmm -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1633' May 22 13:52:24.246: INFO: stderr: "" May 22 13:52:24.246: INFO: stdout: "true" May 22 13:52:24.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l4zmm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1633' May 22 13:52:24.347: INFO: stderr: "" May 22 13:52:24.347: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 22 13:52:24.347: INFO: validating pod update-demo-kitten-l4zmm May 22 13:52:24.358: INFO: got data: { "image": "kitten.jpg" } May 22 13:52:24.358: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 22 13:52:24.358: INFO: update-demo-kitten-l4zmm is verified up and running May 22 13:52:24.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xs8m9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1633' May 22 13:52:24.460: INFO: stderr: "" May 22 13:52:24.461: INFO: stdout: "true" May 22 13:52:24.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xs8m9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1633' May 22 13:52:24.555: INFO: stderr: "" May 22 13:52:24.555: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 22 13:52:24.555: INFO: validating pod update-demo-kitten-xs8m9 May 22 13:52:24.572: INFO: got data: { "image": "kitten.jpg" } May 22 13:52:24.572: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 22 13:52:24.572: INFO: update-demo-kitten-xs8m9 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:52:24.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1633" for this suite. 
May 22 13:52:48.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:52:48.653: INFO: namespace kubectl-1633 deletion completed in 24.078547335s • [SLOW TEST:53.435 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:52:48.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 13:52:48.730: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 22 13:52:53.735: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 22 13:52:53.735: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 22 13:52:55.738: INFO: Creating deployment "test-rollover-deployment" May 22 13:52:55.760: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 22 13:52:57.766: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 22 13:52:57.772: INFO: Ensure that both replica sets have 1 created replica May 22 13:52:57.778: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 22 13:52:57.784: INFO: Updating deployment test-rollover-deployment May 22 13:52:57.784: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 22 13:52:59.807: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 22 13:52:59.815: INFO: Make sure deployment "test-rollover-deployment" is complete May 22 13:52:59.824: INFO: all replica sets need to contain the pod-template-hash label May 22 13:52:59.824: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752378, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, 
loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 13:53:01.834: INFO: all replica sets need to contain the pod-template-hash label May 22 13:53:01.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752381, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 13:53:03.833: INFO: all replica sets need to contain the pod-template-hash label May 22 13:53:03.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752381, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 13:53:05.832: INFO: all replica sets need to contain the pod-template-hash label May 22 13:53:05.832: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752381, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 13:53:07.834: INFO: all replica sets need to contain the pod-template-hash label May 22 13:53:07.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752381, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 13:53:09.834: INFO: all replica sets need to contain the pod-template-hash label May 22 13:53:09.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752381, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752375, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 13:53:11.905: INFO: May 22 13:53:11.906: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 22 13:53:11.911: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-2221,SelfLink:/apis/apps/v1/namespaces/deployment-2221/deployments/test-rollover-deployment,UID:c0a6106d-ef82-4b49-a665-2bd16707b2f6,ResourceVersion:12301025,Generation:2,CreationTimestamp:2020-05-22 13:52:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-22 13:52:55 +0000 UTC 2020-05-22 13:52:55 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-22 13:53:11 +0000 UTC 2020-05-22 13:52:55 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 22 13:53:11.914: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-2221,SelfLink:/apis/apps/v1/namespaces/deployment-2221/replicasets/test-rollover-deployment-854595fc44,UID:81952dbd-ba63-47a1-865d-ce0809418b69,ResourceVersion:12301014,Generation:2,CreationTimestamp:2020-05-22 13:52:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c0a6106d-ef82-4b49-a665-2bd16707b2f6 0xc002a182d7 0xc002a182d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 22 13:53:11.914: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 22 13:53:11.914: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-2221,SelfLink:/apis/apps/v1/namespaces/deployment-2221/replicasets/test-rollover-controller,UID:a7884bb0-1d14-4e53-8f50-e9be6fc24939,ResourceVersion:12301023,Generation:2,CreationTimestamp:2020-05-22 13:52:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c0a6106d-ef82-4b49-a665-2bd16707b2f6 0xc002a18207 0xc002a18208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 22 13:53:11.914: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-2221,SelfLink:/apis/apps/v1/namespaces/deployment-2221/replicasets/test-rollover-deployment-9b8b997cf,UID:a95829d4-bad5-4d78-bddb-2034a3408d88,ResourceVersion:12300979,Generation:2,CreationTimestamp:2020-05-22 13:52:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment c0a6106d-ef82-4b49-a665-2bd16707b2f6 0xc002a183a0 0xc002a183a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 22 13:53:11.916: INFO: Pod "test-rollover-deployment-854595fc44-vnvvh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-vnvvh,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-2221,SelfLink:/api/v1/namespaces/deployment-2221/pods/test-rollover-deployment-854595fc44-vnvvh,UID:834a271b-c1c3-4c0e-9f38-73d0fadae1f0,ResourceVersion:12300991,Generation:0,CreationTimestamp:2020-05-22 13:52:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 81952dbd-ba63-47a1-865d-ce0809418b69 0xc002a18fb7 0xc002a18fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8kp9z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8kp9z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-8kp9z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a19030} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a19050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:52:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:53:01 +0000 UTC } {ContainersReady True 0001-01-01 
00:00:00 +0000 UTC 2020-05-22 13:53:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:52:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.184,StartTime:2020-05-22 13:52:57 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-22 13:53:00 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://f7538ebdd6bb3f0d0379603fdb3f051b384237c542a09d52d3afc6c656e22774}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:53:11.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2221" for this suite. May 22 13:53:17.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:53:18.023: INFO: namespace deployment-2221 deletion completed in 6.104393645s • [SLOW TEST:29.370 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:53:18.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 22 13:53:18.123: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
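Registering the sample API server amounts to creating, alongside the sample-apiserver Deployment and a Service in front of it, an APIService object that tells the kube-aggregator to proxy one API group/version to that Service. A minimal sketch of such a registration; the group, version, and service names here are assumptions modeled on the wardle sample, not the exact objects this test creates:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io         # assumed group/version served by the sample API server
spec:
  group: wardle.k8s.io
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api                   # assumed name of the Service fronting the deployment
    namespace: aggregator-4226
  insecureSkipTLSVerify: true          # sketch only; a real registration pins a caBundle instead

Once the APIService reports Available, requests under /apis/wardle.k8s.io/v1alpha1 are proxied by the kube-apiserver to the extension server.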
May 22 13:53:18.599: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 22 13:53:20.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752398, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752398, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752398, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752398, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 13:53:22.924: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752398, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752398, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752398, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752398, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 13:53:25.651: INFO: Waited 727.056477ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:53:26.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4226" for this suite. 
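The 727ms wait above ends once the extension apiserver actually answers requests; before that, the Deployment stays Available=False because ReadyReplicas is 0, and a replica only counts as ready once its pod's readiness probe (if one is defined) passes. A speculative sketch of those mechanics, with the probe values and image tag assumed rather than taken from the test's manifest:

apiVersion: v1
kind: Pod
metadata:
  name: sample-apiserver-probe-demo    # placeholder; illustrates the readiness gating only
spec:
  containers:
  - name: sample-apiserver
    image: gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10   # assumed from the test's "1.10" naming
    readinessProbe:
      httpGet:
        path: /healthz
        port: 443
        scheme: HTTPS
      initialDelaySeconds: 2
      periodSeconds: 2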
May 22 13:53:32.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:53:32.582: INFO: namespace aggregator-4226 deletion completed in 6.086045778s • [SLOW TEST:14.558 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:53:32.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 22 13:53:32.669: INFO: Waiting up to 5m0s for pod "downward-api-eaa1c0cd-09ec-4a51-a03f-7b00cac5a044" in namespace "downward-api-9001" to be "success or failure" May 22 13:53:32.678: INFO: Pod "downward-api-eaa1c0cd-09ec-4a51-a03f-7b00cac5a044": Phase="Pending", Reason="", readiness=false. Elapsed: 9.044842ms May 22 13:53:34.681: INFO: Pod "downward-api-eaa1c0cd-09ec-4a51-a03f-7b00cac5a044": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012381175s May 22 13:53:36.685: INFO: Pod "downward-api-eaa1c0cd-09ec-4a51-a03f-7b00cac5a044": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016245618s STEP: Saw pod success May 22 13:53:36.685: INFO: Pod "downward-api-eaa1c0cd-09ec-4a51-a03f-7b00cac5a044" satisfied condition "success or failure" May 22 13:53:36.688: INFO: Trying to get logs from node iruya-worker pod downward-api-eaa1c0cd-09ec-4a51-a03f-7b00cac5a044 container dapi-container: STEP: delete the pod May 22 13:53:36.709: INFO: Waiting for pod downward-api-eaa1c0cd-09ec-4a51-a03f-7b00cac5a044 to disappear May 22 13:53:36.739: INFO: Pod downward-api-eaa1c0cd-09ec-4a51-a03f-7b00cac5a044 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:53:36.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9001" for this suite. 
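The downward-api pod above passes by exposing the node's IP to the container as an environment variable resolved from the pod's own status, then asserting on the container's output. A minimal reproduction; the pod name and command are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo              # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "printenv HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP     # filled in by the kubelet when the pod starts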
May 22 13:53:42.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:53:42.832: INFO: namespace downward-api-9001 deletion completed in 6.088987235s • [SLOW TEST:10.250 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:53:42.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args May 22 13:53:42.927: INFO: Waiting up to 5m0s for pod "var-expansion-10ed32bc-1099-4d5a-b124-f2a207c7538d" in namespace "var-expansion-383" to be "success or failure" May 22 13:53:42.935: INFO: Pod "var-expansion-10ed32bc-1099-4d5a-b124-f2a207c7538d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.516511ms May 22 13:53:44.939: INFO: Pod "var-expansion-10ed32bc-1099-4d5a-b124-f2a207c7538d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011884633s May 22 13:53:46.944: INFO: Pod "var-expansion-10ed32bc-1099-4d5a-b124-f2a207c7538d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016731202s STEP: Saw pod success May 22 13:53:46.944: INFO: Pod "var-expansion-10ed32bc-1099-4d5a-b124-f2a207c7538d" satisfied condition "success or failure" May 22 13:53:46.947: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-10ed32bc-1099-4d5a-b124-f2a207c7538d container dapi-container: STEP: delete the pod May 22 13:53:47.104: INFO: Waiting for pod var-expansion-10ed32bc-1099-4d5a-b124-f2a207c7538d to disappear May 22 13:53:47.126: INFO: Pod var-expansion-10ed32bc-1099-4d5a-b124-f2a207c7538d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:53:47.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-383" for this suite. 
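The var-expansion pod exercises Kubernetes' $(VAR) substitution: references in command and args are expanded by the kubelet against the container's declared environment before the process is started, so no shell is required for the expansion. A minimal reproduction with placeholder names:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo             # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: MESSAGE
      value: "test message"
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]          # substituted before exec; $$(MESSAGE) would escape it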
May 22 13:53:53.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:53:53.225: INFO: namespace var-expansion-383 deletion completed in 6.094707087s • [SLOW TEST:10.392 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:53:53.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 22 13:53:53.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-2343' May 22 13:53:53.561: INFO: stderr: "" May 22 13:53:53.561: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 22 13:53:58.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-2343 -o json' May 22 13:53:58.709: INFO: stderr: "" May 22 13:53:58.709: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-22T13:53:53Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-2343\",\n \"resourceVersion\": \"12301283\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2343/pods/e2e-test-nginx-pod\",\n \"uid\": \"dc046f4c-85aa-4326-ac73-80a77e6cd38c\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-9kcxh\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n 
\"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-9kcxh\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-9kcxh\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-22T13:53:53Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-22T13:53:57Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-22T13:53:57Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-22T13:53:53Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://209d5ae3fffe108374b7efca690ad95e8ef2f86312061e4ae2e6100ed62c2c6f\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-22T13:53:56Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.186\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-22T13:53:53Z\"\n }\n}\n" STEP: replace the image in the pod May 22 13:53:58.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2343' May 22 13:53:59.078: INFO: stderr: "" May 22 13:53:59.078: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 May 22 13:53:59.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2343' May 22 13:54:12.172: INFO: stderr: "" May 22 13:54:12.172: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:54:12.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2343" for this suite. 
May 22 13:54:18.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:54:18.278: INFO: namespace kubectl-2343 deletion completed in 6.089181449s • [SLOW TEST:25.053 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:54:18.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-67a58ab2-86b4-4531-8be2-587962fed63a STEP: Creating a pod to test consume configMaps May 22 13:54:18.419: INFO: Waiting up to 5m0s for pod "pod-configmaps-a2c77e1c-bb8a-48d9-99e1-08076c2abad0" in namespace "configmap-654" to be "success or failure" May 22 13:54:18.432: INFO: Pod "pod-configmaps-a2c77e1c-bb8a-48d9-99e1-08076c2abad0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.047638ms May 22 13:54:20.437: INFO: Pod "pod-configmaps-a2c77e1c-bb8a-48d9-99e1-08076c2abad0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017748838s May 22 13:54:22.441: INFO: Pod "pod-configmaps-a2c77e1c-bb8a-48d9-99e1-08076c2abad0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022069894s STEP: Saw pod success May 22 13:54:22.442: INFO: Pod "pod-configmaps-a2c77e1c-bb8a-48d9-99e1-08076c2abad0" satisfied condition "success or failure" May 22 13:54:22.444: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-a2c77e1c-bb8a-48d9-99e1-08076c2abad0 container configmap-volume-test: STEP: delete the pod May 22 13:54:22.477: INFO: Waiting for pod pod-configmaps-a2c77e1c-bb8a-48d9-99e1-08076c2abad0 to disappear May 22 13:54:22.499: INFO: Pod pod-configmaps-a2c77e1c-bb8a-48d9-99e1-08076c2abad0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:54:22.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-654" for this suite. 
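defaultMode on a configMap volume sets the permission bits applied to every file projected into the mount, which is what the configmap-volume-test container checks before exiting. A minimal reproduction; the names and the 0400 mode are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo            # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/configmap-volume"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume      # assumed to already exist with at least one key
      defaultMode: 0400                # octal; files show up as -r-------- in the container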
May 22 13:54:28.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:54:28.615: INFO: namespace configmap-654 deletion completed in 6.111158933s • [SLOW TEST:10.336 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:54:28.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-dea80f8a-d1d3-4ab9-8fcf-6b6707d0d5ee STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:54:34.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9287" for this suite. 
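A ConfigMap carries UTF-8 strings under data and arbitrary bytes, base64-encoded, under binaryData; the test above mounts one of each kind and waits for both to round-trip through the volume intact. A sketch with made-up keys and bytes:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo          # placeholder name
data:
  text-data: "some text"
binaryData:
  binary-data: AQID                    # base64 for the raw bytes 0x01 0x02 0x03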
May 22 13:55:06.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:55:06.878: INFO: namespace configmap-9287 deletion completed in 32.120195755s • [SLOW TEST:38.262 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:55:06.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-mjqx STEP: Creating a pod to test atomic-volume-subpath May 22 13:55:06.958: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-mjqx" in namespace "subpath-4006" to be "success or failure" May 22 13:55:07.003: INFO: Pod "pod-subpath-test-projected-mjqx": Phase="Pending", Reason="", readiness=false. Elapsed: 45.466301ms May 22 13:55:09.010: INFO: Pod "pod-subpath-test-projected-mjqx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05192713s May 22 13:55:11.014: INFO: Pod "pod-subpath-test-projected-mjqx": Phase="Running", Reason="", readiness=true. Elapsed: 4.056579678s May 22 13:55:13.019: INFO: Pod "pod-subpath-test-projected-mjqx": Phase="Running", Reason="", readiness=true. Elapsed: 6.060771006s May 22 13:55:15.023: INFO: Pod "pod-subpath-test-projected-mjqx": Phase="Running", Reason="", readiness=true. Elapsed: 8.064820523s May 22 13:55:17.027: INFO: Pod "pod-subpath-test-projected-mjqx": Phase="Running", Reason="", readiness=true. Elapsed: 10.069518698s May 22 13:55:19.032: INFO: Pod "pod-subpath-test-projected-mjqx": Phase="Running", Reason="", readiness=true. Elapsed: 12.073669853s May 22 13:55:21.036: INFO: Pod "pod-subpath-test-projected-mjqx": Phase="Running", Reason="", readiness=true. Elapsed: 14.078209971s May 22 13:55:23.039: INFO: Pod "pod-subpath-test-projected-mjqx": Phase="Running", Reason="", readiness=true. Elapsed: 16.081506891s May 22 13:55:25.043: INFO: Pod "pod-subpath-test-projected-mjqx": Phase="Running", Reason="", readiness=true. Elapsed: 18.084989317s May 22 13:55:27.048: INFO: Pod "pod-subpath-test-projected-mjqx": Phase="Running", Reason="", readiness=true. Elapsed: 20.08967021s May 22 13:55:29.052: INFO: Pod "pod-subpath-test-projected-mjqx": Phase="Running", Reason="", readiness=true. Elapsed: 22.094358722s May 22 13:55:31.056: INFO: Pod "pod-subpath-test-projected-mjqx": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.098652376s STEP: Saw pod success May 22 13:55:31.057: INFO: Pod "pod-subpath-test-projected-mjqx" satisfied condition "success or failure" May 22 13:55:31.060: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-mjqx container test-container-subpath-projected-mjqx: STEP: delete the pod May 22 13:55:31.076: INFO: Waiting for pod pod-subpath-test-projected-mjqx to disappear May 22 13:55:31.129: INFO: Pod pod-subpath-test-projected-mjqx no longer exists STEP: Deleting pod pod-subpath-test-projected-mjqx May 22 13:55:31.129: INFO: Deleting pod "pod-subpath-test-projected-mjqx" in namespace "subpath-4006" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:55:31.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4006" for this suite. May 22 13:55:37.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:55:37.224: INFO: namespace subpath-4006 deletion completed in 6.08813771s • [SLOW TEST:30.346 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:55:37.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-d7vj STEP: Creating a pod to test atomic-volume-subpath May 22 13:55:37.312: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-d7vj" in namespace "subpath-1732" to be "success or failure" May 22 13:55:37.321: INFO: Pod "pod-subpath-test-configmap-d7vj": Phase="Pending", Reason="", readiness=false. Elapsed: 7.991983ms May 22 13:55:39.324: INFO: Pod "pod-subpath-test-configmap-d7vj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011782344s May 22 13:55:41.329: INFO: Pod "pod-subpath-test-configmap-d7vj": Phase="Running", Reason="", readiness=true. Elapsed: 4.016313199s May 22 13:55:43.333: INFO: Pod "pod-subpath-test-configmap-d7vj": Phase="Running", Reason="", readiness=true. Elapsed: 6.019957184s May 22 13:55:45.336: INFO: Pod "pod-subpath-test-configmap-d7vj": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.023151119s May 22 13:55:47.340: INFO: Pod "pod-subpath-test-configmap-d7vj": Phase="Running", Reason="", readiness=true. Elapsed: 10.026935299s May 22 13:55:49.349: INFO: Pod "pod-subpath-test-configmap-d7vj": Phase="Running", Reason="", readiness=true. Elapsed: 12.036617658s May 22 13:55:51.353: INFO: Pod "pod-subpath-test-configmap-d7vj": Phase="Running", Reason="", readiness=true. Elapsed: 14.040531688s May 22 13:55:53.357: INFO: Pod "pod-subpath-test-configmap-d7vj": Phase="Running", Reason="", readiness=true. Elapsed: 16.044805206s May 22 13:55:55.362: INFO: Pod "pod-subpath-test-configmap-d7vj": Phase="Running", Reason="", readiness=true. Elapsed: 18.049234792s May 22 13:55:57.366: INFO: Pod "pod-subpath-test-configmap-d7vj": Phase="Running", Reason="", readiness=true. Elapsed: 20.053440211s May 22 13:55:59.370: INFO: Pod "pod-subpath-test-configmap-d7vj": Phase="Running", Reason="", readiness=true. Elapsed: 22.057810822s May 22 13:56:01.375: INFO: Pod "pod-subpath-test-configmap-d7vj": Phase="Running", Reason="", readiness=true. Elapsed: 24.062441929s May 22 13:56:03.381: INFO: Pod "pod-subpath-test-configmap-d7vj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.068006822s STEP: Saw pod success May 22 13:56:03.381: INFO: Pod "pod-subpath-test-configmap-d7vj" satisfied condition "success or failure" May 22 13:56:03.383: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-d7vj container test-container-subpath-configmap-d7vj: STEP: delete the pod May 22 13:56:03.406: INFO: Waiting for pod pod-subpath-test-configmap-d7vj to disappear May 22 13:56:03.410: INFO: Pod pod-subpath-test-configmap-d7vj no longer exists STEP: Deleting pod pod-subpath-test-configmap-d7vj May 22 13:56:03.410: INFO: Deleting pod "pod-subpath-test-configmap-d7vj" in namespace "subpath-1732" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:56:03.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1732" for this suite. 
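Both Subpath tests above exercise volumeMounts[].subPath, which mounts a single path from inside a volume at mountPath instead of the whole volume, and can do so over a path that already exists in the image (the "mountPath of existing file" case). "Atomic writer volumes" refers to the configmap/secret/downwardAPI/projected volume types, whose contents are written via atomic symlink swaps. A minimal shape, with all names hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo               # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/target-file"]
    volumeMounts:
    - name: config
      mountPath: /etc/target-file      # may already exist in the image
      subPath: my-key                  # mounts only this key from the volume
  volumes:
  - name: config
    configMap:
      name: subpath-configmap          # assumed ConfigMap containing the key my-key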
May 22 13:56:09.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:56:09.504: INFO: namespace subpath-1732 deletion completed in 6.089943622s • [SLOW TEST:32.280 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:56:09.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 22 13:56:09.553: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:56:16.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7982" for this suite. 
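With restartPolicy: Never a failing init container is not retried: the pod transitions straight to Failed and the regular containers are never started, which is exactly what the InitContainer test asserts. A minimal reproduction with placeholder names:

apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo                 # placeholder name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]    # fails once, so the whole pod fails
  containers:
  - name: run1
    image: busybox:1.29
    command: ["sh", "-c", "echo should never run"]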
May 22 13:56:22.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:56:22.171: INFO: namespace init-container-7982 deletion completed in 6.106692891s • [SLOW TEST:12.667 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:56:22.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the busybox-main-container May 22 13:56:28.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-66a8b55f-243b-47f4-88af-e1d01b91a7de -c busybox-main-container --namespace=emptydir-1458 -- cat /usr/share/volumeshare/shareddata.txt' May 22 13:56:28.496: INFO: stderr: "I0522 13:56:28.405938 2618 log.go:172] (0xc00090a0b0) (0xc0007880a0) Create stream\nI0522 13:56:28.405990 2618 log.go:172] (0xc00090a0b0) (0xc0007880a0) Stream added, broadcasting: 1\nI0522 13:56:28.407966 2618 log.go:172] (0xc00090a0b0) Reply frame received for 1\nI0522 13:56:28.408034 2618 log.go:172] (0xc00090a0b0) (0xc000920000) Create stream\nI0522 13:56:28.408066 2618 log.go:172] (0xc00090a0b0) (0xc000920000) Stream added, broadcasting: 3\nI0522 13:56:28.408877 2618 log.go:172] (0xc00090a0b0) Reply frame received for 3\nI0522 13:56:28.408938 2618 log.go:172] (0xc00090a0b0) (0xc0005b0320) Create stream\nI0522 13:56:28.408962 2618 log.go:172] (0xc00090a0b0) (0xc0005b0320) Stream added, broadcasting: 5\nI0522 13:56:28.409868 2618 log.go:172] (0xc00090a0b0) Reply frame received for 5\nI0522 13:56:28.489497 2618 log.go:172] (0xc00090a0b0) Data frame received for 3\nI0522 13:56:28.489519 2618 log.go:172] (0xc000920000) (3) Data frame handling\nI0522 13:56:28.489535 2618 log.go:172] (0xc000920000) (3) Data frame sent\nI0522 13:56:28.489544 2618 log.go:172] (0xc00090a0b0) Data frame received for 3\nI0522 13:56:28.489550 2618 log.go:172] (0xc000920000) (3) Data frame handling\nI0522 13:56:28.489823 2618 log.go:172] (0xc00090a0b0) Data frame received for 5\nI0522 13:56:28.489836 2618 log.go:172] (0xc0005b0320) (5) Data frame handling\nI0522 13:56:28.491700 2618 log.go:172] (0xc00090a0b0) Data frame received for 1\nI0522 13:56:28.491715 2618 log.go:172] (0xc0007880a0) (1) Data frame handling\nI0522 13:56:28.491727 2618 log.go:172] (0xc0007880a0) (1) Data frame sent\nI0522 13:56:28.491840 2618 log.go:172] (0xc00090a0b0) (0xc0007880a0) Stream removed, broadcasting: 1\nI0522 13:56:28.491896 2618 log.go:172] (0xc00090a0b0) Go away 
received\nI0522 13:56:28.492117 2618 log.go:172] (0xc00090a0b0) (0xc0007880a0) Stream removed, broadcasting: 1\nI0522 13:56:28.492135 2618 log.go:172] (0xc00090a0b0) (0xc000920000) Stream removed, broadcasting: 3\nI0522 13:56:28.492141 2618 log.go:172] (0xc00090a0b0) (0xc0005b0320) Stream removed, broadcasting: 5\n" May 22 13:56:28.496: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:56:28.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1458" for this suite. May 22 13:56:34.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:56:34.595: INFO: namespace emptydir-1458 deletion completed in 6.094953636s • [SLOW TEST:12.424 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:56:34.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6827 STEP: creating a selector STEP: Creating the service pods in kubernetes May 22 13:56:34.671: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 22 13:57:02.782: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.245:8080/dial?request=hostName&protocol=udp&host=10.244.1.244&port=8081&tries=1'] Namespace:pod-network-test-6827 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 13:57:02.782: INFO: >>> kubeConfig: /root/.kube/config I0522 13:57:02.822570 6 log.go:172] (0xc001d0ec60) (0xc002152d20) Create stream I0522 13:57:02.822606 6 log.go:172] (0xc001d0ec60) (0xc002152d20) Stream added, broadcasting: 1 I0522 13:57:02.831806 6 log.go:172] (0xc001d0ec60) Reply frame received for 1 I0522 13:57:02.831868 6 log.go:172] (0xc001d0ec60) (0xc002b892c0) Create stream I0522 13:57:02.831896 6 log.go:172] (0xc001d0ec60) (0xc002b892c0) Stream added, broadcasting: 3 I0522 13:57:02.833972 6 log.go:172] (0xc001d0ec60) Reply frame received for 3 I0522 13:57:02.834086 6 log.go:172] (0xc001d0ec60) (0xc002152dc0) Create stream I0522 13:57:02.834133 6 log.go:172] (0xc001d0ec60) (0xc002152dc0) Stream added, broadcasting: 5 I0522 13:57:02.836188 6 log.go:172] (0xc001d0ec60) Reply frame received for 5 I0522 13:57:02.970476 6 log.go:172] (0xc001d0ec60) 
Data frame received for 3 I0522 13:57:02.970522 6 log.go:172] (0xc002b892c0) (3) Data frame handling I0522 13:57:02.970552 6 log.go:172] (0xc002b892c0) (3) Data frame sent I0522 13:57:02.971092 6 log.go:172] (0xc001d0ec60) Data frame received for 3 I0522 13:57:02.971127 6 log.go:172] (0xc001d0ec60) Data frame received for 5 I0522 13:57:02.971177 6 log.go:172] (0xc002152dc0) (5) Data frame handling I0522 13:57:02.971226 6 log.go:172] (0xc002b892c0) (3) Data frame handling I0522 13:57:02.973075 6 log.go:172] (0xc001d0ec60) Data frame received for 1 I0522 13:57:02.973319 6 log.go:172] (0xc002152d20) (1) Data frame handling I0522 13:57:02.973458 6 log.go:172] (0xc002152d20) (1) Data frame sent I0522 13:57:02.973530 6 log.go:172] (0xc001d0ec60) (0xc002152d20) Stream removed, broadcasting: 1 I0522 13:57:02.973567 6 log.go:172] (0xc001d0ec60) Go away received I0522 13:57:02.973720 6 log.go:172] (0xc001d0ec60) (0xc002152d20) Stream removed, broadcasting: 1 I0522 13:57:02.973736 6 log.go:172] (0xc001d0ec60) (0xc002b892c0) Stream removed, broadcasting: 3 I0522 13:57:02.973743 6 log.go:172] (0xc001d0ec60) (0xc002152dc0) Stream removed, broadcasting: 5 May 22 13:57:02.973: INFO: Waiting for endpoints: map[] May 22 13:57:02.977: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.245:8080/dial?request=hostName&protocol=udp&host=10.244.2.189&port=8081&tries=1'] Namespace:pod-network-test-6827 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 13:57:02.977: INFO: >>> kubeConfig: /root/.kube/config I0522 13:57:03.012351 6 log.go:172] (0xc002d9cbb0) (0xc001e86820) Create stream I0522 13:57:03.012376 6 log.go:172] (0xc002d9cbb0) (0xc001e86820) Stream added, broadcasting: 1 I0522 13:57:03.015672 6 log.go:172] (0xc002d9cbb0) Reply frame received for 1 I0522 13:57:03.015731 6 log.go:172] (0xc002d9cbb0) (0xc002152e60) Create stream I0522 13:57:03.015755 6 log.go:172] (0xc002d9cbb0) (0xc002152e60) Stream added, broadcasting: 3 I0522 13:57:03.016727 6 log.go:172] (0xc002d9cbb0) Reply frame received for 3 I0522 13:57:03.016772 6 log.go:172] (0xc002d9cbb0) (0xc0025bf180) Create stream I0522 13:57:03.016788 6 log.go:172] (0xc002d9cbb0) (0xc0025bf180) Stream added, broadcasting: 5 I0522 13:57:03.018242 6 log.go:172] (0xc002d9cbb0) Reply frame received for 5 I0522 13:57:03.077780 6 log.go:172] (0xc002d9cbb0) Data frame received for 3 I0522 13:57:03.077818 6 log.go:172] (0xc002152e60) (3) Data frame handling I0522 13:57:03.077841 6 log.go:172] (0xc002152e60) (3) Data frame sent I0522 13:57:03.078240 6 log.go:172] (0xc002d9cbb0) Data frame received for 5 I0522 13:57:03.078258 6 log.go:172] (0xc0025bf180) (5) Data frame handling I0522 13:57:03.078320 6 log.go:172] (0xc002d9cbb0) Data frame received for 3 I0522 13:57:03.078333 6 log.go:172] (0xc002152e60) (3) Data frame handling I0522 13:57:03.079904 6 log.go:172] (0xc002d9cbb0) Data frame received for 1 I0522 13:57:03.079919 6 log.go:172] (0xc001e86820) (1) Data frame handling I0522 13:57:03.079925 6 log.go:172] (0xc001e86820) (1) Data frame sent I0522 13:57:03.079933 6 log.go:172] (0xc002d9cbb0) (0xc001e86820) Stream removed, broadcasting: 1 I0522 13:57:03.080000 6 log.go:172] (0xc002d9cbb0) (0xc001e86820) Stream removed, broadcasting: 1 I0522 13:57:03.080016 6 log.go:172] (0xc002d9cbb0) (0xc002152e60) Stream removed, broadcasting: 3 I0522 13:57:03.080188 6 log.go:172] (0xc002d9cbb0) Go away received I0522 13:57:03.080228 6 log.go:172] (0xc002d9cbb0) (0xc0025bf180) 
Stream removed, broadcasting: 5 May 22 13:57:03.080: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:57:03.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6827" for this suite. May 22 13:57:27.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:57:27.190: INFO: namespace pod-network-test-6827 deletion completed in 24.105188496s • [SLOW TEST:52.594 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:57:27.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 22 13:57:27.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9277' May 22 13:57:27.360: INFO: stderr: "" May 22 13:57:27.360: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 May 22 13:57:27.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9277' May 22 13:57:31.887: INFO: stderr: "" May 22 13:57:31.887: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:57:31.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9277" for this suite. 
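With --restart=Never (combined here with the deprecated, later removed --generator=run-pod/v1 flag of 1.15-era kubectl), kubectl run creates a bare Pod rather than a Deployment or Job. The generated object is roughly:

# Approximate equivalent of:
#   kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
#     --image=docker.io/library/nginx:1.14-alpine
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod            # label derived from the pod name by kubectl
spec:
  restartPolicy: Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine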
May 22 13:57:37.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:57:38.010: INFO: namespace kubectl-9277 deletion completed in 6.118491201s • [SLOW TEST:10.820 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:57:38.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 13:57:38.055: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 22 13:57:38.071: INFO: Pod name sample-pod: Found 0 pods out of 1 May 22 13:57:43.075: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 22 13:57:43.075: INFO: Creating deployment "test-rolling-update-deployment" May 22 13:57:43.080: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 22 13:57:43.086: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 22 13:57:45.105: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 22 13:57:45.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752663, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752663, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752663, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725752663, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 13:57:47.119: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 22 13:57:47.128: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-6226,SelfLink:/apis/apps/v1/namespaces/deployment-6226/deployments/test-rolling-update-deployment,UID:c54a8d2f-bda0-4f40-b452-1e2f0e76d505,ResourceVersion:12302055,Generation:1,CreationTimestamp:2020-05-22 13:57:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-22 13:57:43 +0000 UTC 2020-05-22 13:57:43 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-22 13:57:46 +0000 UTC 2020-05-22 13:57:43 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 22 13:57:47.131: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-6226,SelfLink:/apis/apps/v1/namespaces/deployment-6226/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:eab17937-5e10-4917-8dff-f62faab1ea37,ResourceVersion:12302043,Generation:1,CreationTimestamp:2020-05-22 13:57:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c54a8d2f-bda0-4f40-b452-1e2f0e76d505 0xc002cb5307 0xc002cb5308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 22 13:57:47.131: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 22 13:57:47.131: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-6226,SelfLink:/apis/apps/v1/namespaces/deployment-6226/replicasets/test-rolling-update-controller,UID:3021d939-f65a-44b5-b685-286d95953fde,ResourceVersion:12302053,Generation:2,CreationTimestamp:2020-05-22 13:57:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 
2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c54a8d2f-bda0-4f40-b452-1e2f0e76d505 0xc002cb5237 0xc002cb5238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 22 13:57:47.134: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-d97k4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-d97k4,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-6226,SelfLink:/api/v1/namespaces/deployment-6226/pods/test-rolling-update-deployment-79f6b9d75c-d97k4,UID:94af4a5c-9760-43c5-af42-db9c8730fc09,ResourceVersion:12302042,Generation:0,CreationTimestamp:2020-05-22 13:57:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c eab17937-5e10-4917-8dff-f62faab1ea37 0xc002cb5be7 0xc002cb5be8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9rxc4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9rxc4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-9rxc4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002cb5c60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002cb5c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:57:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:57:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:57:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 13:57:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.246,StartTime:2020-05-22 13:57:43 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-22 13:57:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://74b84fb6c1e87ca0f26dd704a313eb2e68aae239b57ec3a87146fe36305810f0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:57:47.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6226" for this suite. 
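The strategy dumped above, RollingUpdate with maxUnavailable and maxSurge both at 25%, is the Deployment default and can be written out explicitly. A minimal sketch of a manifest with the same shape, not the test's own fixture (the metadata is illustrative):

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment   # illustrative; mirrors the log
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
$ kubectl rollout status deployment/test-rolling-update-deployment

Because the selector also matches the pods of the pre-existing replica set, the Deployment adopts that replica set as an old revision and scales it to zero while the new template's replica set comes up, which is exactly the Replicas:*0 state shown for "test-rolling-update-controller" above.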
May 22 13:57:55.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:57:55.237: INFO: namespace deployment-6226 deletion completed in 8.099648577s • [SLOW TEST:17.227 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:57:55.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:57:55.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6575" for this suite. 
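The test body here is terse in the log: the scenario is a pod whose container command always fails, which must nonetheless be deletable. A rough equivalent with kubectl, where the pod name and image are illustrative rather than the test's fixture:

$ kubectl run bin-false --restart=Never --image=busybox \
    --namespace=kubelet-test-6575 -- /bin/false   # container exits non-zero immediately
$ kubectl delete pod bin-false --namespace=kubelet-test-6575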
May 22 13:58:01.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:58:01.537: INFO: namespace kubelet-test-6575 deletion completed in 6.072720917s • [SLOW TEST:6.300 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:58:01.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9705.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9705.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9705.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9705.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 22 13:58:09.661: INFO: DNS probes using dns-test-54417681-6a53-4a04-8286-d3fb484df59e succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9705.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9705.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9705.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9705.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 22 13:58:15.816: INFO: File wheezy_udp@dns-test-service-3.dns-9705.svc.cluster.local from pod dns-9705/dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 contains 'foo.example.com. ' instead of 'bar.example.com.' May 22 13:58:15.820: INFO: File jessie_udp@dns-test-service-3.dns-9705.svc.cluster.local from pod dns-9705/dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 22 13:58:15.820: INFO: Lookups using dns-9705/dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 failed for: [wheezy_udp@dns-test-service-3.dns-9705.svc.cluster.local jessie_udp@dns-test-service-3.dns-9705.svc.cluster.local] May 22 13:58:20.825: INFO: File wheezy_udp@dns-test-service-3.dns-9705.svc.cluster.local from pod dns-9705/dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 contains 'foo.example.com. ' instead of 'bar.example.com.' May 22 13:58:20.828: INFO: File jessie_udp@dns-test-service-3.dns-9705.svc.cluster.local from pod dns-9705/dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 contains 'foo.example.com. ' instead of 'bar.example.com.' May 22 13:58:20.828: INFO: Lookups using dns-9705/dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 failed for: [wheezy_udp@dns-test-service-3.dns-9705.svc.cluster.local jessie_udp@dns-test-service-3.dns-9705.svc.cluster.local] May 22 13:58:25.826: INFO: File wheezy_udp@dns-test-service-3.dns-9705.svc.cluster.local from pod dns-9705/dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 contains 'foo.example.com. ' instead of 'bar.example.com.' May 22 13:58:25.830: INFO: File jessie_udp@dns-test-service-3.dns-9705.svc.cluster.local from pod dns-9705/dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 contains 'foo.example.com. ' instead of 'bar.example.com.' May 22 13:58:25.830: INFO: Lookups using dns-9705/dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 failed for: [wheezy_udp@dns-test-service-3.dns-9705.svc.cluster.local jessie_udp@dns-test-service-3.dns-9705.svc.cluster.local] May 22 13:58:30.825: INFO: File wheezy_udp@dns-test-service-3.dns-9705.svc.cluster.local from pod dns-9705/dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 contains 'foo.example.com. ' instead of 'bar.example.com.' May 22 13:58:30.829: INFO: File jessie_udp@dns-test-service-3.dns-9705.svc.cluster.local from pod dns-9705/dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 contains 'foo.example.com. ' instead of 'bar.example.com.' May 22 13:58:30.829: INFO: Lookups using dns-9705/dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 failed for: [wheezy_udp@dns-test-service-3.dns-9705.svc.cluster.local jessie_udp@dns-test-service-3.dns-9705.svc.cluster.local] May 22 13:58:35.825: INFO: File wheezy_udp@dns-test-service-3.dns-9705.svc.cluster.local from pod dns-9705/dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 contains 'foo.example.com. ' instead of 'bar.example.com.' May 22 13:58:35.828: INFO: File jessie_udp@dns-test-service-3.dns-9705.svc.cluster.local from pod dns-9705/dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 22 13:58:35.828: INFO: Lookups using dns-9705/dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 failed for: [wheezy_udp@dns-test-service-3.dns-9705.svc.cluster.local jessie_udp@dns-test-service-3.dns-9705.svc.cluster.local] May 22 13:58:40.829: INFO: DNS probes using dns-test-763db7a0-7a90-42cb-bc7c-c503dff0f159 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9705.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9705.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9705.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9705.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 22 13:58:49.399: INFO: DNS probes using dns-test-143b6052-ed36-4151-9020-8d6e8590e154 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:58:49.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9705" for this suite. May 22 13:58:55.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:58:55.737: INFO: namespace dns-9705 deletion completed in 6.178092295s • [SLOW TEST:54.199 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:58:55.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 13:58:55.943: INFO: Waiting up to 5m0s for pod "downwardapi-volume-354cd271-8fd3-4a54-8549-1628e4390c13" in namespace "downward-api-1625" to be "success or failure" May 22 13:58:55.947: INFO: Pod "downwardapi-volume-354cd271-8fd3-4a54-8549-1628e4390c13": Phase="Pending", Reason="", readiness=false. Elapsed: 3.956729ms May 22 13:58:57.951: INFO: Pod "downwardapi-volume-354cd271-8fd3-4a54-8549-1628e4390c13": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007684253s May 22 13:58:59.955: INFO: Pod "downwardapi-volume-354cd271-8fd3-4a54-8549-1628e4390c13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011309222s STEP: Saw pod success May 22 13:58:59.955: INFO: Pod "downwardapi-volume-354cd271-8fd3-4a54-8549-1628e4390c13" satisfied condition "success or failure" May 22 13:58:59.958: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-354cd271-8fd3-4a54-8549-1628e4390c13 container client-container: STEP: delete the pod May 22 13:59:00.156: INFO: Waiting for pod downwardapi-volume-354cd271-8fd3-4a54-8549-1628e4390c13 to disappear May 22 13:59:00.282: INFO: Pod downwardapi-volume-354cd271-8fd3-4a54-8549-1628e4390c13 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:59:00.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1625" for this suite. May 22 13:59:06.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:59:06.435: INFO: namespace downward-api-1625 deletion completed in 6.150138424s • [SLOW TEST:10.698 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:59:06.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0522 13:59:47.532039 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 22 13:59:47.532: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 13:59:47.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7052" for this suite. May 22 13:59:55.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 13:59:55.633: INFO: namespace gc-7052 deletion completed in 8.098487192s • [SLOW TEST:49.197 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 13:59:55.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 13:59:56.109: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7fc289e6-6677-403b-81b6-4c6113892d2d" in namespace "projected-8837" to be "success or failure" May 22 13:59:56.421: INFO: Pod "downwardapi-volume-7fc289e6-6677-403b-81b6-4c6113892d2d": Phase="Pending", Reason="", readiness=false. Elapsed: 311.671376ms May 22 13:59:58.425: INFO: Pod "downwardapi-volume-7fc289e6-6677-403b-81b6-4c6113892d2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315765366s May 22 14:00:00.429: INFO: Pod "downwardapi-volume-7fc289e6-6677-403b-81b6-4c6113892d2d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.31995418s STEP: Saw pod success May 22 14:00:00.429: INFO: Pod "downwardapi-volume-7fc289e6-6677-403b-81b6-4c6113892d2d" satisfied condition "success or failure" May 22 14:00:00.432: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7fc289e6-6677-403b-81b6-4c6113892d2d container client-container: STEP: delete the pod May 22 14:00:00.590: INFO: Waiting for pod downwardapi-volume-7fc289e6-6677-403b-81b6-4c6113892d2d to disappear May 22 14:00:00.596: INFO: Pod downwardapi-volume-7fc289e6-6677-403b-81b6-4c6113892d2d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:00:00.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8837" for this suite. May 22 14:00:06.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:00:06.699: INFO: namespace projected-8837 deletion completed in 6.100047173s • [SLOW TEST:11.065 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:00:06.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2010, will wait for the garbage collector to delete the pods May 22 14:00:12.860: INFO: Deleting Job.batch foo took: 6.670749ms May 22 14:00:13.160: INFO: Terminating Job.batch foo pods took: 300.244367ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:00:52.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2010" for this suite. 
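The Job test creates a parallel Job, deletes it, and then waits for the garbage collector to remove the pods, which accounts for the roughly 40-second gap before teardown above. A sketch of the same flow (the job name mirrors the log; the image and command are illustrative):

$ kubectl create job foo --namespace=job-2010 --image=busybox -- sleep 3600
$ kubectl delete job foo --namespace=job-2010             # cascading delete; pods are GC'd
$ kubectl get pods --namespace=job-2010 -l job-name=foo   # should eventually come back empty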
May 22 14:00:58.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:00:58.385: INFO: namespace job-2010 deletion completed in 6.114535507s • [SLOW TEST:51.686 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:00:58.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 14:00:58.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 22 14:00:58.594: INFO: stderr: "" May 22 14:00:58.594: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:00:58.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1385" for this suite. 
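The version check is directly reproducible: kubectl version prints the two version.Info structs seen in stdout above, and the test simply asserts that both the client and server blocks are present. For machine-readable output there is also an -o flag:

$ kubectl version
$ kubectl version -o json   # same data as JSON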
May 22 14:01:04.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:01:04.704: INFO: namespace kubectl-1385 deletion completed in 6.10499471s • [SLOW TEST:6.318 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:01:04.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-28f06ff5-a651-4a7f-9f29-172abea34683 STEP: Creating secret with name s-test-opt-upd-b9b16d8e-0592-4b1a-a064-cf2e98c3ac96 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-28f06ff5-a651-4a7f-9f29-172abea34683 STEP: Updating secret s-test-opt-upd-b9b16d8e-0592-4b1a-a064-cf2e98c3ac96 STEP: Creating secret with name s-test-opt-create-4e5af6a2-b384-408f-8feb-c4551e1ce719 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:01:14.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2685" for this suite. 
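This test mounts several secrets into a single projected volume with optional: true, then deletes one, updates another, and creates a third, expecting the kubelet to refresh the mounted files in place. A minimal sketch of such a volume; the pod and secret names are illustrative, shortened from the log's generated names:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: s-test-opt-del
          optional: true   # pod stays healthy even after this secret is deleted
      - secret:
          name: s-test-opt-upd
          optional: true
EOF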
May 22 14:01:36.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:01:36.959: INFO: namespace projected-2685 deletion completed in 22.084613216s • [SLOW TEST:32.255 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:01:36.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 14:01:36.995: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21487ba3-5c4e-494d-99a9-f9ce7e1af75f" in namespace "downward-api-8200" to be "success or failure" May 22 14:01:37.022: INFO: Pod "downwardapi-volume-21487ba3-5c4e-494d-99a9-f9ce7e1af75f": Phase="Pending", Reason="", readiness=false. Elapsed: 27.126449ms May 22 14:01:39.025: INFO: Pod "downwardapi-volume-21487ba3-5c4e-494d-99a9-f9ce7e1af75f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030457654s May 22 14:01:41.029: INFO: Pod "downwardapi-volume-21487ba3-5c4e-494d-99a9-f9ce7e1af75f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034280535s STEP: Saw pod success May 22 14:01:41.029: INFO: Pod "downwardapi-volume-21487ba3-5c4e-494d-99a9-f9ce7e1af75f" satisfied condition "success or failure" May 22 14:01:41.032: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-21487ba3-5c4e-494d-99a9-f9ce7e1af75f container client-container: STEP: delete the pod May 22 14:01:41.085: INFO: Waiting for pod downwardapi-volume-21487ba3-5c4e-494d-99a9-f9ce7e1af75f to disappear May 22 14:01:41.113: INFO: Pod downwardapi-volume-21487ba3-5c4e-494d-99a9-f9ce7e1af75f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:01:41.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8200" for this suite. 
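The point of this variant is that the container declares no CPU limit, so a downwardAPI resourceFieldRef for limits.cpu falls back to the node's allocatable CPU, and the test reads the mounted file to check that value. A minimal sketch with illustrative names:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]   # no resources.limits set on purpose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
$ kubectl logs downward-cpu-demo   # prints the node-allocatable CPU count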
May 22 14:01:47.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:01:47.220: INFO: namespace downward-api-8200 deletion completed in 6.102887878s • [SLOW TEST:10.260 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:01:47.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0522 14:01:57.339062 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 22 14:01:57.339: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:01:57.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6474" for this suite. 
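This is the companion of the earlier "orphan pods" garbage-collector test: here the replication controller is deleted without the orphan option, so the collector removes its pods as well. With kubectl the two behaviours map onto cascade modes (the rc name is illustrative):

$ kubectl delete rc my-rc --cascade=false   # v1.15-era spelling: orphan the pods
$ kubectl delete rc my-rc                   # default: dependents are garbage-collected

On current kubectl the first form is spelled --cascade=orphan.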
May 22 14:02:03.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:02:03.437: INFO: namespace gc-6474 deletion completed in 6.095451815s • [SLOW TEST:16.217 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:02:03.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 22 14:02:08.067: INFO: Successfully updated pod "pod-update-activedeadlineseconds-54974368-74ae-4f02-a42c-11d701dab5af" May 22 14:02:08.067: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-54974368-74ae-4f02-a42c-11d701dab5af" in namespace "pods-5669" to be "terminated due to deadline exceeded" May 22 14:02:08.089: INFO: Pod "pod-update-activedeadlineseconds-54974368-74ae-4f02-a42c-11d701dab5af": Phase="Running", Reason="", readiness=true. Elapsed: 21.628281ms May 22 14:02:10.094: INFO: Pod "pod-update-activedeadlineseconds-54974368-74ae-4f02-a42c-11d701dab5af": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.026303748s May 22 14:02:10.094: INFO: Pod "pod-update-activedeadlineseconds-54974368-74ae-4f02-a42c-11d701dab5af" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:02:10.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5669" for this suite. 
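The "update" in this test is a patch of spec.activeDeadlineSeconds, one of the few pod-spec fields that is mutable after creation; once the deadline elapses, the kubelet kills the pod and it ends up Failed with reason DeadlineExceeded, exactly as logged. A sketch with an illustrative pod name and deadline:

$ kubectl patch pod my-pod --type=merge \
    -p '{"spec":{"activeDeadlineSeconds":5}}'
$ kubectl get pod my-pod -o jsonpath='{.status.phase}/{.status.reason}'
# after ~5s: Failed/DeadlineExceeded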
May 22 14:02:16.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:02:16.197: INFO: namespace pods-5669 deletion completed in 6.099388606s • [SLOW TEST:12.760 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:02:16.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 22 14:02:16.278: INFO: Waiting up to 5m0s for pod "pod-ac4e28f7-5d84-4d0a-9edf-f8c99c74670c" in namespace "emptydir-8616" to be "success or failure" May 22 14:02:16.294: INFO: Pod "pod-ac4e28f7-5d84-4d0a-9edf-f8c99c74670c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.239983ms May 22 14:02:18.298: INFO: Pod "pod-ac4e28f7-5d84-4d0a-9edf-f8c99c74670c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020518631s May 22 14:02:20.405: INFO: Pod "pod-ac4e28f7-5d84-4d0a-9edf-f8c99c74670c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127468831s STEP: Saw pod success May 22 14:02:20.405: INFO: Pod "pod-ac4e28f7-5d84-4d0a-9edf-f8c99c74670c" satisfied condition "success or failure" May 22 14:02:20.408: INFO: Trying to get logs from node iruya-worker2 pod pod-ac4e28f7-5d84-4d0a-9edf-f8c99c74670c container test-container: STEP: delete the pod May 22 14:02:20.452: INFO: Waiting for pod pod-ac4e28f7-5d84-4d0a-9edf-f8c99c74670c to disappear May 22 14:02:20.467: INFO: Pod pod-ac4e28f7-5d84-4d0a-9edf-f8c99c74670c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:02:20.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8616" for this suite. 
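Each EmptyDir permutation in this suite runs a pod that writes a file into the volume and checks the resulting mode and content; this one uses 0777 on the default (node-disk) medium while running as a non-root user. A minimal sketch with illustrative names:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/f && chmod 0777 /mnt/f && stat -c %a /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}               # default medium: backed by node disk
EOF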
May 22 14:02:26.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:02:26.561: INFO: namespace emptydir-8616 deletion completed in 6.090512782s • [SLOW TEST:10.363 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:02:26.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-3533 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3533 to expose endpoints map[] May 22 14:02:26.706: INFO: Get endpoints failed (24.204015ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 22 14:02:27.710: INFO: successfully validated that service multi-endpoint-test in namespace services-3533 exposes endpoints map[] (1.028276424s elapsed) STEP: Creating pod pod1 in namespace services-3533 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3533 to expose endpoints map[pod1:[100]] May 22 14:02:30.756: INFO: successfully validated that service multi-endpoint-test in namespace services-3533 exposes endpoints map[pod1:[100]] (3.039854091s elapsed) STEP: Creating pod pod2 in namespace services-3533 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3533 to expose endpoints map[pod1:[100] pod2:[101]] May 22 14:02:34.894: INFO: successfully validated that service multi-endpoint-test in namespace services-3533 exposes endpoints map[pod1:[100] pod2:[101]] (4.133030792s elapsed) STEP: Deleting pod pod1 in namespace services-3533 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3533 to expose endpoints map[pod2:[101]] May 22 14:02:35.938: INFO: successfully validated that service multi-endpoint-test in namespace services-3533 exposes endpoints map[pod2:[101]] (1.03904916s elapsed) STEP: Deleting pod pod2 in namespace services-3533 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3533 to expose endpoints map[] May 22 14:02:36.949: INFO: successfully validated that service multi-endpoint-test in namespace services-3533 exposes endpoints map[] (1.005946613s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:02:36.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3533" for this suite. 
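What this exercises is the endpoints controller: a two-port service starts with an empty endpoints object, gains an entry per matching pod as pods appear, and drains again as they are deleted, which is the map[...] progression logged above. The same state can be observed live with:

$ kubectl get endpoints multi-endpoint-test --namespace=services-3533 -w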
May 22 14:02:59.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:02:59.092: INFO: namespace services-3533 deletion completed in 22.093447875s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:32.531 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:02:59.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-4322022c-ad28-4e54-b3b3-4a32927c156b STEP: Creating secret with name secret-projected-all-test-volume-8b57f884-0d95-40f6-8ace-efb085af6d1a STEP: Creating a pod to test Check all projections for projected volume plugin May 22 14:02:59.206: INFO: Waiting up to 5m0s for pod "projected-volume-9eec95c5-841f-4d0f-97d7-8919f62a03ff" in namespace "projected-1596" to be "success or failure" May 22 14:02:59.227: INFO: Pod "projected-volume-9eec95c5-841f-4d0f-97d7-8919f62a03ff": Phase="Pending", Reason="", readiness=false. Elapsed: 20.600714ms May 22 14:03:01.230: INFO: Pod "projected-volume-9eec95c5-841f-4d0f-97d7-8919f62a03ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024028545s May 22 14:03:03.234: INFO: Pod "projected-volume-9eec95c5-841f-4d0f-97d7-8919f62a03ff": Phase="Running", Reason="", readiness=true. Elapsed: 4.027776726s May 22 14:03:05.238: INFO: Pod "projected-volume-9eec95c5-841f-4d0f-97d7-8919f62a03ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031418799s STEP: Saw pod success May 22 14:03:05.238: INFO: Pod "projected-volume-9eec95c5-841f-4d0f-97d7-8919f62a03ff" satisfied condition "success or failure" May 22 14:03:05.240: INFO: Trying to get logs from node iruya-worker pod projected-volume-9eec95c5-841f-4d0f-97d7-8919f62a03ff container projected-all-volume-test: STEP: delete the pod May 22 14:03:05.273: INFO: Waiting for pod projected-volume-9eec95c5-841f-4d0f-97d7-8919f62a03ff to disappear May 22 14:03:05.290: INFO: Pod projected-volume-9eec95c5-841f-4d0f-97d7-8919f62a03ff no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:03:05.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1596" for this suite. 
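"Projected combined" drives a single projected volume fed by a configMap, a secret, and the downward API at once, then reads all three files back. A minimal sketch under illustrative names:

$ kubectl create configmap projected-cm --from-literal=configmap-data=hello
$ kubectl create secret generic projected-secret --from-literal=secret-data=shh
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "cat /all/podname /all/configmap-data /all/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: projected-cm       # keys become file names under /all
      - secret:
          name: projected-secret
EOF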
May 22 14:03:11.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:03:11.413: INFO: namespace projected-1596 deletion completed in 6.119495097s • [SLOW TEST:12.321 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:03:11.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 22 14:03:15.607: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:03:15.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9763" for this suite. 
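The pattern asserted above: a non-root container writes its own termination message to a path it chose via terminationMessagePath, and the kubelet surfaces it in the container status. A sketch under those assumptions (pod name and uid are arbitrary):

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-demo
  spec:
    restartPolicy: Never
    containers:
    - name: term
      image: busybox
      command: ["sh", "-c", "echo -n DONE > /dev/termination-custom"]
      terminationMessagePath: /dev/termination-custom   # non-default path
      securityContext:
        runAsUser: 1000                                  # non-root user
  EOF
  $ kubectl get pod termination-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
  # expected: DONE -- the same value the "Expected: &{DONE}" line above matches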
May 22 14:03:21.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:03:21.733: INFO: namespace container-runtime-9763 deletion completed in 6.080576257s • [SLOW TEST:10.320 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:03:21.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 14:03:21.810: INFO: Waiting up to 5m0s for pod "downwardapi-volume-70ebb371-0038-4844-91d0-42afd5f72a69" in namespace "projected-7778" to be "success or failure" May 22 14:03:21.814: INFO: Pod "downwardapi-volume-70ebb371-0038-4844-91d0-42afd5f72a69": Phase="Pending", Reason="", readiness=false. Elapsed: 3.791865ms May 22 14:03:23.880: INFO: Pod "downwardapi-volume-70ebb371-0038-4844-91d0-42afd5f72a69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06930568s May 22 14:03:25.883: INFO: Pod "downwardapi-volume-70ebb371-0038-4844-91d0-42afd5f72a69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073018471s STEP: Saw pod success May 22 14:03:25.883: INFO: Pod "downwardapi-volume-70ebb371-0038-4844-91d0-42afd5f72a69" satisfied condition "success or failure" May 22 14:03:25.886: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-70ebb371-0038-4844-91d0-42afd5f72a69 container client-container: STEP: delete the pod May 22 14:03:25.928: INFO: Waiting for pod downwardapi-volume-70ebb371-0038-4844-91d0-42afd5f72a69 to disappear May 22 14:03:25.939: INFO: Pod downwardapi-volume-70ebb371-0038-4844-91d0-42afd5f72a69 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:03:25.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7778" for this suite. 
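"podname only" means the projected downwardAPI volume exposes a single file backed by metadata.name. A minimal sketch with hypothetical names:

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: podinfo-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
  EOF
  $ kubectl logs podinfo-demo    # prints: podinfo-demo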
May 22 14:03:31.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:03:32.055: INFO: namespace projected-7778 deletion completed in 6.112980916s • [SLOW TEST:10.321 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:03:32.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 22 14:03:32.139: INFO: Waiting up to 5m0s for pod "pod-d6d8d78a-464f-4b81-b771-2ff984915e76" in namespace "emptydir-8275" to be "success or failure" May 22 14:03:32.144: INFO: Pod "pod-d6d8d78a-464f-4b81-b771-2ff984915e76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.90931ms May 22 14:03:34.148: INFO: Pod "pod-d6d8d78a-464f-4b81-b771-2ff984915e76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008750937s May 22 14:03:36.153: INFO: Pod "pod-d6d8d78a-464f-4b81-b771-2ff984915e76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013173607s STEP: Saw pod success May 22 14:03:36.153: INFO: Pod "pod-d6d8d78a-464f-4b81-b771-2ff984915e76" satisfied condition "success or failure" May 22 14:03:36.155: INFO: Trying to get logs from node iruya-worker2 pod pod-d6d8d78a-464f-4b81-b771-2ff984915e76 container test-container: STEP: delete the pod May 22 14:03:36.175: INFO: Waiting for pod pod-d6d8d78a-464f-4b81-b771-2ff984915e76 to disappear May 22 14:03:36.179: INFO: Pod pod-d6d8d78a-464f-4b81-b771-2ff984915e76 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:03:36.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8275" for this suite. 
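(root,0666,default) reads as: running as root, expecting 0666 file mode, on the default emptyDir medium (node disk rather than tmpfs). A rough by-hand equivalent, names invented:

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo data > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}        # no medium set: backed by node storage, not tmpfs
  EOF
  $ kubectl logs emptydir-demo   # the ls output shows -rw-rw-rw- for f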
May 22 14:03:42.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:03:42.278: INFO: namespace emptydir-8275 deletion completed in 6.096130843s • [SLOW TEST:10.223 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:03:42.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server May 22 14:03:42.323: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:03:42.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7702" for this suite. 
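-p 0 asks the proxy to bind an ephemeral port, which is the point of the test: the port kubectl announces must serve a /api/ request. By hand, assuming a configured kubeconfig:

  $ kubectl proxy --port=0 --disable-filter=true &
  # kubectl prints the kernel-chosen port on stdout, e.g.:
  #   Starting to serve on 127.0.0.1:<port>
  $ curl -s http://127.0.0.1:<port>/api/    # substitute the printed port
  # returns the APIVersions object -- the "curling proxy /api/ output" step above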
May 22 14:03:48.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:03:48.588: INFO: namespace kubectl-7702 deletion completed in 6.155702808s • [SLOW TEST:6.309 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:03:48.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0522 14:03:49.677483 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 22 14:03:49.677: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:03:49.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6018" for this suite. 
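The "expected 0 rs, got 1 rs" lines are the test polling while the garbage collector catches up: deleting a Deployment without orphaning lets the GC remove the dependent ReplicaSet and pods through their ownerReferences. A by-hand sketch (image choice arbitrary):

  $ kubectl create deployment gc-demo --image=nginx
  $ kubectl get rs,pods -l app=gc-demo       # one ReplicaSet plus its pod(s)
  $ kubectl delete deployment gc-demo        # default cascading deletion
  $ kubectl get rs,pods -l app=gc-demo       # empties once the GC has processed
                                             # the ownerReferences, not instantly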
May 22 14:03:55.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:03:55.815: INFO: namespace gc-6018 deletion completed in 6.134926094s • [SLOW TEST:7.227 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:03:55.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 22 14:03:55.866: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:04:03.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4614" for this suite. 
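On a restartPolicy: Never pod, init containers still run to completion, in order, before the app container starts; that is the invocation being verified. A minimal sketch mirroring the init1/init2/run1 naming:

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo
  spec:
    restartPolicy: Never
    initContainers:
    - name: init1
      image: busybox
      command: ["true"]
    - name: init2
      image: busybox
      command: ["true"]
    containers:
    - name: run1
      image: busybox
      command: ["true"]
  EOF
  $ kubectl get pod init-demo -o jsonpath='{range .status.initContainerStatuses[*]}{.name}={.state.terminated.exitCode} {end}'
  # init1=0 init2=0 -- both terminated successfully before run1 ran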
May 22 14:04:09.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:04:09.820: INFO: namespace init-container-4614 deletion completed in 6.13780544s • [SLOW TEST:14.005 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:04:09.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 22 14:04:14.455: INFO: Successfully updated pod "labelsupdatebf92a41b-db6b-41f2-90dc-3a517e817171" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:04:18.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8013" for this suite. 
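"Successfully updated pod" above is the test relabelling the pod and then waiting for the kubelet to rewrite the projected downwardAPI file. A hedged sketch of the same loop (all names invented; the refresh happens on the kubelet's sync period, so the second read may lag a little):

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: labelsupdate-demo
    labels:
      key: value1
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: labels
              fieldRef:
                fieldPath: metadata.labels
  EOF
  $ kubectl exec labelsupdate-demo -- cat /etc/podinfo/labels   # key="value1"
  $ kubectl label pod labelsupdate-demo key=value2 --overwrite
  $ kubectl exec labelsupdate-demo -- cat /etc/podinfo/labels   # key="value2", eventually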
May 22 14:04:40.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:04:40.586: INFO: namespace projected-8013 deletion completed in 22.088231282s • [SLOW TEST:30.766 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:04:40.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments May 22 14:04:40.698: INFO: Waiting up to 5m0s for pod "client-containers-dd8137ae-e978-45c4-a868-23efd3a8883b" in namespace "containers-4409" to be "success or failure" May 22 14:04:40.705: INFO: Pod "client-containers-dd8137ae-e978-45c4-a868-23efd3a8883b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.052423ms May 22 14:04:42.709: INFO: Pod "client-containers-dd8137ae-e978-45c4-a868-23efd3a8883b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010973132s May 22 14:04:44.714: INFO: Pod "client-containers-dd8137ae-e978-45c4-a868-23efd3a8883b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015833259s STEP: Saw pod success May 22 14:04:44.714: INFO: Pod "client-containers-dd8137ae-e978-45c4-a868-23efd3a8883b" satisfied condition "success or failure" May 22 14:04:44.717: INFO: Trying to get logs from node iruya-worker pod client-containers-dd8137ae-e978-45c4-a868-23efd3a8883b container test-container: STEP: delete the pod May 22 14:04:44.809: INFO: Waiting for pod client-containers-dd8137ae-e978-45c4-a868-23efd3a8883b to disappear May 22 14:04:44.812: INFO: Pod client-containers-dd8137ae-e978-45c4-a868-23efd3a8883b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:04:44.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4409" for this suite. 
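"Override the image's default arguments" maps to the container args field: args replaces the image CMD, while the ENTRYPOINT (if any) stays in effect unless command is also set. A minimal sketch:

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: args-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      args: ["echo", "override", "arguments"]   # replaces busybox's default CMD
  EOF
  $ kubectl logs args-demo    # prints: override arguments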
May 22 14:04:50.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:04:50.964: INFO: namespace containers-4409 deletion completed in 6.141758155s • [SLOW TEST:10.377 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:04:50.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 22 14:04:51.141: INFO: Waiting up to 5m0s for pod "pod-d8ee6725-ebf8-4ba3-8bfb-867b012de6c1" in namespace "emptydir-3889" to be "success or failure" May 22 14:04:51.145: INFO: Pod "pod-d8ee6725-ebf8-4ba3-8bfb-867b012de6c1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.773103ms May 22 14:04:53.149: INFO: Pod "pod-d8ee6725-ebf8-4ba3-8bfb-867b012de6c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008009874s May 22 14:04:55.153: INFO: Pod "pod-d8ee6725-ebf8-4ba3-8bfb-867b012de6c1": Phase="Running", Reason="", readiness=true. Elapsed: 4.012132988s May 22 14:04:57.158: INFO: Pod "pod-d8ee6725-ebf8-4ba3-8bfb-867b012de6c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016465835s STEP: Saw pod success May 22 14:04:57.158: INFO: Pod "pod-d8ee6725-ebf8-4ba3-8bfb-867b012de6c1" satisfied condition "success or failure" May 22 14:04:57.161: INFO: Trying to get logs from node iruya-worker2 pod pod-d8ee6725-ebf8-4ba3-8bfb-867b012de6c1 container test-container: STEP: delete the pod May 22 14:04:57.183: INFO: Waiting for pod pod-d8ee6725-ebf8-4ba3-8bfb-867b012de6c1 to disappear May 22 14:04:57.252: INFO: Pod pod-d8ee6725-ebf8-4ba3-8bfb-867b012de6c1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:04:57.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3889" for this suite. 
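The tmpfs variant: medium: Memory backs the emptyDir with RAM, and the pod runs as a non-root uid that must still be able to create a 0644 file there (emptyDir mounts are world-writable). Sketch, names invented:

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001          # non-root
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -ln /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory         # tmpfs instead of node disk
  EOF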
May 22 14:05:03.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:05:03.348: INFO: namespace emptydir-3889 deletion completed in 6.092794774s • [SLOW TEST:12.384 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:05:03.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-4e884ca4-a154-4eba-a0cc-4ebf0a0bd5fb in namespace container-probe-9550 May 22 14:05:07.412: INFO: Started pod busybox-4e884ca4-a154-4eba-a0cc-4ebf0a0bd5fb in namespace container-probe-9550 STEP: checking the pod's current state and verifying that restartCount is present May 22 14:05:07.414: INFO: Initial restart count of pod busybox-4e884ca4-a154-4eba-a0cc-4ebf0a0bd5fb is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:09:08.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9550" for this suite. 
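The roughly four minutes between "Initial restart count ... is 0" and the teardown above is the observation window: the exec probe keeps succeeding because /tmp/health exists, so restartCount must stay 0 throughout. The shape of such a pod:

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-ok-demo
  spec:
    containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "touch /tmp/health && sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]   # exits 0 while the file exists
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  $ kubectl get pod liveness-ok-demo \
      -o jsonpath='{.status.containerStatuses[0].restartCount}'   # stays 0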
May 22 14:09:14.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:09:14.187: INFO: namespace container-probe-9550 deletion completed in 6.1087056s • [SLOW TEST:250.839 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:09:14.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:09:48.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1692" for this suite. 
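The three container names plausibly encode the restart policy under test (rpa/rpof/rpn reading as Always/OnFailure/Never); for each, the test compares RestartCount, Phase, Ready and State against what that policy implies for a container that exits. Those fields can be pulled directly (pod name hypothetical):

  $ kubectl get pod terminate-demo -o jsonpath='{.status.phase}{"\n"}{.status.containerStatuses[0].restartCount}{"\n"}{.status.containerStatuses[0].state}{"\n"}'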
May 22 14:09:54.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:09:54.992: INFO: namespace container-runtime-1692 deletion completed in 6.120509601s • [SLOW TEST:40.804 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:09:54.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 22 14:09:55.144: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 14:09:55.155: INFO: Number of nodes with available pods: 0 May 22 14:09:55.155: INFO: Node iruya-worker is running more than one daemon pod May 22 14:09:56.180: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 14:09:56.183: INFO: Number of nodes with available pods: 0 May 22 14:09:56.183: INFO: Node iruya-worker is running more than one daemon pod May 22 14:09:57.162: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 14:09:57.165: INFO: Number of nodes with available pods: 0 May 22 14:09:57.165: INFO: Node iruya-worker is running more than one daemon pod May 22 14:09:58.159: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 14:09:58.204: INFO: Number of nodes with available pods: 0 May 22 14:09:58.204: INFO: Node iruya-worker is running more than one daemon pod May 22 14:09:59.160: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 14:09:59.164: INFO: Number of nodes with available pods: 0 May 22 14:09:59.164: INFO: Node iruya-worker is running more than one daemon pod May 22 14:10:00.158: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 14:10:00.161: INFO: Number of nodes with available pods: 2 May 22 14:10:00.161: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 22 14:10:00.186: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 22 14:10:00.191: INFO: Number of nodes with available pods: 2 May 22 14:10:00.191: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5120, will wait for the garbage collector to delete the pods May 22 14:10:01.291: INFO: Deleting DaemonSet.extensions daemon-set took: 6.052747ms May 22 14:10:01.592: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.255083ms May 22 14:10:04.795: INFO: Number of nodes with available pods: 0 May 22 14:10:04.795: INFO: Number of running nodes: 0, number of available pods: 0 May 22 14:10:04.797: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5120/daemonsets","resourceVersion":"12304567"},"items":null} May 22 14:10:04.799: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5120/pods","resourceVersion":"12304567"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:10:04.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5120" for this suite. May 22 14:10:10.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:10:10.928: INFO: namespace daemonsets-5120 deletion completed in 6.116799318s • [SLOW TEST:15.936 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:10:10.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-1d686d57-b4bb-4c8f-8e9d-5329ea085328 STEP: Creating a pod to test consume configMaps May 22 14:10:11.009: INFO: Waiting up to 5m0s for pod "pod-configmaps-f09143ec-e194-41de-a37d-3be2036e9c62" in namespace "configmap-9827" to be "success or failure" May 22 14:10:11.025: INFO: Pod "pod-configmaps-f09143ec-e194-41de-a37d-3be2036e9c62": Phase="Pending", Reason="", readiness=false. Elapsed: 16.108267ms May 22 14:10:13.029: INFO: Pod "pod-configmaps-f09143ec-e194-41de-a37d-3be2036e9c62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020290436s May 22 14:10:15.033: INFO: Pod "pod-configmaps-f09143ec-e194-41de-a37d-3be2036e9c62": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024346358s STEP: Saw pod success May 22 14:10:15.033: INFO: Pod "pod-configmaps-f09143ec-e194-41de-a37d-3be2036e9c62" satisfied condition "success or failure" May 22 14:10:15.036: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-f09143ec-e194-41de-a37d-3be2036e9c62 container configmap-volume-test: STEP: delete the pod May 22 14:10:15.076: INFO: Waiting for pod pod-configmaps-f09143ec-e194-41de-a37d-3be2036e9c62 to disappear May 22 14:10:15.108: INFO: Pod pod-configmaps-f09143ec-e194-41de-a37d-3be2036e9c62 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:10:15.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9827" for this suite. May 22 14:10:21.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:10:21.203: INFO: namespace configmap-9827 deletion completed in 6.0912458s • [SLOW TEST:10.274 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:10:21.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6249.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6249.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 22 14:10:27.348: INFO: DNS probes using dns-6249/dns-test-875864ae-7f5a-48b4-8207-002eddd34e0e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:10:27.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6249" for this suite. May 22 14:10:33.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:10:33.541: INFO: namespace dns-6249 deletion completed in 6.144597213s • [SLOW TEST:12.338 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:10:33.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-a794adbe-dd9a-4dbe-b77f-fed769e47479 in namespace container-probe-1380 May 22 14:10:37.616: INFO: Started pod liveness-a794adbe-dd9a-4dbe-b77f-fed769e47479 in namespace container-probe-1380 STEP: checking the pod's current state and verifying that restartCount is present May 22 14:10:37.619: INFO: Initial restart count of pod liveness-a794adbe-dd9a-4dbe-b77f-fed769e47479 is 0 May 22 14:10:55.688: INFO: Restart count of pod container-probe-1380/liveness-a794adbe-dd9a-4dbe-b77f-fed769e47479 is now 1 (18.069234305s elapsed) May 22 14:11:15.732: INFO: Restart count of pod container-probe-1380/liveness-a794adbe-dd9a-4dbe-b77f-fed769e47479 is now 2 (38.11342465s elapsed) May 22 14:11:36.119: INFO: Restart count of pod container-probe-1380/liveness-a794adbe-dd9a-4dbe-b77f-fed769e47479 is now 3 (58.500251885s elapsed) May 22 14:11:56.168: INFO: Restart count of pod container-probe-1380/liveness-a794adbe-dd9a-4dbe-b77f-fed769e47479 is now 4 (1m18.549454817s elapsed) May 22 14:13:00.415: INFO: Restart count of pod container-probe-1380/liveness-a794adbe-dd9a-4dbe-b77f-fed769e47479 is now 5 (2m22.795752255s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:13:00.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1380" for this suite. May 22 14:13:06.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:13:06.520: INFO: namespace container-probe-1380 deletion completed in 6.089533906s • [SLOW TEST:152.978 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:13:06.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5416 STEP: creating a selector STEP: Creating the service pods in kubernetes May 22 14:13:06.557: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 22 14:13:30.714: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.20:8080/dial?request=hostName&protocol=http&host=10.244.2.213&port=8080&tries=1'] Namespace:pod-network-test-5416 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 14:13:30.714: INFO: >>> kubeConfig: /root/.kube/config I0522 14:13:30.749601 6 log.go:172] (0xc001e44160) (0xc002986000) Create stream I0522 14:13:30.749633 6 log.go:172] (0xc001e44160) (0xc002986000) Stream added, broadcasting: 1 I0522 14:13:30.752189 6 log.go:172] (0xc001e44160) Reply frame received for 1 I0522 14:13:30.752230 6 log.go:172] (0xc001e44160) (0xc0021526e0) Create stream I0522 14:13:30.752243 6 log.go:172] (0xc001e44160) (0xc0021526e0) Stream added, broadcasting: 3 I0522 14:13:30.753611 6 log.go:172] (0xc001e44160) Reply frame received for 3 I0522 14:13:30.753651 6 log.go:172] (0xc001e44160) (0xc002152780) Create stream I0522 14:13:30.753665 6 log.go:172] (0xc001e44160) (0xc002152780) Stream added, broadcasting: 5 I0522 14:13:30.754679 6 log.go:172] (0xc001e44160) Reply frame received for 5 I0522 14:13:30.841634 6 log.go:172] (0xc001e44160) Data frame received for 3 I0522 14:13:30.841693 6 log.go:172] (0xc0021526e0) (3) Data frame handling I0522 14:13:30.841739 6 log.go:172] (0xc0021526e0) (3) Data frame sent I0522 14:13:30.842021 6 log.go:172] (0xc001e44160) Data frame received for 5 I0522 14:13:30.842054 6 log.go:172] (0xc002152780) (5) Data frame handling I0522 14:13:30.842084 6 log.go:172] 
(0xc001e44160) Data frame received for 3 I0522 14:13:30.842100 6 log.go:172] (0xc0021526e0) (3) Data frame handling I0522 14:13:30.843792 6 log.go:172] (0xc001e44160) Data frame received for 1 I0522 14:13:30.843837 6 log.go:172] (0xc002986000) (1) Data frame handling I0522 14:13:30.843875 6 log.go:172] (0xc002986000) (1) Data frame sent I0522 14:13:30.843898 6 log.go:172] (0xc001e44160) (0xc002986000) Stream removed, broadcasting: 1 I0522 14:13:30.843916 6 log.go:172] (0xc001e44160) Go away received I0522 14:13:30.844063 6 log.go:172] (0xc001e44160) (0xc002986000) Stream removed, broadcasting: 1 I0522 14:13:30.844083 6 log.go:172] (0xc001e44160) (0xc0021526e0) Stream removed, broadcasting: 3 I0522 14:13:30.844099 6 log.go:172] (0xc001e44160) (0xc002152780) Stream removed, broadcasting: 5 May 22 14:13:30.844: INFO: Waiting for endpoints: map[] May 22 14:13:30.847: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.20:8080/dial?request=hostName&protocol=http&host=10.244.1.19&port=8080&tries=1'] Namespace:pod-network-test-5416 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 14:13:30.847: INFO: >>> kubeConfig: /root/.kube/config I0522 14:13:30.880287 6 log.go:172] (0xc0015f2dc0) (0xc002152960) Create stream I0522 14:13:30.880318 6 log.go:172] (0xc0015f2dc0) (0xc002152960) Stream added, broadcasting: 1 I0522 14:13:30.883007 6 log.go:172] (0xc0015f2dc0) Reply frame received for 1 I0522 14:13:30.883046 6 log.go:172] (0xc0015f2dc0) (0xc002238aa0) Create stream I0522 14:13:30.883062 6 log.go:172] (0xc0015f2dc0) (0xc002238aa0) Stream added, broadcasting: 3 I0522 14:13:30.884018 6 log.go:172] (0xc0015f2dc0) Reply frame received for 3 I0522 14:13:30.884056 6 log.go:172] (0xc0015f2dc0) (0xc002152a00) Create stream I0522 14:13:30.884076 6 log.go:172] (0xc0015f2dc0) (0xc002152a00) Stream added, broadcasting: 5 I0522 14:13:30.885018 6 log.go:172] (0xc0015f2dc0) Reply frame received for 5 I0522 14:13:30.970115 6 log.go:172] (0xc0015f2dc0) Data frame received for 3 I0522 14:13:30.970282 6 log.go:172] (0xc002238aa0) (3) Data frame handling I0522 14:13:30.970339 6 log.go:172] (0xc002238aa0) (3) Data frame sent I0522 14:13:30.970646 6 log.go:172] (0xc0015f2dc0) Data frame received for 5 I0522 14:13:30.970678 6 log.go:172] (0xc002152a00) (5) Data frame handling I0522 14:13:30.971028 6 log.go:172] (0xc0015f2dc0) Data frame received for 3 I0522 14:13:30.971116 6 log.go:172] (0xc002238aa0) (3) Data frame handling I0522 14:13:30.977476 6 log.go:172] (0xc0015f2dc0) Data frame received for 1 I0522 14:13:30.977503 6 log.go:172] (0xc002152960) (1) Data frame handling I0522 14:13:30.977521 6 log.go:172] (0xc002152960) (1) Data frame sent I0522 14:13:30.977539 6 log.go:172] (0xc0015f2dc0) (0xc002152960) Stream removed, broadcasting: 1 I0522 14:13:30.977673 6 log.go:172] (0xc0015f2dc0) (0xc002152960) Stream removed, broadcasting: 1 I0522 14:13:30.977694 6 log.go:172] (0xc0015f2dc0) Go away received I0522 14:13:30.977716 6 log.go:172] (0xc0015f2dc0) (0xc002238aa0) Stream removed, broadcasting: 3 I0522 14:13:30.977728 6 log.go:172] (0xc0015f2dc0) (0xc002152a00) Stream removed, broadcasting: 5 May 22 14:13:30.977: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:13:30.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5416" for this 
suite. May 22 14:13:55.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:13:55.092: INFO: namespace pod-network-test-5416 deletion completed in 24.109166325s • [SLOW TEST:48.572 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:13:55.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-58677e07-23f4-4328-87c1-1d3659fef8f6 May 22 14:13:55.146: INFO: Pod name my-hostname-basic-58677e07-23f4-4328-87c1-1d3659fef8f6: Found 0 pods out of 1 May 22 14:14:00.152: INFO: Pod name my-hostname-basic-58677e07-23f4-4328-87c1-1d3659fef8f6: Found 1 pods out of 1 May 22 14:14:00.152: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-58677e07-23f4-4328-87c1-1d3659fef8f6" are running May 22 14:14:00.155: INFO: Pod "my-hostname-basic-58677e07-23f4-4328-87c1-1d3659fef8f6-hpxvv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 14:13:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 14:13:58 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 14:13:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 14:13:55 +0000 UTC Reason: Message:}]) May 22 14:14:00.155: INFO: Trying to dial the pod May 22 14:14:05.173: INFO: Controller my-hostname-basic-58677e07-23f4-4328-87c1-1d3659fef8f6: Got expected result from replica 1 [my-hostname-basic-58677e07-23f4-4328-87c1-1d3659fef8f6-hpxvv]: "my-hostname-basic-58677e07-23f4-4328-87c1-1d3659fef8f6-hpxvv", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:14:05.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9376" for this suite. 
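A ReplicationController of the same shape, for reference; the image is an assumption (any HTTP server that answers with its own hostname fits, which is how "Got expected result from replica 1" is checked):

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: my-hostname-basic
  spec:
    replicas: 1
    selector:
      name: my-hostname-basic
    template:
      metadata:
        labels:
          name: my-hostname-basic
      spec:
        containers:
        - name: my-hostname-basic
          image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed
          ports:
          - containerPort: 9376
  EOF
  $ kubectl get pods -l name=my-hostname-basic -o wide
  # curl <pod IP>:9376 from inside the cluster; the body is the pod's own name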
May 22 14:14:11.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:14:11.263: INFO: namespace replication-controller-9376 deletion completed in 6.086496025s • [SLOW TEST:16.170 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:14:11.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8026 STEP: creating a selector STEP: Creating the service pods in kubernetes May 22 14:14:11.383: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 22 14:14:33.489: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.22 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8026 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 14:14:33.489: INFO: >>> kubeConfig: /root/.kube/config I0522 14:14:33.524043 6 log.go:172] (0xc001e32370) (0xc0004485a0) Create stream I0522 14:14:33.524071 6 log.go:172] (0xc001e32370) (0xc0004485a0) Stream added, broadcasting: 1 I0522 14:14:33.525894 6 log.go:172] (0xc001e32370) Reply frame received for 1 I0522 14:14:33.525920 6 log.go:172] (0xc001e32370) (0xc000448640) Create stream I0522 14:14:33.525927 6 log.go:172] (0xc001e32370) (0xc000448640) Stream added, broadcasting: 3 I0522 14:14:33.526885 6 log.go:172] (0xc001e32370) Reply frame received for 3 I0522 14:14:33.526937 6 log.go:172] (0xc001e32370) (0xc001800fa0) Create stream I0522 14:14:33.526961 6 log.go:172] (0xc001e32370) (0xc001800fa0) Stream added, broadcasting: 5 I0522 14:14:33.527871 6 log.go:172] (0xc001e32370) Reply frame received for 5 I0522 14:14:34.616284 6 log.go:172] (0xc001e32370) Data frame received for 3 I0522 14:14:34.616320 6 log.go:172] (0xc000448640) (3) Data frame handling I0522 14:14:34.616347 6 log.go:172] (0xc000448640) (3) Data frame sent I0522 14:14:34.616588 6 log.go:172] (0xc001e32370) Data frame received for 3 I0522 14:14:34.616627 6 log.go:172] (0xc000448640) (3) Data frame handling I0522 14:14:34.616670 6 log.go:172] (0xc001e32370) Data frame received for 5 I0522 14:14:34.616708 6 log.go:172] (0xc001800fa0) (5) Data frame handling I0522 14:14:34.618507 6 log.go:172] (0xc001e32370) Data frame received for 1 I0522 14:14:34.618529 6 log.go:172] (0xc0004485a0) (1) Data frame handling I0522 14:14:34.618539 6 log.go:172] 
(0xc0004485a0) (1) Data frame sent I0522 14:14:34.618556 6 log.go:172] (0xc001e32370) (0xc0004485a0) Stream removed, broadcasting: 1 I0522 14:14:34.618599 6 log.go:172] (0xc001e32370) Go away received I0522 14:14:34.618664 6 log.go:172] (0xc001e32370) (0xc0004485a0) Stream removed, broadcasting: 1 I0522 14:14:34.618724 6 log.go:172] (0xc001e32370) (0xc000448640) Stream removed, broadcasting: 3 I0522 14:14:34.618740 6 log.go:172] (0xc001e32370) (0xc001800fa0) Stream removed, broadcasting: 5 May 22 14:14:34.618: INFO: Found all expected endpoints: [netserver-0] May 22 14:14:34.622: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.214 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8026 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 14:14:34.622: INFO: >>> kubeConfig: /root/.kube/config I0522 14:14:34.653563 6 log.go:172] (0xc001dc44d0) (0xc0016b2000) Create stream I0522 14:14:34.653587 6 log.go:172] (0xc001dc44d0) (0xc0016b2000) Stream added, broadcasting: 1 I0522 14:14:34.655377 6 log.go:172] (0xc001dc44d0) Reply frame received for 1 I0522 14:14:34.655431 6 log.go:172] (0xc001dc44d0) (0xc001203cc0) Create stream I0522 14:14:34.655448 6 log.go:172] (0xc001dc44d0) (0xc001203cc0) Stream added, broadcasting: 3 I0522 14:14:34.656528 6 log.go:172] (0xc001dc44d0) Reply frame received for 3 I0522 14:14:34.656580 6 log.go:172] (0xc001dc44d0) (0xc001801040) Create stream I0522 14:14:34.656639 6 log.go:172] (0xc001dc44d0) (0xc001801040) Stream added, broadcasting: 5 I0522 14:14:34.658109 6 log.go:172] (0xc001dc44d0) Reply frame received for 5 I0522 14:14:35.755170 6 log.go:172] (0xc001dc44d0) Data frame received for 3 I0522 14:14:35.755215 6 log.go:172] (0xc001203cc0) (3) Data frame handling I0522 14:14:35.755245 6 log.go:172] (0xc001203cc0) (3) Data frame sent I0522 14:14:35.755265 6 log.go:172] (0xc001dc44d0) Data frame received for 3 I0522 14:14:35.755278 6 log.go:172] (0xc001203cc0) (3) Data frame handling I0522 14:14:35.755346 6 log.go:172] (0xc001dc44d0) Data frame received for 5 I0522 14:14:35.755386 6 log.go:172] (0xc001801040) (5) Data frame handling I0522 14:14:35.757429 6 log.go:172] (0xc001dc44d0) Data frame received for 1 I0522 14:14:35.757468 6 log.go:172] (0xc0016b2000) (1) Data frame handling I0522 14:14:35.757494 6 log.go:172] (0xc0016b2000) (1) Data frame sent I0522 14:14:35.757655 6 log.go:172] (0xc001dc44d0) (0xc0016b2000) Stream removed, broadcasting: 1 I0522 14:14:35.757693 6 log.go:172] (0xc001dc44d0) Go away received I0522 14:14:35.757813 6 log.go:172] (0xc001dc44d0) (0xc0016b2000) Stream removed, broadcasting: 1 I0522 14:14:35.757837 6 log.go:172] (0xc001dc44d0) (0xc001203cc0) Stream removed, broadcasting: 3 I0522 14:14:35.757851 6 log.go:172] (0xc001dc44d0) (0xc001801040) Stream removed, broadcasting: 5 May 22 14:14:35.757: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:14:35.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8026" for this suite. 
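Stripped of the stream plumbing, each ExecWithOptions above is just netcat over UDP from the host-network helper pod to a netserver pod, expecting the hostname back. By hand, with the names and addresses from this run:

  $ kubectl -n pod-network-test-8026 exec host-test-container-pod -- \
      /bin/sh -c "echo hostName | nc -w 1 -u 10.244.2.214 8081 | grep -v '^\s*$'"
  # a non-empty reply (the netserver's hostname) is what "Found all expected
  # endpoints" records for each netserver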
May 22 14:14:49.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:14:49.887: INFO: namespace pod-network-test-8026 deletion completed in 14.12513671s • [SLOW TEST:38.623 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:14:49.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions May 22 14:14:49.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 22 14:14:50.116: INFO: stderr: "" May 22 14:14:50.116: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:14:50.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-414" for this suite. 
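The assertion in this test is simply that the literal line v1 appears in the kubectl api-versions output captured above. As a standalone check:

# grep -x matches the whole line, so group/versions like apps/v1 cannot satisfy it
kubectl --kubeconfig=/root/.kube/config api-versions | grep -qx v1 && echo "v1 is available"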
May 22 14:14:56.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:14:56.219: INFO: namespace kubectl-414 deletion completed in 6.097659311s • [SLOW TEST:6.331 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:14:56.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 22 14:15:04.329: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 14:15:04.341: INFO: Pod pod-with-prestop-exec-hook still exists May 22 14:15:06.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 14:15:06.346: INFO: Pod pod-with-prestop-exec-hook still exists May 22 14:15:08.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 14:15:08.345: INFO: Pod pod-with-prestop-exec-hook still exists May 22 14:15:10.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 14:15:10.345: INFO: Pod pod-with-prestop-exec-hook still exists May 22 14:15:12.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 14:15:12.345: INFO: Pod pod-with-prestop-exec-hook still exists May 22 14:15:14.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 14:15:14.345: INFO: Pod pod-with-prestop-exec-hook still exists May 22 14:15:16.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 14:15:16.345: INFO: Pod pod-with-prestop-exec-hook still exists May 22 14:15:18.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 14:15:18.344: INFO: Pod pod-with-prestop-exec-hook still exists May 22 14:15:20.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 14:15:20.346: INFO: Pod pod-with-prestop-exec-hook still exists May 22 14:15:22.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 14:15:22.344: INFO: Pod pod-with-prestop-exec-hook still exists May 22 14:15:24.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 14:15:24.345: INFO: Pod pod-with-prestop-exec-hook still exists May 22 
14:15:26.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 14:15:26.345: INFO: Pod pod-with-prestop-exec-hook still exists May 22 14:15:28.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 14:15:28.346: INFO: Pod pod-with-prestop-exec-hook still exists May 22 14:15:30.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 14:15:30.344: INFO: Pod pod-with-prestop-exec-hook still exists May 22 14:15:32.341: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 22 14:15:32.345: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:15:32.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1883" for this suite. May 22 14:15:54.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:15:54.453: INFO: namespace container-lifecycle-hook-1883 deletion completed in 22.097184756s • [SLOW TEST:58.233 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:15:54.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-6662679c-a302-44ff-a841-1f08cf4c4e08 STEP: Creating configMap with name cm-test-opt-upd-64607444-1814-43dd-852e-860773d0e62c STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6662679c-a302-44ff-a841-1f08cf4c4e08 STEP: Updating configmap cm-test-opt-upd-64607444-1814-43dd-852e-860773d0e62c STEP: Creating configMap with name cm-test-opt-create-f0281e28-d21d-4ddf-9983-5979d1a6347d STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:16:02.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8854" for this suite. 
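What the ConfigMap test above relies on is the optional flag on configMap volumes: the pod runs even while a referenced ConfigMap is missing, and the kubelet re-projects the volume as ConfigMaps are deleted, updated, or created. A minimal sketch of such a mount (all names here are illustrative, not the generated ones from the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-cm-demo
spec:
  containers:
  - name: app
    image: busybox:1.31
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: cm-test-opt-create     # may not exist yet
      optional: true               # pod starts anyway; keys appear once created
EOF

Updates are picked up on the kubelet's periodic sync, which is why the test ends with a "waiting to observe update in volume" step rather than an immediate read.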
May 22 14:16:24.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:16:24.788: INFO: namespace configmap-8854 deletion completed in 22.091324804s • [SLOW TEST:30.335 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:16:24.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-b2aa6f93-d2c0-4521-a075-5ed30f97eb8b STEP: Creating a pod to test consume secrets May 22 14:16:24.891: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a4a0e0de-ac6c-463a-ac03-47532bd9a2ac" in namespace "projected-7453" to be "success or failure" May 22 14:16:24.921: INFO: Pod "pod-projected-secrets-a4a0e0de-ac6c-463a-ac03-47532bd9a2ac": Phase="Pending", Reason="", readiness=false. Elapsed: 29.985362ms May 22 14:16:26.925: INFO: Pod "pod-projected-secrets-a4a0e0de-ac6c-463a-ac03-47532bd9a2ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034270029s May 22 14:16:28.929: INFO: Pod "pod-projected-secrets-a4a0e0de-ac6c-463a-ac03-47532bd9a2ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038189948s STEP: Saw pod success May 22 14:16:28.929: INFO: Pod "pod-projected-secrets-a4a0e0de-ac6c-463a-ac03-47532bd9a2ac" satisfied condition "success or failure" May 22 14:16:28.932: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-a4a0e0de-ac6c-463a-ac03-47532bd9a2ac container projected-secret-volume-test: STEP: delete the pod May 22 14:16:28.991: INFO: Waiting for pod pod-projected-secrets-a4a0e0de-ac6c-463a-ac03-47532bd9a2ac to disappear May 22 14:16:29.010: INFO: Pod pod-projected-secrets-a4a0e0de-ac6c-463a-ac03-47532bd9a2ac no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:16:29.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7453" for this suite. 
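The "mappings" in this test's name refers to the items list of a projected secret source, which renames secret keys to arbitrary paths inside the mount. A sketch with illustrative names:

kubectl create secret generic projected-demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-demo-secret
          items:
          - key: data-1             # secret key ...
            path: new-path-data-1   # ... exposed under a remapped filename
EOF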
May 22 14:16:35.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:16:35.136: INFO: namespace projected-7453 deletion completed in 6.093391182s • [SLOW TEST:10.348 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:16:35.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components May 22 14:16:35.202: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 22 14:16:35.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8118' May 22 14:16:37.979: INFO: stderr: "" May 22 14:16:37.979: INFO: stdout: "service/redis-slave created\n" May 22 14:16:37.979: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 22 14:16:37.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8118' May 22 14:16:38.274: INFO: stderr: "" May 22 14:16:38.274: INFO: stdout: "service/redis-master created\n" May 22 14:16:38.274: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 22 14:16:38.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8118' May 22 14:16:38.597: INFO: stderr: "" May 22 14:16:38.597: INFO: stdout: "service/frontend created\n" May 22 14:16:38.597: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 22 14:16:38.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8118' May 22 14:16:38.872: INFO: stderr: "" May 22 14:16:38.872: INFO: stdout: "deployment.apps/frontend created\n" May 22 14:16:38.872: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 22 14:16:38.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8118' May 22 14:16:39.225: INFO: stderr: "" May 22 14:16:39.225: INFO: stdout: "deployment.apps/redis-master created\n" May 22 14:16:39.225: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 22 14:16:39.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8118' May 22 14:16:39.529: INFO: stderr: "" May 22 14:16:39.529: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app May 22 14:16:39.529: INFO: Waiting for all frontend pods to be Running. May 22 14:16:49.580: INFO: Waiting for frontend to serve content. May 22 14:16:49.644: INFO: Trying to add a new entry to the guestbook. May 22 14:16:49.686: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 22 14:16:49.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8118' May 22 14:16:49.942: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 22 14:16:49.942: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 22 14:16:49.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8118' May 22 14:16:50.106: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 14:16:50.106: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 22 14:16:50.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8118' May 22 14:16:50.234: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 14:16:50.234: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 22 14:16:50.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8118' May 22 14:16:50.367: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 14:16:50.367: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 22 14:16:50.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8118' May 22 14:16:50.486: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 14:16:50.486: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 22 14:16:50.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8118' May 22 14:16:50.716: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 14:16:50.716: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:16:50.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8118" for this suite. 
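Between "Waiting for all frontend pods to be Running" and the content checks, the suite is polling by API roughly what the following shell commands do, with the namespace and labels as in the manifests echoed above:

kubectl -n kubectl-8118 rollout status deployment/frontend --timeout=120s
kubectl -n kubectl-8118 get pods -l app=guestbook,tier=frontend -o wide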
May 22 14:17:33.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:17:33.443: INFO: namespace kubectl-8118 deletion completed in 42.720558414s • [SLOW TEST:58.306 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:17:33.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-8cdh STEP: Creating a pod to test atomic-volume-subpath May 22 14:17:33.511: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8cdh" in namespace "subpath-7587" to be "success or failure" May 22 14:17:33.531: INFO: Pod "pod-subpath-test-configmap-8cdh": Phase="Pending", Reason="", readiness=false. Elapsed: 19.018965ms May 22 14:17:35.535: INFO: Pod "pod-subpath-test-configmap-8cdh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023749664s May 22 14:17:37.539: INFO: Pod "pod-subpath-test-configmap-8cdh": Phase="Running", Reason="", readiness=true. Elapsed: 4.027633788s May 22 14:17:39.544: INFO: Pod "pod-subpath-test-configmap-8cdh": Phase="Running", Reason="", readiness=true. Elapsed: 6.032449011s May 22 14:17:41.548: INFO: Pod "pod-subpath-test-configmap-8cdh": Phase="Running", Reason="", readiness=true. Elapsed: 8.036902945s May 22 14:17:43.552: INFO: Pod "pod-subpath-test-configmap-8cdh": Phase="Running", Reason="", readiness=true. Elapsed: 10.040570601s May 22 14:17:45.555: INFO: Pod "pod-subpath-test-configmap-8cdh": Phase="Running", Reason="", readiness=true. Elapsed: 12.04330814s May 22 14:17:47.565: INFO: Pod "pod-subpath-test-configmap-8cdh": Phase="Running", Reason="", readiness=true. Elapsed: 14.053705148s May 22 14:17:49.570: INFO: Pod "pod-subpath-test-configmap-8cdh": Phase="Running", Reason="", readiness=true. Elapsed: 16.058729077s May 22 14:17:51.575: INFO: Pod "pod-subpath-test-configmap-8cdh": Phase="Running", Reason="", readiness=true. Elapsed: 18.063272344s May 22 14:17:53.579: INFO: Pod "pod-subpath-test-configmap-8cdh": Phase="Running", Reason="", readiness=true. Elapsed: 20.067278348s May 22 14:17:55.583: INFO: Pod "pod-subpath-test-configmap-8cdh": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.071554546s May 22 14:17:57.586: INFO: Pod "pod-subpath-test-configmap-8cdh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.074717184s STEP: Saw pod success May 22 14:17:57.586: INFO: Pod "pod-subpath-test-configmap-8cdh" satisfied condition "success or failure" May 22 14:17:57.588: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-8cdh container test-container-subpath-configmap-8cdh: STEP: delete the pod May 22 14:17:57.716: INFO: Waiting for pod pod-subpath-test-configmap-8cdh to disappear May 22 14:17:57.752: INFO: Pod pod-subpath-test-configmap-8cdh no longer exists STEP: Deleting pod pod-subpath-test-configmap-8cdh May 22 14:17:57.753: INFO: Deleting pod "pod-subpath-test-configmap-8cdh" in namespace "subpath-7587" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:17:57.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7587" for this suite. May 22 14:18:03.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:18:03.861: INFO: namespace subpath-7587 deletion completed in 6.102825694s • [SLOW TEST:30.418 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:18:03.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-b6b409a3-14e4-49e9-ab10-c16cbc7b962f STEP: Creating secret with name s-test-opt-upd-0028d421-6ac0-4861-b773-e1c3de163595 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b6b409a3-14e4-49e9-ab10-c16cbc7b962f STEP: Updating secret s-test-opt-upd-0028d421-6ac0-4861-b773-e1c3de163595 STEP: Creating secret with name s-test-opt-create-f1c77bde-c163-47be-a7af-3bd59a7bae12 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:18:12.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7546" for this suite. 
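As with the ConfigMap variant earlier, the moving parts here are optional: true secret volumes plus the kubelet's periodic re-sync. Assuming a pod already mounts the secret s-test-opt-upd with optional: true, the update half of the test looks roughly like this (the --dry-run=client spelling is for current kubectl; 1.15-era clients used bare --dry-run):

kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-1
# mutate it in place; the mounted file catches up on the next kubelet sync
kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-2 \
  --dry-run=client -o yaml | kubectl apply -f -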
May 22 14:18:34.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:18:34.179: INFO: namespace secrets-7546 deletion completed in 22.117952107s • [SLOW TEST:30.318 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:18:34.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 14:18:34.232: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:18:38.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5430" for this suite. 
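This test dials the API server's pod log subresource over a websocket rather than going through kubectl. The same endpoint also answers a plain GET, which is the easy way to poke at it by hand (POD stands in for the generated pod name, which the log does not print):

kubectl proxy --port=8001 &
curl "http://127.0.0.1:8001/api/v1/namespaces/pods-5430/pods/POD/log"
# porcelain equivalent:
kubectl logs POD -n pods-5430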
May 22 14:19:24.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:19:24.449: INFO: namespace pods-5430 deletion completed in 46.167157729s • [SLOW TEST:50.269 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:19:24.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 22 14:19:24.486: INFO: Waiting up to 5m0s for pod "pod-ba23b7a7-9244-4221-89c3-38eccaa9cd8c" in namespace "emptydir-8971" to be "success or failure" May 22 14:19:24.502: INFO: Pod "pod-ba23b7a7-9244-4221-89c3-38eccaa9cd8c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.81616ms May 22 14:19:26.542: INFO: Pod "pod-ba23b7a7-9244-4221-89c3-38eccaa9cd8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05600928s May 22 14:19:28.557: INFO: Pod "pod-ba23b7a7-9244-4221-89c3-38eccaa9cd8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070825229s STEP: Saw pod success May 22 14:19:28.557: INFO: Pod "pod-ba23b7a7-9244-4221-89c3-38eccaa9cd8c" satisfied condition "success or failure" May 22 14:19:28.560: INFO: Trying to get logs from node iruya-worker pod pod-ba23b7a7-9244-4221-89c3-38eccaa9cd8c container test-container: STEP: delete the pod May 22 14:19:28.576: INFO: Waiting for pod pod-ba23b7a7-9244-4221-89c3-38eccaa9cd8c to disappear May 22 14:19:28.580: INFO: Pod pod-ba23b7a7-9244-4221-89c3-38eccaa9cd8c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:19:28.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8971" for this suite. 
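All of the EmptyDir permutations in this run ((root,0666,tmpfs), (non-root,0666,tmpfs), (non-root,0777,tmpfs)) boil down to the same shape: mount an emptyDir with medium: Memory, write a file with the requested mode, and verify ownership, mode, and the tmpfs backing; the non-root variants only add a pod securityContext (e.g. runAsUser) and different mode bits. An illustrative busybox version of the root/0666 case:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.31
    command: ["sh", "-c",
      "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f && mount | grep test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory     # tmpfs-backed, as in the tests above
EOF
kubectl logs emptydir-tmpfs-demo   # expect -rw-rw-rw- and a tmpfs mount entry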
May 22 14:19:34.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:19:34.686: INFO: namespace emptydir-8971 deletion completed in 6.103316396s • [SLOW TEST:10.237 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:19:34.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 22 14:19:34.768: INFO: Waiting up to 5m0s for pod "pod-d88d6342-e7db-453a-bc31-5cd585591a3f" in namespace "emptydir-5259" to be "success or failure" May 22 14:19:34.772: INFO: Pod "pod-d88d6342-e7db-453a-bc31-5cd585591a3f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.227452ms May 22 14:19:36.776: INFO: Pod "pod-d88d6342-e7db-453a-bc31-5cd585591a3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007394566s May 22 14:19:38.780: INFO: Pod "pod-d88d6342-e7db-453a-bc31-5cd585591a3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011854351s STEP: Saw pod success May 22 14:19:38.780: INFO: Pod "pod-d88d6342-e7db-453a-bc31-5cd585591a3f" satisfied condition "success or failure" May 22 14:19:38.784: INFO: Trying to get logs from node iruya-worker2 pod pod-d88d6342-e7db-453a-bc31-5cd585591a3f container test-container: STEP: delete the pod May 22 14:19:38.820: INFO: Waiting for pod pod-d88d6342-e7db-453a-bc31-5cd585591a3f to disappear May 22 14:19:38.827: INFO: Pod pod-d88d6342-e7db-453a-bc31-5cd585591a3f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:19:38.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5259" for this suite. 
May 22 14:19:44.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:19:44.936: INFO: namespace emptydir-5259 deletion completed in 6.076097632s • [SLOW TEST:10.249 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:19:44.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 22 14:19:49.143: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:19:49.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6297" for this suite. 
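The "Expected: &{} to match Container's Termination Message: --" line is the whole point of this test: with TerminationMessagePolicy FallbackToLogsOnError, the kubelet only falls back to container logs when the container fails, so a successful, silent exit leaves the termination message empty. An illustrative reproduction:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.31
    command: ["sh", "-c", "exit 0"]   # succeed while writing nothing
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# prints nothing: no message file was written and the container did not fail
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'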
May 22 14:19:55.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:19:55.532: INFO: namespace container-runtime-6297 deletion completed in 6.135575789s • [SLOW TEST:10.596 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:19:55.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 22 14:19:55.664: INFO: Waiting up to 5m0s for pod "downward-api-f31501e4-f27b-4395-b065-b708e5b4c7f8" in namespace "downward-api-7356" to be "success or failure" May 22 14:19:55.734: INFO: Pod "downward-api-f31501e4-f27b-4395-b065-b708e5b4c7f8": Phase="Pending", Reason="", readiness=false. Elapsed: 70.036147ms May 22 14:19:57.739: INFO: Pod "downward-api-f31501e4-f27b-4395-b065-b708e5b4c7f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07454803s May 22 14:19:59.743: INFO: Pod "downward-api-f31501e4-f27b-4395-b065-b708e5b4c7f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079157153s STEP: Saw pod success May 22 14:19:59.743: INFO: Pod "downward-api-f31501e4-f27b-4395-b065-b708e5b4c7f8" satisfied condition "success or failure" May 22 14:19:59.747: INFO: Trying to get logs from node iruya-worker pod downward-api-f31501e4-f27b-4395-b065-b708e5b4c7f8 container dapi-container: STEP: delete the pod May 22 14:19:59.857: INFO: Waiting for pod downward-api-f31501e4-f27b-4395-b065-b708e5b4c7f8 to disappear May 22 14:19:59.890: INFO: Pod downward-api-f31501e4-f27b-4395-b065-b708e5b4c7f8 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:19:59.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7356" for this suite. 
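The dapi-container above receives its own pod's UID through the downward API. A minimal sketch of that wiring (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.31
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # the field the test asserts on
EOF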
May 22 14:20:05.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:20:06.022: INFO: namespace downward-api-7356 deletion completed in 6.127594067s • [SLOW TEST:10.490 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:20:06.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 22 14:20:06.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-9174' May 22 14:20:06.173: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 22 14:20:06.173: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 May 22 14:20:08.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-9174' May 22 14:20:08.363: INFO: stderr: "" May 22 14:20:08.363: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:20:08.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9174" for this suite. 
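The stderr line above is kubectl itself flagging the run generator as deprecated; the replacement it points toward creates the same Deployment directly (namespace name reused from the log for concreteness):

kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine -n kubectl-9174
kubectl rollout status deployment/e2e-test-nginx-deployment -n kubectl-9174
kubectl delete deployment e2e-test-nginx-deployment -n kubectl-9174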
May 22 14:22:10.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:22:10.453: INFO: namespace kubectl-9174 deletion completed in 2m2.087033863s • [SLOW TEST:124.430 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:22:10.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 22 14:22:10.559: INFO: Waiting up to 5m0s for pod "pod-a9754b43-8b07-4963-8834-a370e958c1f9" in namespace "emptydir-3357" to be "success or failure" May 22 14:22:10.563: INFO: Pod "pod-a9754b43-8b07-4963-8834-a370e958c1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.682587ms May 22 14:22:12.566: INFO: Pod "pod-a9754b43-8b07-4963-8834-a370e958c1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007089957s May 22 14:22:14.570: INFO: Pod "pod-a9754b43-8b07-4963-8834-a370e958c1f9": Phase="Running", Reason="", readiness=true. Elapsed: 4.010238968s May 22 14:22:16.573: INFO: Pod "pod-a9754b43-8b07-4963-8834-a370e958c1f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013712944s STEP: Saw pod success May 22 14:22:16.573: INFO: Pod "pod-a9754b43-8b07-4963-8834-a370e958c1f9" satisfied condition "success or failure" May 22 14:22:16.576: INFO: Trying to get logs from node iruya-worker pod pod-a9754b43-8b07-4963-8834-a370e958c1f9 container test-container: STEP: delete the pod May 22 14:22:16.595: INFO: Waiting for pod pod-a9754b43-8b07-4963-8834-a370e958c1f9 to disappear May 22 14:22:16.599: INFO: Pod pod-a9754b43-8b07-4963-8834-a370e958c1f9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:22:16.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3357" for this suite. 
May 22 14:22:22.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:22:22.701: INFO: namespace emptydir-3357 deletion completed in 6.099658912s • [SLOW TEST:12.248 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:22:22.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 22 14:22:22.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9765' May 22 14:22:23.051: INFO: stderr: "" May 22 14:22:23.051: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 22 14:22:23.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9765' May 22 14:22:23.153: INFO: stderr: "" May 22 14:22:23.153: INFO: stdout: "update-demo-nautilus-f6r5w update-demo-nautilus-qms44 " May 22 14:22:23.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f6r5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9765' May 22 14:22:23.270: INFO: stderr: "" May 22 14:22:23.270: INFO: stdout: "" May 22 14:22:23.270: INFO: update-demo-nautilus-f6r5w is created but not running May 22 14:22:28.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9765' May 22 14:22:28.399: INFO: stderr: "" May 22 14:22:28.399: INFO: stdout: "update-demo-nautilus-f6r5w update-demo-nautilus-qms44 " May 22 14:22:28.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f6r5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9765' May 22 14:22:28.508: INFO: stderr: "" May 22 14:22:28.508: INFO: stdout: "true" May 22 14:22:28.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f6r5w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9765' May 22 14:22:28.612: INFO: stderr: "" May 22 14:22:28.612: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 14:22:28.612: INFO: validating pod update-demo-nautilus-f6r5w May 22 14:22:28.615: INFO: got data: { "image": "nautilus.jpg" } May 22 14:22:28.615: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 14:22:28.615: INFO: update-demo-nautilus-f6r5w is verified up and running May 22 14:22:28.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qms44 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9765' May 22 14:22:28.726: INFO: stderr: "" May 22 14:22:28.726: INFO: stdout: "true" May 22 14:22:28.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qms44 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9765' May 22 14:22:28.826: INFO: stderr: "" May 22 14:22:28.826: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 22 14:22:28.826: INFO: validating pod update-demo-nautilus-qms44 May 22 14:22:28.830: INFO: got data: { "image": "nautilus.jpg" } May 22 14:22:28.830: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 22 14:22:28.830: INFO: update-demo-nautilus-qms44 is verified up and running STEP: using delete to clean up resources May 22 14:22:28.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9765' May 22 14:22:28.948: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 14:22:28.948: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 22 14:22:28.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9765' May 22 14:22:29.051: INFO: stderr: "No resources found.\n" May 22 14:22:29.051: INFO: stdout: "" May 22 14:22:29.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9765 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 22 14:22:29.208: INFO: stderr: "" May 22 14:22:29.208: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:22:29.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9765" for this suite. 
May 22 14:22:51.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:22:51.312: INFO: namespace kubectl-9765 deletion completed in 22.093007849s • [SLOW TEST:28.610 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:22:51.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 14:22:51.380: INFO: Waiting up to 5m0s for pod "downwardapi-volume-efb85540-2fcd-4f06-8d1b-309725288c85" in namespace "downward-api-506" to be "success or failure" May 22 14:22:51.398: INFO: Pod "downwardapi-volume-efb85540-2fcd-4f06-8d1b-309725288c85": Phase="Pending", Reason="", readiness=false. Elapsed: 17.917638ms May 22 14:22:53.403: INFO: Pod "downwardapi-volume-efb85540-2fcd-4f06-8d1b-309725288c85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02262709s May 22 14:22:55.408: INFO: Pod "downwardapi-volume-efb85540-2fcd-4f06-8d1b-309725288c85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027845512s STEP: Saw pod success May 22 14:22:55.408: INFO: Pod "downwardapi-volume-efb85540-2fcd-4f06-8d1b-309725288c85" satisfied condition "success or failure" May 22 14:22:55.411: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-efb85540-2fcd-4f06-8d1b-309725288c85 container client-container: STEP: delete the pod May 22 14:22:55.497: INFO: Waiting for pod downwardapi-volume-efb85540-2fcd-4f06-8d1b-309725288c85 to disappear May 22 14:22:55.514: INFO: Pod downwardapi-volume-efb85540-2fcd-4f06-8d1b-309725288c85 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:22:55.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-506" for this suite. 
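Here the client-container reads its own memory limit back out of a downwardAPI volume via resourceFieldRef. An illustrative equivalent:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-limits-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory   # the value the test reads back
EOF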
May 22 14:23:01.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:23:01.610: INFO: namespace downward-api-506 deletion completed in 6.091530635s • [SLOW TEST:10.297 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:23:01.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ae744323-dc19-466c-af7d-6f4202f82e2c STEP: Creating a pod to test consume secrets May 22 14:23:01.831: INFO: Waiting up to 5m0s for pod "pod-secrets-70c1ad67-9358-45a9-8ae9-15f5affe5cf7" in namespace "secrets-4631" to be "success or failure" May 22 14:23:01.835: INFO: Pod "pod-secrets-70c1ad67-9358-45a9-8ae9-15f5affe5cf7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.974494ms May 22 14:23:03.839: INFO: Pod "pod-secrets-70c1ad67-9358-45a9-8ae9-15f5affe5cf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008420482s May 22 14:23:05.843: INFO: Pod "pod-secrets-70c1ad67-9358-45a9-8ae9-15f5affe5cf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012719673s STEP: Saw pod success May 22 14:23:05.843: INFO: Pod "pod-secrets-70c1ad67-9358-45a9-8ae9-15f5affe5cf7" satisfied condition "success or failure" May 22 14:23:05.846: INFO: Trying to get logs from node iruya-worker pod pod-secrets-70c1ad67-9358-45a9-8ae9-15f5affe5cf7 container secret-volume-test: STEP: delete the pod May 22 14:23:05.860: INFO: Waiting for pod pod-secrets-70c1ad67-9358-45a9-8ae9-15f5affe5cf7 to disappear May 22 14:23:05.865: INFO: Pod pod-secrets-70c1ad67-9358-45a9-8ae9-15f5affe5cf7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:23:05.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4631" for this suite. May 22 14:23:11.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:23:11.957: INFO: namespace secrets-4631 deletion completed in 6.089425206s STEP: Destroying namespace "secret-namespace-4942" for this suite. 
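The point of this test is that Secrets are namespace-scoped: two Secrets may share a name, and a pod's volume always resolves the one in the pod's own namespace. A hand-run sketch with illustrative names:

    kubectl create namespace ns-a
    kubectl create namespace ns-b
    kubectl create secret generic shared-name --from-literal=data-1=value-a -n ns-a
    kubectl create secret generic shared-name --from-literal=data-1=value-b -n ns-b
    # a pod in ns-a mounting "shared-name" sees value-a, regardless of ns-b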
May 22 14:23:17.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:23:18.079: INFO: namespace secret-namespace-4942 deletion completed in 6.12229688s • [SLOW TEST:16.469 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:23:18.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 22 14:23:18.172: INFO: Waiting up to 5m0s for pod "downward-api-ec4ce7e6-217d-427a-b646-3ae7a5848fb3" in namespace "downward-api-8724" to be "success or failure" May 22 14:23:18.214: INFO: Pod "downward-api-ec4ce7e6-217d-427a-b646-3ae7a5848fb3": Phase="Pending", Reason="", readiness=false. Elapsed: 42.07909ms May 22 14:23:20.218: INFO: Pod "downward-api-ec4ce7e6-217d-427a-b646-3ae7a5848fb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046348751s May 22 14:23:22.223: INFO: Pod "downward-api-ec4ce7e6-217d-427a-b646-3ae7a5848fb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050938525s STEP: Saw pod success May 22 14:23:22.223: INFO: Pod "downward-api-ec4ce7e6-217d-427a-b646-3ae7a5848fb3" satisfied condition "success or failure" May 22 14:23:22.226: INFO: Trying to get logs from node iruya-worker2 pod downward-api-ec4ce7e6-217d-427a-b646-3ae7a5848fb3 container dapi-container: STEP: delete the pod May 22 14:23:22.277: INFO: Waiting for pod downward-api-ec4ce7e6-217d-427a-b646-3ae7a5848fb3 to disappear May 22 14:23:22.283: INFO: Pod downward-api-ec4ce7e6-217d-427a-b646-3ae7a5848fb3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:23:22.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8724" for this suite. 
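The env-var flavour of the downward API used above goes through valueFrom.resourceFieldRef; a minimal sketch with assumed names and image:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-env-demo           # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_REQUEST'"]
        resources:
          requests:
            cpu: 250m
            memory: 32Mi
          limits:
            cpu: 500m
            memory: 64Mi
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: dapi-container
              resource: limits.cpu
        - name: MEMORY_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: dapi-container
              resource: requests.memory
    EOF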
May 22 14:23:28.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:23:28.370: INFO: namespace downward-api-8724 deletion completed in 6.08471015s • [SLOW TEST:10.291 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:23:28.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 22 14:23:33.042: INFO: Successfully updated pod "annotationupdate13638082-bd3b-4f15-abcc-ca3c13573bc0" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:23:35.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6787" for this suite. 
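The annotation-update test depends on projected downwardAPI files being rewritten after metadata changes; a sketch under assumed names (the kubelet refreshes the file on its sync period, so the change is not instantaneous):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate-demo   # illustrative name
      annotations:
        builder: alice
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: annotations
                fieldRef:
                  fieldPath: metadata.annotations
    EOF
    kubectl annotate pod annotationupdate-demo builder=bob --overwrite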
May 22 14:23:57.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:23:57.191: INFO: namespace projected-6787 deletion completed in 22.11748637s • [SLOW TEST:28.820 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:23:57.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command May 22 14:23:57.251: INFO: Waiting up to 5m0s for pod "var-expansion-d9e7ee1d-e1e3-45ce-8511-c00bd7b37d4c" in namespace "var-expansion-9121" to be "success or failure" May 22 14:23:57.292: INFO: Pod "var-expansion-d9e7ee1d-e1e3-45ce-8511-c00bd7b37d4c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.808258ms May 22 14:23:59.343: INFO: Pod "var-expansion-d9e7ee1d-e1e3-45ce-8511-c00bd7b37d4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092059601s May 22 14:24:01.347: INFO: Pod "var-expansion-d9e7ee1d-e1e3-45ce-8511-c00bd7b37d4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095829323s STEP: Saw pod success May 22 14:24:01.347: INFO: Pod "var-expansion-d9e7ee1d-e1e3-45ce-8511-c00bd7b37d4c" satisfied condition "success or failure" May 22 14:24:01.349: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-d9e7ee1d-e1e3-45ce-8511-c00bd7b37d4c container dapi-container: STEP: delete the pod May 22 14:24:01.376: INFO: Waiting for pod var-expansion-d9e7ee1d-e1e3-45ce-8511-c00bd7b37d4c to disappear May 22 14:24:01.399: INFO: Pod var-expansion-d9e7ee1d-e1e3-45ce-8511-c00bd7b37d4c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:24:01.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9121" for this suite. 
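Substitution here is the kubelet's own $(VAR) expansion in command/args, drawn from the container's env and needing no shell; a minimal sketch with assumed names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo      # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        env:
        - name: MESSAGE
          value: "test message"
        command: ["/bin/echo"]
        args: ["$(MESSAGE)"]        # expanded by the kubelet, not by a shell
    EOF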
May 22 14:24:07.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:24:07.489: INFO: namespace var-expansion-9121 deletion completed in 6.086880766s • [SLOW TEST:10.298 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:24:07.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 22 14:24:07.594: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6375,SelfLink:/api/v1/namespaces/watch-6375/configmaps/e2e-watch-test-label-changed,UID:2eadffc1-3aea-4942-9669-2fd8e111c1da,ResourceVersion:12307260,Generation:0,CreationTimestamp:2020-05-22 14:24:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 22 14:24:07.594: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6375,SelfLink:/api/v1/namespaces/watch-6375/configmaps/e2e-watch-test-label-changed,UID:2eadffc1-3aea-4942-9669-2fd8e111c1da,ResourceVersion:12307261,Generation:0,CreationTimestamp:2020-05-22 14:24:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 22 14:24:07.594: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6375,SelfLink:/api/v1/namespaces/watch-6375/configmaps/e2e-watch-test-label-changed,UID:2eadffc1-3aea-4942-9669-2fd8e111c1da,ResourceVersion:12307262,Generation:0,CreationTimestamp:2020-05-22 14:24:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 22 14:24:17.646: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6375,SelfLink:/api/v1/namespaces/watch-6375/configmaps/e2e-watch-test-label-changed,UID:2eadffc1-3aea-4942-9669-2fd8e111c1da,ResourceVersion:12307285,Generation:0,CreationTimestamp:2020-05-22 14:24:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 22 14:24:17.647: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6375,SelfLink:/api/v1/namespaces/watch-6375/configmaps/e2e-watch-test-label-changed,UID:2eadffc1-3aea-4942-9669-2fd8e111c1da,ResourceVersion:12307286,Generation:0,CreationTimestamp:2020-05-22 14:24:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 22 14:24:17.647: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6375,SelfLink:/api/v1/namespaces/watch-6375/configmaps/e2e-watch-test-label-changed,UID:2eadffc1-3aea-4942-9669-2fd8e111c1da,ResourceVersion:12307287,Generation:0,CreationTimestamp:2020-05-22 14:24:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:24:17.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6375" for this suite. 
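The ADDED/MODIFIED/DELETED sequence above is an artifact of a label-filtered watch: relabeling the object out of the selector surfaces as DELETED, and restoring the label as ADDED. Reproduced by hand with the names from this run:

    kubectl get configmaps -n watch-6375 -w \
      -l watch-this-configmap=label-changed-and-restored
    # in a second shell, flip the label away and back:
    kubectl label configmap e2e-watch-test-label-changed -n watch-6375 \
      watch-this-configmap=other-value --overwrite
    kubectl label configmap e2e-watch-test-label-changed -n watch-6375 \
      watch-this-configmap=label-changed-and-restored --overwrite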
May 22 14:24:23.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:24:23.732: INFO: namespace watch-6375 deletion completed in 6.081793142s • [SLOW TEST:16.243 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:24:23.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 22 14:24:23.883: INFO: Pod name pod-release: Found 0 pods out of 1 May 22 14:24:28.889: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:24:29.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3089" for this suite. 
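Release is nothing more than a label edit: once a pod stops matching the controller's selector, the controller patches away its ownerReference and creates a replacement. A sketch with a placeholder pod name (the real name carries a generated suffix):

    kubectl label pod pod-release-XXXXX name=not-pod-release --overwrite
    kubectl get pod pod-release-XXXXX -o jsonpath='{.metadata.ownerReferences}'
    # expected: empty once the controller has released the pod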
May 22 14:24:36.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:24:36.316: INFO: namespace replication-controller-3089 deletion completed in 6.32347063s • [SLOW TEST:12.583 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:24:36.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 22 14:24:40.463: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-c46c9914-f77e-4520-9e50-0ed2d72a3833,GenerateName:,Namespace:events-9576,SelfLink:/api/v1/namespaces/events-9576/pods/send-events-c46c9914-f77e-4520-9e50-0ed2d72a3833,UID:1c68491c-2a90-4943-b7cf-6c4b81fed31d,ResourceVersion:12307391,Generation:0,CreationTimestamp:2020-05-22 14:24:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 406637268,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-59vwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-59vwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-59vwj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002daf0b0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002daf0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 14:24:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 14:24:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 14:24:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 14:24:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.231,StartTime:2020-05-22 14:24:36 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-22 14:24:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://eca1e5ce6c091278f2abbd45def185e9f0d5cd17b14eaee30d7a39c5ed2c889e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 22 14:24:42.468: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 22 14:24:44.474: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:24:44.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9576" for this suite. May 22 14:25:22.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:25:22.646: INFO: namespace events-9576 deletion completed in 38.135170252s • [SLOW TEST:46.330 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:25:22.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container May 22 14:25:27.243: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4582 pod-service-account-8f2eae7a-cdb2-498e-926d-034e1b79bb8a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 22 14:25:27.552: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4582 pod-service-account-8f2eae7a-cdb2-498e-926d-034e1b79bb8a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 22 14:25:27.788: INFO: Running 
'/usr/local/bin/kubectl exec --namespace=svcaccounts-4582 pod-service-account-8f2eae7a-cdb2-498e-926d-034e1b79bb8a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:25:27.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4582" for this suite. May 22 14:25:34.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:25:34.117: INFO: namespace svcaccounts-4582 deletion completed in 6.12798993s • [SLOW TEST:11.470 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:25:34.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 22 14:25:40.247: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 22 14:25:55.353: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:25:55.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7494" for this suite. 
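The grace-period handshake in the test above can be watched by hand: a graceful delete stamps deletionTimestamp and deletionGracePeriodSeconds on the object while the pod keeps running until the kubelet confirms termination. A sketch with an assumed pod name:

    kubectl delete pod graceful-demo --grace-period=30 --wait=false
    kubectl get pod graceful-demo \
      -o jsonpath='{.metadata.deletionGracePeriodSeconds}'   # prints 30 while terminating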
May 22 14:26:01.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:26:01.471: INFO: namespace pods-7494 deletion completed in 6.112162322s • [SLOW TEST:27.354 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:26:01.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3662.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3662.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3662.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3662.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3662.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3662.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 22 14:26:07.604: INFO: DNS probes using dns-3662/dns-test-4732c001-d16a-4876-abb6-1f81072c9229 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:26:07.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3662" for this suite. 
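The probe scripts above encode the pod A-record convention: the pod IP with dots replaced by dashes, under <namespace>.pod.cluster.local. Checked by hand from any pod with DNS tooling (the pod name here is assumed):

    # for a pod with IP 10.244.2.231 in namespace dns-3662:
    kubectl exec dnsutils-demo -- nslookup 10-244-2-231.dns-3662.pod.cluster.local
    kubectl exec dnsutils-demo -- \
      getent hosts dns-querier-1.dns-test-service.dns-3662.svc.cluster.local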
May 22 14:26:13.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:26:13.766: INFO: namespace dns-3662 deletion completed in 6.118929087s • [SLOW TEST:12.294 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:26:13.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-796f6968-9a38-4a57-a475-66b90840b4b4 STEP: Creating a pod to test consume secrets May 22 14:26:13.846: INFO: Waiting up to 5m0s for pod "pod-secrets-61d195c4-a646-4b20-97fa-84398d306e42" in namespace "secrets-9453" to be "success or failure" May 22 14:26:13.850: INFO: Pod "pod-secrets-61d195c4-a646-4b20-97fa-84398d306e42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296994ms May 22 14:26:15.854: INFO: Pod "pod-secrets-61d195c4-a646-4b20-97fa-84398d306e42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008739624s May 22 14:26:17.859: INFO: Pod "pod-secrets-61d195c4-a646-4b20-97fa-84398d306e42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0131806s STEP: Saw pod success May 22 14:26:17.859: INFO: Pod "pod-secrets-61d195c4-a646-4b20-97fa-84398d306e42" satisfied condition "success or failure" May 22 14:26:17.862: INFO: Trying to get logs from node iruya-worker pod pod-secrets-61d195c4-a646-4b20-97fa-84398d306e42 container secret-volume-test: STEP: delete the pod May 22 14:26:17.943: INFO: Waiting for pod pod-secrets-61d195c4-a646-4b20-97fa-84398d306e42 to disappear May 22 14:26:17.957: INFO: Pod pod-secrets-61d195c4-a646-4b20-97fa-84398d306e42 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:26:17.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9453" for this suite. 
May 22 14:26:23.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:26:24.042: INFO: namespace secrets-9453 deletion completed in 6.081152286s • [SLOW TEST:10.275 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:26:24.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 22 14:26:24.122: INFO: Waiting up to 5m0s for pod "pod-d383fa4b-6abb-49b8-8323-94fb16131dd3" in namespace "emptydir-7153" to be "success or failure" May 22 14:26:24.126: INFO: Pod "pod-d383fa4b-6abb-49b8-8323-94fb16131dd3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.930143ms May 22 14:26:26.232: INFO: Pod "pod-d383fa4b-6abb-49b8-8323-94fb16131dd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109839029s May 22 14:26:28.235: INFO: Pod "pod-d383fa4b-6abb-49b8-8323-94fb16131dd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11309152s STEP: Saw pod success May 22 14:26:28.235: INFO: Pod "pod-d383fa4b-6abb-49b8-8323-94fb16131dd3" satisfied condition "success or failure" May 22 14:26:28.237: INFO: Trying to get logs from node iruya-worker2 pod pod-d383fa4b-6abb-49b8-8323-94fb16131dd3 container test-container: STEP: delete the pod May 22 14:26:28.265: INFO: Waiting for pod pod-d383fa4b-6abb-49b8-8323-94fb16131dd3 to disappear May 22 14:26:28.269: INFO: Pod pod-d383fa4b-6abb-49b8-8323-94fb16131dd3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:26:28.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7153" for this suite. 
May 22 14:26:34.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:26:34.357: INFO: namespace emptydir-7153 deletion completed in 6.084593981s • [SLOW TEST:10.316 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:26:34.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-94ca0a01-5d6f-4d53-b9a8-920519b49d94 STEP: Creating a pod to test consume configMaps May 22 14:26:34.424: INFO: Waiting up to 5m0s for pod "pod-configmaps-dea7607b-ecde-411c-bb7a-4850266086fd" in namespace "configmap-4337" to be "success or failure" May 22 14:26:34.443: INFO: Pod "pod-configmaps-dea7607b-ecde-411c-bb7a-4850266086fd": Phase="Pending", Reason="", readiness=false. Elapsed: 19.857136ms May 22 14:26:36.447: INFO: Pod "pod-configmaps-dea7607b-ecde-411c-bb7a-4850266086fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02370994s May 22 14:26:38.451: INFO: Pod "pod-configmaps-dea7607b-ecde-411c-bb7a-4850266086fd": Phase="Running", Reason="", readiness=true. Elapsed: 4.027429195s May 22 14:26:40.455: INFO: Pod "pod-configmaps-dea7607b-ecde-411c-bb7a-4850266086fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031624295s STEP: Saw pod success May 22 14:26:40.455: INFO: Pod "pod-configmaps-dea7607b-ecde-411c-bb7a-4850266086fd" satisfied condition "success or failure" May 22 14:26:40.458: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-dea7607b-ecde-411c-bb7a-4850266086fd container configmap-volume-test: STEP: delete the pod May 22 14:26:40.520: INFO: Waiting for pod pod-configmaps-dea7607b-ecde-411c-bb7a-4850266086fd to disappear May 22 14:26:40.533: INFO: Pod pod-configmaps-dea7607b-ecde-411c-bb7a-4850266086fd no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:26:40.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4337" for this suite. 
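"Mappings and Item mode" translates to per-key items with an explicit file mode in the configMap volume source; a minimal sketch with illustrative names:

    kubectl create configmap demo-config --from-literal=data-2=value-2
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-mode-demo     # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/cfg/path/to && cat /etc/cfg/path/to/data-2"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/cfg
      volumes:
      - name: cfg
        configMap:
          name: demo-config
          items:
          - key: data-2
            path: path/to/data-2
            mode: 0400              # file shows up as -r--------
    EOF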
May 22 14:26:46.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:26:46.615: INFO: namespace configmap-4337 deletion completed in 6.078930148s • [SLOW TEST:12.257 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:26:46.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 14:26:46.686: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a005e030-55e0-4f45-b540-1518ded8a982" in namespace "projected-9982" to be "success or failure" May 22 14:26:46.690: INFO: Pod "downwardapi-volume-a005e030-55e0-4f45-b540-1518ded8a982": Phase="Pending", Reason="", readiness=false. Elapsed: 3.96819ms May 22 14:26:48.694: INFO: Pod "downwardapi-volume-a005e030-55e0-4f45-b540-1518ded8a982": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008044727s May 22 14:26:50.698: INFO: Pod "downwardapi-volume-a005e030-55e0-4f45-b540-1518ded8a982": Phase="Running", Reason="", readiness=true. Elapsed: 4.011841439s May 22 14:26:52.702: INFO: Pod "downwardapi-volume-a005e030-55e0-4f45-b540-1518ded8a982": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016228053s STEP: Saw pod success May 22 14:26:52.702: INFO: Pod "downwardapi-volume-a005e030-55e0-4f45-b540-1518ded8a982" satisfied condition "success or failure" May 22 14:26:52.706: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a005e030-55e0-4f45-b540-1518ded8a982 container client-container: STEP: delete the pod May 22 14:26:52.737: INFO: Waiting for pod downwardapi-volume-a005e030-55e0-4f45-b540-1518ded8a982 to disappear May 22 14:26:52.749: INFO: Pod downwardapi-volume-a005e030-55e0-4f45-b540-1518ded8a982 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:26:52.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9982" for this suite. 
May 22 14:26:58.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:26:58.841: INFO: namespace projected-9982 deletion completed in 6.088835546s • [SLOW TEST:12.226 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:26:58.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-334807c3-b8fb-4795-8794-03cbddd975b1 STEP: Creating a pod to test consume configMaps May 22 14:26:58.931: INFO: Waiting up to 5m0s for pod "pod-configmaps-93d6a052-53a6-45df-8ac0-d587c4a7dbfa" in namespace "configmap-692" to be "success or failure" May 22 14:26:58.940: INFO: Pod "pod-configmaps-93d6a052-53a6-45df-8ac0-d587c4a7dbfa": Phase="Pending", Reason="", readiness=false. Elapsed: 9.563492ms May 22 14:27:00.944: INFO: Pod "pod-configmaps-93d6a052-53a6-45df-8ac0-d587c4a7dbfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013928751s May 22 14:27:02.948: INFO: Pod "pod-configmaps-93d6a052-53a6-45df-8ac0-d587c4a7dbfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017612099s STEP: Saw pod success May 22 14:27:02.948: INFO: Pod "pod-configmaps-93d6a052-53a6-45df-8ac0-d587c4a7dbfa" satisfied condition "success or failure" May 22 14:27:02.951: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-93d6a052-53a6-45df-8ac0-d587c4a7dbfa container configmap-volume-test: STEP: delete the pod May 22 14:27:02.996: INFO: Waiting for pod pod-configmaps-93d6a052-53a6-45df-8ac0-d587c4a7dbfa to disappear May 22 14:27:03.042: INFO: Pod pod-configmaps-93d6a052-53a6-45df-8ac0-d587c4a7dbfa no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:27:03.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-692" for this suite. 
May 22 14:27:09.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:27:09.140: INFO: namespace configmap-692 deletion completed in 6.093408381s • [SLOW TEST:10.299 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:27:09.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 22 14:27:09.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8995' May 22 14:27:11.802: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 22 14:27:11.802: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 May 22 14:27:11.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-8995' May 22 14:27:11.926: INFO: stderr: "" May 22 14:27:11.926: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:27:11.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8995" for this suite. 
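The deprecation warning in the output above points at the replacement: the same job can be created without a generator flag (image and namespace taken from the log):

    kubectl create job e2e-test-nginx-job -n kubectl-8995 \
      --image=docker.io/library/nginx:1.14-alpine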
May 22 14:27:17.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:27:18.012: INFO: namespace kubectl-8995 deletion completed in 6.082516757s • [SLOW TEST:8.872 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:27:18.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 14:27:18.051: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c69d0273-4e4d-4efb-aca2-7bc65d695e17" in namespace "projected-8542" to be "success or failure" May 22 14:27:18.066: INFO: Pod "downwardapi-volume-c69d0273-4e4d-4efb-aca2-7bc65d695e17": Phase="Pending", Reason="", readiness=false. Elapsed: 15.451404ms May 22 14:27:20.071: INFO: Pod "downwardapi-volume-c69d0273-4e4d-4efb-aca2-7bc65d695e17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019596782s May 22 14:27:22.075: INFO: Pod "downwardapi-volume-c69d0273-4e4d-4efb-aca2-7bc65d695e17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023518128s STEP: Saw pod success May 22 14:27:22.075: INFO: Pod "downwardapi-volume-c69d0273-4e4d-4efb-aca2-7bc65d695e17" satisfied condition "success or failure" May 22 14:27:22.077: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c69d0273-4e4d-4efb-aca2-7bc65d695e17 container client-container: STEP: delete the pod May 22 14:27:22.180: INFO: Waiting for pod downwardapi-volume-c69d0273-4e4d-4efb-aca2-7bc65d695e17 to disappear May 22 14:27:22.187: INFO: Pod downwardapi-volume-c69d0273-4e4d-4efb-aca2-7bc65d695e17 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:27:22.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8542" for this suite. 
May 22 14:27:28.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:27:28.279: INFO: namespace projected-8542 deletion completed in 6.088674634s • [SLOW TEST:10.266 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:27:28.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-7259 STEP: creating a selector STEP: Creating the service pods in kubernetes May 22 14:27:28.355: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 22 14:27:54.526: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.40:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7259 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 14:27:54.526: INFO: >>> kubeConfig: /root/.kube/config I0522 14:27:54.556513 6 log.go:172] (0xc0005d6e70) (0xc002986500) Create stream I0522 14:27:54.556549 6 log.go:172] (0xc0005d6e70) (0xc002986500) Stream added, broadcasting: 1 I0522 14:27:54.558382 6 log.go:172] (0xc0005d6e70) Reply frame received for 1 I0522 14:27:54.558414 6 log.go:172] (0xc0005d6e70) (0xc001e860a0) Create stream I0522 14:27:54.558424 6 log.go:172] (0xc0005d6e70) (0xc001e860a0) Stream added, broadcasting: 3 I0522 14:27:54.559302 6 log.go:172] (0xc0005d6e70) Reply frame received for 3 I0522 14:27:54.559332 6 log.go:172] (0xc0005d6e70) (0xc0029865a0) Create stream I0522 14:27:54.559342 6 log.go:172] (0xc0005d6e70) (0xc0029865a0) Stream added, broadcasting: 5 I0522 14:27:54.560111 6 log.go:172] (0xc0005d6e70) Reply frame received for 5 I0522 14:27:54.650820 6 log.go:172] (0xc0005d6e70) Data frame received for 3 I0522 14:27:54.650908 6 log.go:172] (0xc001e860a0) (3) Data frame handling I0522 14:27:54.650954 6 log.go:172] (0xc0005d6e70) Data frame received for 5 I0522 14:27:54.650991 6 log.go:172] (0xc0029865a0) (5) Data frame handling I0522 14:27:54.651035 6 log.go:172] (0xc001e860a0) (3) Data frame sent I0522 14:27:54.651072 6 log.go:172] (0xc0005d6e70) Data frame received for 3 I0522 14:27:54.651092 6 log.go:172] (0xc001e860a0) (3) Data frame handling I0522 14:27:54.652516 6 log.go:172] (0xc0005d6e70) Data frame received for 1 I0522 14:27:54.652541 6 log.go:172] (0xc002986500) (1) Data frame handling I0522 14:27:54.652553 6 log.go:172] 
(0xc002986500) (1) Data frame sent I0522 14:27:54.652574 6 log.go:172] (0xc0005d6e70) (0xc002986500) Stream removed, broadcasting: 1 I0522 14:27:54.652599 6 log.go:172] (0xc0005d6e70) Go away received I0522 14:27:54.652765 6 log.go:172] (0xc0005d6e70) (0xc002986500) Stream removed, broadcasting: 1 I0522 14:27:54.652781 6 log.go:172] (0xc0005d6e70) (0xc001e860a0) Stream removed, broadcasting: 3 I0522 14:27:54.652788 6 log.go:172] (0xc0005d6e70) (0xc0029865a0) Stream removed, broadcasting: 5 May 22 14:27:54.652: INFO: Found all expected endpoints: [netserver-0] May 22 14:27:54.656: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.238:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7259 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 22 14:27:54.656: INFO: >>> kubeConfig: /root/.kube/config I0522 14:27:54.686180 6 log.go:172] (0xc001418000) (0xc000a06fa0) Create stream I0522 14:27:54.686211 6 log.go:172] (0xc001418000) (0xc000a06fa0) Stream added, broadcasting: 1 I0522 14:27:54.688445 6 log.go:172] (0xc001418000) Reply frame received for 1 I0522 14:27:54.688499 6 log.go:172] (0xc001418000) (0xc00195c000) Create stream I0522 14:27:54.688524 6 log.go:172] (0xc001418000) (0xc00195c000) Stream added, broadcasting: 3 I0522 14:27:54.689753 6 log.go:172] (0xc001418000) Reply frame received for 3 I0522 14:27:54.689810 6 log.go:172] (0xc001418000) (0xc002986640) Create stream I0522 14:27:54.689824 6 log.go:172] (0xc001418000) (0xc002986640) Stream added, broadcasting: 5 I0522 14:27:54.690738 6 log.go:172] (0xc001418000) Reply frame received for 5 I0522 14:27:54.756554 6 log.go:172] (0xc001418000) Data frame received for 3 I0522 14:27:54.756588 6 log.go:172] (0xc00195c000) (3) Data frame handling I0522 14:27:54.756601 6 log.go:172] (0xc00195c000) (3) Data frame sent I0522 14:27:54.756610 6 log.go:172] (0xc001418000) Data frame received for 3 I0522 14:27:54.756625 6 log.go:172] (0xc00195c000) (3) Data frame handling I0522 14:27:54.756666 6 log.go:172] (0xc001418000) Data frame received for 5 I0522 14:27:54.756680 6 log.go:172] (0xc002986640) (5) Data frame handling I0522 14:27:54.758540 6 log.go:172] (0xc001418000) Data frame received for 1 I0522 14:27:54.758561 6 log.go:172] (0xc000a06fa0) (1) Data frame handling I0522 14:27:54.758572 6 log.go:172] (0xc000a06fa0) (1) Data frame sent I0522 14:27:54.758586 6 log.go:172] (0xc001418000) (0xc000a06fa0) Stream removed, broadcasting: 1 I0522 14:27:54.758609 6 log.go:172] (0xc001418000) Go away received I0522 14:27:54.758706 6 log.go:172] (0xc001418000) (0xc000a06fa0) Stream removed, broadcasting: 1 I0522 14:27:54.758733 6 log.go:172] (0xc001418000) (0xc00195c000) Stream removed, broadcasting: 3 I0522 14:27:54.758745 6 log.go:172] (0xc001418000) (0xc002986640) Stream removed, broadcasting: 5 May 22 14:27:54.758: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:27:54.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7259" for this suite. 
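Both endpoint checks above boil down to one probe: exec a curl from the hostNetwork helper pod against each netserver pod's /hostName endpoint and compare the answer to the expected pod name. Reproduced by hand with kubectl exec (pod names, namespace, IP, and curl flags taken from the ExecWithOptions lines in the transcript):

kubectl exec -n pod-network-test-7259 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.40:8080/hostName | grep -v '^\s*$'"
# expected output: the name of the netserver pod that owns 10.244.1.40 (netserver-0 here)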
May 22 14:28:18.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:28:18.891: INFO: namespace pod-network-test-7259 deletion completed in 24.128911117s • [SLOW TEST:50.612 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:28:18.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:28:24.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8383" for this suite. 
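The adoption flow above has only two moving parts: a bare pod carrying a 'name' label is created first, then a replication controller whose selector matches that label; instead of creating a fresh replica, the controller adopts the orphan. A minimal sketch with illustrative names (nginx stands in for whatever image the framework uses):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption             # the label the controller will match
spec:
  containers:
  - name: pod-adoption
    image: nginx
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption             # matches the orphan, so it is adopted
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: nginx
EOF
# adoption is visible as an ownerReference stamped onto the formerly bare pod:
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'   # ReplicationController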
May 22 14:28:46.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:28:46.124: INFO: namespace replication-controller-8383 deletion completed in 22.10971844s • [SLOW TEST:27.232 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:28:46.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 14:28:46.185: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:28:47.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6353" for this suite. 
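The CRD case above only round-trips a definition through the API server: create it, confirm the new resource type is served, delete it. A minimal sketch against a v1.15 API server (which still serves apiextensions.k8s.io/v1beta1); the group and kind here are illustrative, not the randomized names the test generates:

kubectl create -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1   # v1beta1 matches the v1.15.7 server in this run
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com        # must be <plural>.<group>
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
EOF
kubectl get crontabs                        # served once the CRD is established (may take a moment)
kubectl delete crd crontabs.stable.example.com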
May 22 14:28:53.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:28:53.335: INFO: namespace custom-resource-definition-6353 deletion completed in 6.0993139s • [SLOW TEST:7.210 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:28:53.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-kpqd STEP: Creating a pod to test atomic-volume-subpath May 22 14:28:53.418: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-kpqd" in namespace "subpath-5259" to be "success or failure" May 22 14:28:53.422: INFO: Pod "pod-subpath-test-downwardapi-kpqd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012062ms May 22 14:28:55.443: INFO: Pod "pod-subpath-test-downwardapi-kpqd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02468823s May 22 14:28:57.447: INFO: Pod "pod-subpath-test-downwardapi-kpqd": Phase="Running", Reason="", readiness=true. Elapsed: 4.028559213s May 22 14:28:59.451: INFO: Pod "pod-subpath-test-downwardapi-kpqd": Phase="Running", Reason="", readiness=true. Elapsed: 6.0324422s May 22 14:29:01.455: INFO: Pod "pod-subpath-test-downwardapi-kpqd": Phase="Running", Reason="", readiness=true. Elapsed: 8.036414344s May 22 14:29:03.459: INFO: Pod "pod-subpath-test-downwardapi-kpqd": Phase="Running", Reason="", readiness=true. Elapsed: 10.040583953s May 22 14:29:05.491: INFO: Pod "pod-subpath-test-downwardapi-kpqd": Phase="Running", Reason="", readiness=true. Elapsed: 12.072757315s May 22 14:29:07.494: INFO: Pod "pod-subpath-test-downwardapi-kpqd": Phase="Running", Reason="", readiness=true. Elapsed: 14.075928355s May 22 14:29:09.499: INFO: Pod "pod-subpath-test-downwardapi-kpqd": Phase="Running", Reason="", readiness=true. Elapsed: 16.080333188s May 22 14:29:11.503: INFO: Pod "pod-subpath-test-downwardapi-kpqd": Phase="Running", Reason="", readiness=true. Elapsed: 18.084606484s May 22 14:29:13.507: INFO: Pod "pod-subpath-test-downwardapi-kpqd": Phase="Running", Reason="", readiness=true. Elapsed: 20.088341499s May 22 14:29:15.511: INFO: Pod "pod-subpath-test-downwardapi-kpqd": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.092758112s May 22 14:29:17.514: INFO: Pod "pod-subpath-test-downwardapi-kpqd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.096057274s STEP: Saw pod success May 22 14:29:17.514: INFO: Pod "pod-subpath-test-downwardapi-kpqd" satisfied condition "success or failure" May 22 14:29:17.516: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-kpqd container test-container-subpath-downwardapi-kpqd: STEP: delete the pod May 22 14:29:17.552: INFO: Waiting for pod pod-subpath-test-downwardapi-kpqd to disappear May 22 14:29:17.565: INFO: Pod pod-subpath-test-downwardapi-kpqd no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-kpqd May 22 14:29:17.565: INFO: Deleting pod "pod-subpath-test-downwardapi-kpqd" in namespace "subpath-5259" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:29:17.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5259" for this suite. May 22 14:29:23.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:29:23.680: INFO: namespace subpath-5259 deletion completed in 6.110305726s • [SLOW TEST:30.345 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:29:23.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:29:23.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2651" for this suite. 
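The Services case above shows no intermediate steps because it only inspects the built-in "kubernetes" service: it must exist in the default namespace, expose the HTTPS port 443, and front the API server. The equivalent manual check:

kubectl get service kubernetes -n default -o wide    # ClusterIP service exposing 443/TCP
kubectl get endpoints kubernetes -n default          # backed by the API server address(es)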
May 22 14:29:29.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:29:29.855: INFO: namespace services-2651 deletion completed in 6.088643514s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.175 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:29:29.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 22 14:29:29.903: INFO: namespace kubectl-592 May 22 14:29:29.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-592' May 22 14:29:30.266: INFO: stderr: "" May 22 14:29:30.266: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 22 14:29:31.270: INFO: Selector matched 1 pods for map[app:redis] May 22 14:29:31.270: INFO: Found 0 / 1 May 22 14:29:32.271: INFO: Selector matched 1 pods for map[app:redis] May 22 14:29:32.271: INFO: Found 0 / 1 May 22 14:29:33.270: INFO: Selector matched 1 pods for map[app:redis] May 22 14:29:33.270: INFO: Found 0 / 1 May 22 14:29:34.271: INFO: Selector matched 1 pods for map[app:redis] May 22 14:29:34.271: INFO: Found 1 / 1 May 22 14:29:34.271: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 22 14:29:34.275: INFO: Selector matched 1 pods for map[app:redis] May 22 14:29:34.275: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 22 14:29:34.275: INFO: wait on redis-master startup in kubectl-592 May 22 14:29:34.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6z6br redis-master --namespace=kubectl-592' May 22 14:29:34.375: INFO: stderr: "" May 22 14:29:34.375: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 May 14:29:33.331 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 May 14:29:33.331 # Server started, Redis version 3.2.12\n1:M 22 May 14:29:33.331 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 May 14:29:33.331 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 22 14:29:34.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-592' May 22 14:29:34.514: INFO: stderr: "" May 22 14:29:34.514: INFO: stdout: "service/rm2 exposed\n" May 22 14:29:34.551: INFO: Service rm2 in namespace kubectl-592 found. STEP: exposing service May 22 14:29:36.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-592' May 22 14:29:36.694: INFO: stderr: "" May 22 14:29:36.694: INFO: stdout: "service/rm3 exposed\n" May 22 14:29:36.700: INFO: Service rm3 in namespace kubectl-592 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:29:38.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-592" for this suite. 
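The two expose forms the test drives, verbatim from the run above: first the RC is exposed as a new service, then that service is re-exposed under another name and port. All three objects end up selecting the same redis pods; only the service ports differ.

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-592
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-592
kubectl get svc rm2 rm3 --namespace=kubectl-592     # both route to container port 6379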
May 22 14:30:06.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:30:06.805: INFO: namespace kubectl-592 deletion completed in 28.094185039s • [SLOW TEST:36.950 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:30:06.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command May 22 14:30:06.883: INFO: Waiting up to 5m0s for pod "client-containers-c12ac33d-da52-4729-a1e5-5c30f203c20e" in namespace "containers-1962" to be "success or failure" May 22 14:30:06.903: INFO: Pod "client-containers-c12ac33d-da52-4729-a1e5-5c30f203c20e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.069289ms May 22 14:30:08.965: INFO: Pod "client-containers-c12ac33d-da52-4729-a1e5-5c30f203c20e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082107001s May 22 14:30:10.969: INFO: Pod "client-containers-c12ac33d-da52-4729-a1e5-5c30f203c20e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086677556s STEP: Saw pod success May 22 14:30:10.969: INFO: Pod "client-containers-c12ac33d-da52-4729-a1e5-5c30f203c20e" satisfied condition "success or failure" May 22 14:30:10.973: INFO: Trying to get logs from node iruya-worker2 pod client-containers-c12ac33d-da52-4729-a1e5-5c30f203c20e container test-container: STEP: delete the pod May 22 14:30:11.006: INFO: Waiting for pod client-containers-c12ac33d-da52-4729-a1e5-5c30f203c20e to disappear May 22 14:30:11.010: INFO: Pod client-containers-c12ac33d-da52-4729-a1e5-5c30f203c20e no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:30:11.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1962" for this suite. 
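"Override the image's default command" maps to the pod spec's command field, which replaces the Docker ENTRYPOINT (args would replace CMD instead). A minimal sketch, with busybox as a stand-in for the test's image:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # assumption: any image with /bin/echo
    command: ["/bin/echo", "entrypoint overridden"]   # replaces the image ENTRYPOINT
EOF
kubectl logs entrypoint-override-demo  # prints the overridden output once the pod succeeds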
May 22 14:30:17.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:30:17.173: INFO: namespace containers-1962 deletion completed in 6.141982817s • [SLOW TEST:10.367 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:30:17.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:31:17.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8739" for this suite. 
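The probe case above pins down an important asymmetry: a failing readiness probe keeps the pod out of Ready (and out of service endpoints) but, unlike a liveness probe, never restarts the container, hence the minute-long observation window before teardown. A minimal sketch of such a pod, with illustrative names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-demo
spec:
  containers:
  - name: probe-demo
    image: nginx                       # assumption: any long-running image works
    readinessProbe:
      exec:
        command: ["/bin/false"]        # always fails, so Ready stays False
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod readiness-never-demo   # READY 0/1 indefinitely, RESTARTS stays 0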
May 22 14:31:39.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:31:39.356: INFO: namespace container-probe-8739 deletion completed in 22.105980938s • [SLOW TEST:82.183 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:31:39.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-6e4d6131-4690-4d24-8323-042ef17139e9 STEP: Creating a pod to test consume secrets May 22 14:31:39.452: INFO: Waiting up to 5m0s for pod "pod-secrets-8176dfe4-1f61-49a7-9871-d9bc168f07da" in namespace "secrets-8035" to be "success or failure" May 22 14:31:39.469: INFO: Pod "pod-secrets-8176dfe4-1f61-49a7-9871-d9bc168f07da": Phase="Pending", Reason="", readiness=false. Elapsed: 16.934732ms May 22 14:31:41.481: INFO: Pod "pod-secrets-8176dfe4-1f61-49a7-9871-d9bc168f07da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028597673s May 22 14:31:43.485: INFO: Pod "pod-secrets-8176dfe4-1f61-49a7-9871-d9bc168f07da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032267082s STEP: Saw pod success May 22 14:31:43.485: INFO: Pod "pod-secrets-8176dfe4-1f61-49a7-9871-d9bc168f07da" satisfied condition "success or failure" May 22 14:31:43.487: INFO: Trying to get logs from node iruya-worker pod pod-secrets-8176dfe4-1f61-49a7-9871-d9bc168f07da container secret-volume-test: STEP: delete the pod May 22 14:31:43.512: INFO: Waiting for pod pod-secrets-8176dfe4-1f61-49a7-9871-d9bc168f07da to disappear May 22 14:31:43.538: INFO: Pod pod-secrets-8176dfe4-1f61-49a7-9871-d9bc168f07da no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:31:43.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8035" for this suite. 
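"With mappings" means the secret is not mounted key-for-key: an items list remaps each key to a chosen path under the mount point. A minimal sketch (key, path, and image are illustrative):

kubectl create secret generic secret-test-map --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                     # assumption: stands in for the test image
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1                    # the mapping: key -> custom file name
        path: new-path-data-1
EOF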
May 22 14:31:49.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:31:49.665: INFO: namespace secrets-8035 deletion completed in 6.124122335s • [SLOW TEST:10.308 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:31:49.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc May 22 14:31:49.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1772' May 22 14:31:50.005: INFO: stderr: "" May 22 14:31:50.005: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. May 22 14:31:51.014: INFO: Selector matched 1 pods for map[app:redis] May 22 14:31:51.014: INFO: Found 0 / 1 May 22 14:31:52.010: INFO: Selector matched 1 pods for map[app:redis] May 22 14:31:52.010: INFO: Found 0 / 1 May 22 14:31:53.010: INFO: Selector matched 1 pods for map[app:redis] May 22 14:31:53.010: INFO: Found 0 / 1 May 22 14:31:54.009: INFO: Selector matched 1 pods for map[app:redis] May 22 14:31:54.010: INFO: Found 1 / 1 May 22 14:31:54.010: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 22 14:31:54.012: INFO: Selector matched 1 pods for map[app:redis] May 22 14:31:54.012: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 22 14:31:54.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9bhkp redis-master --namespace=kubectl-1772' May 22 14:31:54.130: INFO: stderr: "" May 22 14:31:54.130: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 May 14:31:53.067 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 May 14:31:53.067 # Server started, Redis version 3.2.12\n1:M 22 May 14:31:53.067 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 May 14:31:53.067 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 22 14:31:54.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9bhkp redis-master --namespace=kubectl-1772 --tail=1' May 22 14:31:54.237: INFO: stderr: "" May 22 14:31:54.237: INFO: stdout: "1:M 22 May 14:31:53.067 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 22 14:31:54.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9bhkp redis-master --namespace=kubectl-1772 --limit-bytes=1' May 22 14:31:54.335: INFO: stderr: "" May 22 14:31:54.335: INFO: stdout: " " STEP: exposing timestamps May 22 14:31:54.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9bhkp redis-master --namespace=kubectl-1772 --tail=1 --timestamps' May 22 14:31:54.436: INFO: stderr: "" May 22 14:31:54.436: INFO: stdout: "2020-05-22T14:31:53.068246628Z 1:M 22 May 14:31:53.067 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 22 14:31:56.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9bhkp redis-master --namespace=kubectl-1772 --since=1s' May 22 14:31:57.053: INFO: stderr: "" May 22 14:31:57.053: INFO: stdout: "" May 22 14:31:57.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9bhkp redis-master --namespace=kubectl-1772 --since=24h' May 22 14:31:57.156: INFO: stderr: "" May 22 14:31:57.156: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 May 14:31:53.067 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 May 14:31:53.067 # Server started, Redis version 3.2.12\n1:M 22 May 14:31:53.067 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 May 14:31:53.067 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources May 22 14:31:57.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1772' May 22 14:31:57.249: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 22 14:31:57.249: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 22 14:31:57.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1772' May 22 14:31:57.349: INFO: stderr: "No resources found.\n" May 22 14:31:57.349: INFO: stdout: "" May 22 14:31:57.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1772 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 22 14:31:57.452: INFO: stderr: "" May 22 14:31:57.453: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:31:57.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1772" for this suite. 
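Condensed, the filtering matrix the test just walked through (pod and namespace names from the run above); each flag narrows the same log stream a different way:

kubectl logs redis-master-9bhkp redis-master --namespace=kubectl-1772                        # everything
kubectl logs redis-master-9bhkp redis-master --namespace=kubectl-1772 --tail=1               # last line only
kubectl logs redis-master-9bhkp redis-master --namespace=kubectl-1772 --limit-bytes=1        # first byte only
kubectl logs redis-master-9bhkp redis-master --namespace=kubectl-1772 --tail=1 --timestamps  # RFC3339 prefix per line
kubectl logs redis-master-9bhkp redis-master --namespace=kubectl-1772 --since=1s             # empty: nothing logged in the last second
kubectl logs redis-master-9bhkp redis-master --namespace=kubectl-1772 --since=24h            # everything again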
May 22 14:32:03.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:32:03.609: INFO: namespace kubectl-1772 deletion completed in 6.15311266s • [SLOW TEST:13.943 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:32:03.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 22 14:32:03.678: INFO: Waiting up to 5m0s for pod "pod-75ee9407-f33d-4864-b6e9-12616a36a4a4" in namespace "emptydir-7684" to be "success or failure" May 22 14:32:03.689: INFO: Pod "pod-75ee9407-f33d-4864-b6e9-12616a36a4a4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.494288ms May 22 14:32:05.695: INFO: Pod "pod-75ee9407-f33d-4864-b6e9-12616a36a4a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017047832s May 22 14:32:07.698: INFO: Pod "pod-75ee9407-f33d-4864-b6e9-12616a36a4a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020719459s STEP: Saw pod success May 22 14:32:07.699: INFO: Pod "pod-75ee9407-f33d-4864-b6e9-12616a36a4a4" satisfied condition "success or failure" May 22 14:32:07.702: INFO: Trying to get logs from node iruya-worker pod pod-75ee9407-f33d-4864-b6e9-12616a36a4a4 container test-container: STEP: delete the pod May 22 14:32:07.716: INFO: Waiting for pod pod-75ee9407-f33d-4864-b6e9-12616a36a4a4 to disappear May 22 14:32:07.720: INFO: Pod pod-75ee9407-f33d-4864-b6e9-12616a36a4a4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:32:07.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7684" for this suite. 
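The (non-root,0644,default) triple in the case name decodes to: run as a non-root UID, exercise a 0644 file, on the default emptyDir medium (node disk rather than tmpfs). A minimal sketch under those assumptions, with illustrative names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                    # the "non-root" part
  containers:
  - name: test-container
    image: busybox                     # assumption: stands in for the test image
    command: ["sh", "-c", "echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # no medium set -> "default", i.e. node disk
EOF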
May 22 14:32:13.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:32:13.819: INFO: namespace emptydir-7684 deletion completed in 6.095127988s • [SLOW TEST:10.209 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:32:13.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 22 14:32:13.900: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 22 14:32:13.918: INFO: Waiting for terminating namespaces to be deleted... May 22 14:32:13.934: INFO: Logging pods the kubelet thinks are on node iruya-worker before test May 22 14:32:13.939: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 22 14:32:13.939: INFO: Container kube-proxy ready: true, restart count 0 May 22 14:32:13.939: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 22 14:32:13.939: INFO: Container kindnet-cni ready: true, restart count 0 May 22 14:32:13.939: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test May 22 14:32:13.944: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container status recorded) May 22 14:32:13.944: INFO: Container kube-proxy ready: true, restart count 0 May 22 14:32:13.944: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container status recorded) May 22 14:32:13.944: INFO: Container kindnet-cni ready: true, restart count 0 May 22 14:32:13.944: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container status recorded) May 22 14:32:13.944: INFO: Container coredns ready: true, restart count 0 May 22 14:32:13.944: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container status recorded) May 22 14:32:13.944: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-2228acd8-dcb2-4acb-8bb4-9a22ec6ecbd9 42 STEP: Trying to relaunch the pod, now with labels. 

STEP: removing the label kubernetes.io/e2e-2228acd8-dcb2-4acb-8bb4-9a22ec6ecbd9 off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-2228acd8-dcb2-4acb-8bb4-9a22ec6ecbd9 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:32:22.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4279" for this suite. May 22 14:32:32.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:32:32.177: INFO: namespace sched-pred-4279 deletion completed in 10.090721745s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:18.357 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:32:32.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 22 14:32:32.223: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 22 14:32:32.244: INFO: Waiting for terminating namespaces to be deleted... 
May 22 14:32:32.246: INFO: Logging pods the kubelet thinks are on node iruya-worker before test May 22 14:32:32.253: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 22 14:32:32.253: INFO: Container kube-proxy ready: true, restart count 0 May 22 14:32:32.253: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 22 14:32:32.253: INFO: Container kindnet-cni ready: true, restart count 0 May 22 14:32:32.253: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test May 22 14:32:32.274: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container status recorded) May 22 14:32:32.274: INFO: Container coredns ready: true, restart count 0 May 22 14:32:32.274: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container status recorded) May 22 14:32:32.274: INFO: Container coredns ready: true, restart count 0 May 22 14:32:32.274: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container status recorded) May 22 14:32:32.274: INFO: Container kindnet-cni ready: true, restart count 0 May 22 14:32:32.274: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container status recorded) May 22 14:32:32.274: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 May 22 14:32:32.343: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 May 22 14:32:32.343: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 May 22 14:32:32.343: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker May 22 14:32:32.343: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 May 22 14:32:32.343: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker May 22 14:32:32.343: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires an unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-40bab2c1-ac07-4226-81dd-5f559cafaae5.1611603b8edab5e9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-435/filler-pod-40bab2c1-ac07-4226-81dd-5f559cafaae5 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-40bab2c1-ac07-4226-81dd-5f559cafaae5.1611603bdd45b9e1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-40bab2c1-ac07-4226-81dd-5f559cafaae5.1611603c36e37786], Reason = [Created], Message = [Created container filler-pod-40bab2c1-ac07-4226-81dd-5f559cafaae5] STEP: Considering event: Type = [Normal], Name = [filler-pod-40bab2c1-ac07-4226-81dd-5f559cafaae5.1611603c4a530610], Reason = [Started], Message = [Started container filler-pod-40bab2c1-ac07-4226-81dd-5f559cafaae5] STEP: Considering event: Type = [Normal], Name = [filler-pod-cd312b86-9214-46a1-9352-09c093e0fc78.1611603b91766c4b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-435/filler-pod-cd312b86-9214-46a1-9352-09c093e0fc78 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-cd312b86-9214-46a1-9352-09c093e0fc78.1611603c1144610a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-cd312b86-9214-46a1-9352-09c093e0fc78.1611603c4a48fe30], Reason = [Created], Message = [Created container filler-pod-cd312b86-9214-46a1-9352-09c093e0fc78] STEP: Considering event: Type = [Normal], Name = [filler-pod-cd312b86-9214-46a1-9352-09c093e0fc78.1611603c582e7a2c], Reason = [Started], Message = [Started container filler-pod-cd312b86-9214-46a1-9352-09c093e0fc78] STEP: Considering event: Type = [Warning], Name = [additional-pod.1611603c80c29295], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:32:37.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-435" for this suite. 
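The FailedScheduling event above is the assertion: once the filler pods have claimed most allocatable CPU on both workers, one more pod with a large CPU request must be unschedulable. Reproduced by hand (the pause image comes from the events above; the request size is an assumption, anything above a node's free CPU works):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod                 # name borrowed from the event above
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1        # same image the filler pods use
    resources:
      requests:
        cpu: "6"                       # assumption: exceeds any node's free CPU
EOF
kubectl describe pod additional-pod    # Events: FailedScheduling ... Insufficient cpu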
May 22 14:32:43.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:32:43.815: INFO: namespace sched-pred-435 deletion completed in 6.31766515s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:11.638 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:32:43.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 14:32:43.848: INFO: Creating deployment "test-recreate-deployment" May 22 14:32:43.865: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 May 22 14:32:43.902: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 22 14:32:45.934: INFO: Waiting for deployment "test-recreate-deployment" to complete May 22 14:32:45.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725754763, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725754763, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725754764, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725754763, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 22 14:32:47.990: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 22 14:32:47.998: INFO: Updating deployment test-recreate-deployment May 22 14:32:47.998: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 22 14:32:48.263: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-8726,SelfLink:/apis/apps/v1/namespaces/deployment-8726/deployments/test-recreate-deployment,UID:fc087bcf-1f60-4553-8e66-11d2a27a90d7,ResourceVersion:12309078,Generation:2,CreationTimestamp:2020-05-22 14:32:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-22 14:32:48 +0000 UTC 2020-05-22 14:32:48 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-22 14:32:48 +0000 UTC 2020-05-22 14:32:43 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 22 14:32:48.270: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-8726,SelfLink:/apis/apps/v1/namespaces/deployment-8726/replicasets/test-recreate-deployment-5c8c9cc69d,UID:bca1d1c0-dfc5-4278-a74c-33882df054c3,ResourceVersion:12309076,Generation:1,CreationTimestamp:2020-05-22 14:32:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment fc087bcf-1f60-4553-8e66-11d2a27a90d7 0xc0020cee67 0xc0020cee68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 22 14:32:48.270: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 22 14:32:48.270: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-8726,SelfLink:/apis/apps/v1/namespaces/deployment-8726/replicasets/test-recreate-deployment-6df85df6b9,UID:0541c969-676a-41d5-bf84-aa269931e004,ResourceVersion:12309067,Generation:2,CreationTimestamp:2020-05-22 14:32:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment fc087bcf-1f60-4553-8e66-11d2a27a90d7 0xc0020cef37 0xc0020cef38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 22 14:32:48.488: INFO: Pod "test-recreate-deployment-5c8c9cc69d-pqz7f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-pqz7f,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-8726,SelfLink:/api/v1/namespaces/deployment-8726/pods/test-recreate-deployment-5c8c9cc69d-pqz7f,UID:9cebf56a-8d20-4fe1-a777-9e3f3cd114d4,ResourceVersion:12309079,Generation:0,CreationTimestamp:2020-05-22 14:32:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d bca1d1c0-dfc5-4278-a74c-33882df054c3 0xc00350fc67 0xc00350fc68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rpbb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rpbb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rpbb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00350fce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00350fd00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 14:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 14:32:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-22 14:32:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-22 14:32:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-22 14:32:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:32:48.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8726" for this suite. 
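A note for readers tracing the Recreate rollout above: stripped of defaults, the Deployment in the dump reduces to the Go construction below (field values taken from the log). Because Strategy.Type is Recreate, the controller first scales the old redis ReplicaSet (test-recreate-deployment-6df85df6b9) down to zero and only then brings pods up for the new nginx one, which is why the status shows AvailableReplicas:0 with reason MinimumReplicasUnavailable while revision 2 is still progressing. A minimal sketch using the k8s.io/api types:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate: delete all old pods before any new pod is created,
			// so availability dips to zero during the rollout.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	fmt.Printf("%s uses strategy %s\n", d.Name, d.Spec.Strategy.Type)
}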
May 22 14:32:54.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:32:54.730: INFO: namespace deployment-8726 deletion completed in 6.238399025s • [SLOW TEST:10.914 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:32:54.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 22 14:32:54.769: INFO: Creating ReplicaSet my-hostname-basic-bb47dad3-1785-44c7-9292-67bc9b0ef272 May 22 14:32:54.841: INFO: Pod name my-hostname-basic-bb47dad3-1785-44c7-9292-67bc9b0ef272: Found 0 pods out of 1 May 22 14:32:59.859: INFO: Pod name my-hostname-basic-bb47dad3-1785-44c7-9292-67bc9b0ef272: Found 1 pods out of 1 May 22 14:32:59.859: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-bb47dad3-1785-44c7-9292-67bc9b0ef272" is running May 22 14:32:59.862: INFO: Pod "my-hostname-basic-bb47dad3-1785-44c7-9292-67bc9b0ef272-v7fd6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 14:32:54 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 14:32:58 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 14:32:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-22 14:32:54 +0000 UTC Reason: Message:}]) May 22 14:32:59.862: INFO: Trying to dial the pod May 22 14:33:04.872: INFO: Controller my-hostname-basic-bb47dad3-1785-44c7-9292-67bc9b0ef272: Got expected result from replica 1 [my-hostname-basic-bb47dad3-1785-44c7-9292-67bc9b0ef272-v7fd6]: "my-hostname-basic-bb47dad3-1785-44c7-9292-67bc9b0ef272-v7fd6", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:33:04.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6004" for this suite. 
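The spec above creates a one-replica ReplicaSet, waits for the pod, then dials it and expects the pod's own name echoed back ("Got expected result from replica 1"). The log does not show the image or port, so both are assumptions in this sketch; the conformance suite conventionally uses a serve-hostname image listening on 9376:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	name := "my-hostname-basic" // the suite appends a generated UUID; shortened here
	labels := map[string]string{"name": name}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name: name,
						// Assumption: an image that answers HTTP requests with
						// the pod's hostname, which is what the test dials for.
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	_ = rs // construct only; a real test creates it via client-go
}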
May 22 14:33:10.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:33:10.974: INFO: namespace replicaset-6004 deletion completed in 6.098368267s • [SLOW TEST:16.243 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:33:10.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-e94c3edc-1eb5-4af9-b02c-4cb7bddf8bfb STEP: Creating a pod to test consume secrets May 22 14:33:11.077: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3b36a7a7-c986-4cf2-889d-365813c0832e" in namespace "projected-2314" to be "success or failure" May 22 14:33:11.097: INFO: Pod "pod-projected-secrets-3b36a7a7-c986-4cf2-889d-365813c0832e": Phase="Pending", Reason="", readiness=false. Elapsed: 19.915628ms May 22 14:33:13.177: INFO: Pod "pod-projected-secrets-3b36a7a7-c986-4cf2-889d-365813c0832e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100140898s May 22 14:33:15.182: INFO: Pod "pod-projected-secrets-3b36a7a7-c986-4cf2-889d-365813c0832e": Phase="Running", Reason="", readiness=true. Elapsed: 4.104258167s May 22 14:33:17.213: INFO: Pod "pod-projected-secrets-3b36a7a7-c986-4cf2-889d-365813c0832e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.136107543s STEP: Saw pod success May 22 14:33:17.213: INFO: Pod "pod-projected-secrets-3b36a7a7-c986-4cf2-889d-365813c0832e" satisfied condition "success or failure" May 22 14:33:17.216: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-3b36a7a7-c986-4cf2-889d-365813c0832e container projected-secret-volume-test: STEP: delete the pod May 22 14:33:17.233: INFO: Waiting for pod pod-projected-secrets-3b36a7a7-c986-4cf2-889d-365813c0832e to disappear May 22 14:33:17.237: INFO: Pod pod-projected-secrets-3b36a7a7-c986-4cf2-889d-365813c0832e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:33:17.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2314" for this suite. 
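"defaultMode set" refers to the mode bits applied to every file the projected volume writes; the pod lists the files and the test checks the permissions before the container exits, which is why "success or failure" here means the pod phase reaching Succeeded. A minimal sketch, with busybox standing in for the suite's own test image and 0400 as an assumed mode value:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // defaultMode under test; exact value assumed
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						// DefaultMode applies to every projected file unless
						// an individual item overrides it.
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-secret-volume-test",
				Image: "docker.io/library/busybox:1.29", // assumed stand-in image
				Args:  []string{"sh", "-c", "ls -l /etc/projected-secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
				}},
			}},
		},
	}
	_ = pod
}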
May 22 14:33:23.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:33:23.343: INFO: namespace projected-2314 deletion completed in 6.103697564s • [SLOW TEST:12.369 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:33:23.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-daa15911-e2e0-41d0-8d9d-1891c99b2969 STEP: Creating a pod to test consume configMaps May 22 14:33:23.449: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-981e82c9-d693-47af-9da8-a25e5ed69a9c" in namespace "projected-2689" to be "success or failure" May 22 14:33:23.468: INFO: Pod "pod-projected-configmaps-981e82c9-d693-47af-9da8-a25e5ed69a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.35152ms May 22 14:33:25.472: INFO: Pod "pod-projected-configmaps-981e82c9-d693-47af-9da8-a25e5ed69a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023429612s May 22 14:33:27.476: INFO: Pod "pod-projected-configmaps-981e82c9-d693-47af-9da8-a25e5ed69a9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027567429s STEP: Saw pod success May 22 14:33:27.476: INFO: Pod "pod-projected-configmaps-981e82c9-d693-47af-9da8-a25e5ed69a9c" satisfied condition "success or failure" May 22 14:33:27.479: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-981e82c9-d693-47af-9da8-a25e5ed69a9c container projected-configmap-volume-test: STEP: delete the pod May 22 14:33:27.536: INFO: Waiting for pod pod-projected-configmaps-981e82c9-d693-47af-9da8-a25e5ed69a9c to disappear May 22 14:33:27.540: INFO: Pod pod-projected-configmaps-981e82c9-d693-47af-9da8-a25e5ed69a9c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:33:27.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2689" for this suite. 
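"mappings and Item mode set" means the configMap key is projected under a caller-chosen path, with a per-item mode that overrides the volume's default. Key and path names below are illustrative, not taken from the log:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	itemMode := int32(0400) // per-item mode; value assumed
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// The "mapping": key data-1 lands at path/to/data-2
								// instead of a file named after the key; Mode overrides
								// the volume's defaultMode for this one file.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &itemMode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "docker.io/library/busybox:1.29", // assumed stand-in image
				Args:  []string{"sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
	_ = pod
}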
May 22 14:33:33.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:33:33.655: INFO: namespace projected-2689 deletion completed in 6.108275345s • [SLOW TEST:10.312 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:33:33.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 14:33:33.796: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee578c39-1ad4-4380-9f4c-2055dfb7ae31" in namespace "downward-api-5897" to be "success or failure" May 22 14:33:33.836: INFO: Pod "downwardapi-volume-ee578c39-1ad4-4380-9f4c-2055dfb7ae31": Phase="Pending", Reason="", readiness=false. Elapsed: 40.683285ms May 22 14:33:35.841: INFO: Pod "downwardapi-volume-ee578c39-1ad4-4380-9f4c-2055dfb7ae31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045645633s May 22 14:33:37.845: INFO: Pod "downwardapi-volume-ee578c39-1ad4-4380-9f4c-2055dfb7ae31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049230505s STEP: Saw pod success May 22 14:33:37.845: INFO: Pod "downwardapi-volume-ee578c39-1ad4-4380-9f4c-2055dfb7ae31" satisfied condition "success or failure" May 22 14:33:37.847: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ee578c39-1ad4-4380-9f4c-2055dfb7ae31 container client-container: STEP: delete the pod May 22 14:33:38.011: INFO: Waiting for pod downwardapi-volume-ee578c39-1ad4-4380-9f4c-2055dfb7ae31 to disappear May 22 14:33:38.054: INFO: Pod downwardapi-volume-ee578c39-1ad4-4380-9f4c-2055dfb7ae31 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:33:38.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5897" for this suite. 
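The downward API volume in this spec surfaces the container's own CPU request as a file, via resourceFieldRef rather than an environment variable. A sketch with an assumed 250m request and busybox in place of the suite's image:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29", // assumed stand-in image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					// The request the volume file reports; value assumed.
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							// resourceFieldRef exposes a container's own resources;
							// fieldRef (see the podname spec later) exposes pod metadata.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	_ = pod
}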
May 22 14:33:44.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:33:44.168: INFO: namespace downward-api-5897 deletion completed in 6.10865971s • [SLOW TEST:10.511 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:33:44.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 22 14:33:44.842: INFO: Pod name wrapped-volume-race-7e4121e2-bcb0-4c54-947b-9befbf916927: Found 0 pods out of 5 May 22 14:33:49.850: INFO: Pod name wrapped-volume-race-7e4121e2-bcb0-4c54-947b-9befbf916927: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7e4121e2-bcb0-4c54-947b-9befbf916927 in namespace emptydir-wrapper-1935, will wait for the garbage collector to delete the pods May 22 14:34:03.930: INFO: Deleting ReplicationController wrapped-volume-race-7e4121e2-bcb0-4c54-947b-9befbf916927 took: 6.902786ms May 22 14:34:04.230: INFO: Terminating ReplicationController wrapped-volume-race-7e4121e2-bcb0-4c54-947b-9befbf916927 pods took: 300.251404ms STEP: Creating RC which spawns configmap-volume pods May 22 14:34:42.674: INFO: Pod name wrapped-volume-race-9ad55299-a4ec-466e-9c54-b64a42046b7c: Found 0 pods out of 5 May 22 14:34:47.683: INFO: Pod name wrapped-volume-race-9ad55299-a4ec-466e-9c54-b64a42046b7c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9ad55299-a4ec-466e-9c54-b64a42046b7c in namespace emptydir-wrapper-1935, will wait for the garbage collector to delete the pods May 22 14:35:01.813: INFO: Deleting ReplicationController wrapped-volume-race-9ad55299-a4ec-466e-9c54-b64a42046b7c took: 9.047406ms May 22 14:35:02.113: INFO: Terminating ReplicationController wrapped-volume-race-9ad55299-a4ec-466e-9c54-b64a42046b7c pods took: 300.322546ms STEP: Creating RC which spawns configmap-volume pods May 22 14:35:43.269: INFO: Pod name wrapped-volume-race-37fee867-99b0-4205-b009-414226b5589b: Found 0 pods out of 5 May 22 14:35:48.283: INFO: Pod name wrapped-volume-race-37fee867-99b0-4205-b009-414226b5589b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-37fee867-99b0-4205-b009-414226b5589b in namespace emptydir-wrapper-1935, will wait for the garbage collector to delete the pods May 22 14:36:02.369: INFO: Deleting 
ReplicationController wrapped-volume-race-37fee867-99b0-4205-b009-414226b5589b took: 9.307882ms May 22 14:36:02.670: INFO: Terminating ReplicationController wrapped-volume-race-37fee867-99b0-4205-b009-414226b5589b pods took: 300.207251ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:36:43.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1935" for this suite. May 22 14:36:51.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:36:51.965: INFO: namespace emptydir-wrapper-1935 deletion completed in 8.081596714s • [SLOW TEST:187.797 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:36:51.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 22 14:36:52.028: INFO: Waiting up to 5m0s for pod "pod-3545e9fa-81b1-423f-ba58-d639da9de1fe" in namespace "emptydir-1964" to be "success or failure" May 22 14:36:52.032: INFO: Pod "pod-3545e9fa-81b1-423f-ba58-d639da9de1fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159599ms May 22 14:36:54.037: INFO: Pod "pod-3545e9fa-81b1-423f-ba58-d639da9de1fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008482649s May 22 14:36:56.041: INFO: Pod "pod-3545e9fa-81b1-423f-ba58-d639da9de1fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013225507s STEP: Saw pod success May 22 14:36:56.041: INFO: Pod "pod-3545e9fa-81b1-423f-ba58-d639da9de1fe" satisfied condition "success or failure" May 22 14:36:56.045: INFO: Trying to get logs from node iruya-worker2 pod pod-3545e9fa-81b1-423f-ba58-d639da9de1fe container test-container: STEP: delete the pod May 22 14:36:56.070: INFO: Waiting for pod pod-3545e9fa-81b1-423f-ba58-d639da9de1fe to disappear May 22 14:36:56.074: INFO: Pod pod-3545e9fa-81b1-423f-ba58-d639da9de1fe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:36:56.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1964" for this suite. 
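The (root,0644,tmpfs) triple in the spec name encodes who writes the file (root), the mode being verified (0644), and the emptyDir medium (tmpfs, i.e. memory-backed). The preceding wrapper-volumes spec drives the same volume plumbing hard, repeatedly spawning 5-pod ReplicationControllers whose pods mount the 50 configmaps created at the start. Making an emptyDir memory-backed is a single field, as in this sketch (busybox assumed in place of the suite's mounttest image):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29", // assumed stand-in image
				Command: []string{"sh", "-c",
					"echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	_ = pod
}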
May 22 14:37:02.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:37:02.348: INFO: namespace emptydir-1964 deletion completed in 6.271367576s • [SLOW TEST:10.383 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:37:02.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-4cae3fc5-5379-44a1-9ad5-a983dc7b66c2 STEP: Creating a pod to test consume secrets May 22 14:37:02.436: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d4d9a892-2138-4ff6-9977-bbfc75c187fc" in namespace "projected-5507" to be "success or failure" May 22 14:37:02.453: INFO: Pod "pod-projected-secrets-d4d9a892-2138-4ff6-9977-bbfc75c187fc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.922532ms May 22 14:37:04.457: INFO: Pod "pod-projected-secrets-d4d9a892-2138-4ff6-9977-bbfc75c187fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021341886s May 22 14:37:06.462: INFO: Pod "pod-projected-secrets-d4d9a892-2138-4ff6-9977-bbfc75c187fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025863171s STEP: Saw pod success May 22 14:37:06.462: INFO: Pod "pod-projected-secrets-d4d9a892-2138-4ff6-9977-bbfc75c187fc" satisfied condition "success or failure" May 22 14:37:06.466: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-d4d9a892-2138-4ff6-9977-bbfc75c187fc container projected-secret-volume-test: STEP: delete the pod May 22 14:37:06.525: INFO: Waiting for pod pod-projected-secrets-d4d9a892-2138-4ff6-9977-bbfc75c187fc to disappear May 22 14:37:06.529: INFO: Pod pod-projected-secrets-d4d9a892-2138-4ff6-9977-bbfc75c187fc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:37:06.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5507" for this suite. 
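This variant repeats the defaultMode check where UID and GID matter: runAsUser drops root and fsGroup sets group ownership on the projected files, so the mode bits are what actually gate access. UID, group, and mode values below are assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)     // non-root UID; value assumed
	fsGroup := int64(1000) // group applied to the volume's files; value assumed
	mode := int32(0440)    // group-readable so the fsGroup member can read; assumed
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-nonroot"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// fsGroup chowns projected files to the group; runAsUser drops root,
			// which is what makes the mode bits meaningful at all.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsGroup},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-secret-volume-test",
				Image: "docker.io/library/busybox:1.29", // assumed stand-in image
				Args:  []string{"sh", "-c", "id && ls -ln /etc/projected-secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
				}},
			}},
		},
	}
	_ = pod
}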
May 22 14:37:12.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:37:12.634: INFO: namespace projected-5507 deletion completed in 6.102349958s • [SLOW TEST:10.285 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:37:12.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 22 14:37:12.739: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91c1326c-19e6-4809-af35-f72267b09df9" in namespace "downward-api-1507" to be "success or failure" May 22 14:37:12.743: INFO: Pod "downwardapi-volume-91c1326c-19e6-4809-af35-f72267b09df9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006151ms May 22 14:37:14.839: INFO: Pod "downwardapi-volume-91c1326c-19e6-4809-af35-f72267b09df9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100354572s May 22 14:37:16.843: INFO: Pod "downwardapi-volume-91c1326c-19e6-4809-af35-f72267b09df9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104468874s STEP: Saw pod success May 22 14:37:16.844: INFO: Pod "downwardapi-volume-91c1326c-19e6-4809-af35-f72267b09df9" satisfied condition "success or failure" May 22 14:37:16.846: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-91c1326c-19e6-4809-af35-f72267b09df9 container client-container: STEP: delete the pod May 22 14:37:16.926: INFO: Waiting for pod downwardapi-volume-91c1326c-19e6-4809-af35-f72267b09df9 to disappear May 22 14:37:16.929: INFO: Pod downwardapi-volume-91c1326c-19e6-4809-af35-f72267b09df9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 22 14:37:16.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1507" for this suite. 
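"podname only" is the fieldRef side of the downward API volume: it projects pod metadata (here metadata.name) instead of container resources. A minimal sketch, again with busybox assumed:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-podname"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "docker.io/library/busybox:1.29", // assumed stand-in image
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							// fieldRef pulls from pod metadata rather than container resources.
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
	_ = pod
}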
May 22 14:37:22.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 22 14:37:23.042: INFO: namespace downward-api-1507 deletion completed in 6.10957595s • [SLOW TEST:10.407 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 22 14:37:23.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-947 I0522 14:37:23.133629 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-947, replica count: 1 I0522 14:37:24.184086 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0522 14:37:25.184247 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0522 14:37:26.184475 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 22 14:37:26.317: INFO: Created: latency-svc-wwvdw May 22 14:37:26.331: INFO: Got endpoints: latency-svc-wwvdw [46.668881ms] May 22 14:37:26.365: INFO: Created: latency-svc-csbkp May 22 14:37:26.480: INFO: Got endpoints: latency-svc-csbkp [149.322083ms] May 22 14:37:26.483: INFO: Created: latency-svc-8cxtb May 22 14:37:26.494: INFO: Got endpoints: latency-svc-8cxtb [162.904194ms] May 22 14:37:26.527: INFO: Created: latency-svc-crdht May 22 14:37:26.551: INFO: Got endpoints: latency-svc-crdht [219.699676ms] May 22 14:37:26.630: INFO: Created: latency-svc-gwhnn May 22 14:37:26.659: INFO: Got endpoints: latency-svc-gwhnn [327.977877ms] May 22 14:37:26.660: INFO: Created: latency-svc-757n8 May 22 14:37:26.676: INFO: Got endpoints: latency-svc-757n8 [344.251854ms] May 22 14:37:26.701: INFO: Created: latency-svc-vk7s2 May 22 14:37:26.786: INFO: Got endpoints: latency-svc-vk7s2 [454.382165ms] May 22 14:37:26.788: INFO: Created: latency-svc-nvxg5 May 22 14:37:26.796: INFO: Got endpoints: latency-svc-nvxg5 [464.55649ms] May 22 14:37:26.863: INFO: Created: latency-svc-mnrls May 22 14:37:26.929: INFO: Got endpoints: latency-svc-mnrls [598.299862ms] May 22 14:37:26.931: INFO: Created: latency-svc-m46lc May 22 14:37:26.933: INFO: Got endpoints: latency-svc-m46lc [601.372383ms] May 22 14:37:27.007: INFO: Created: latency-svc-f6m4g May 22 14:37:27.024: INFO: Got endpoints: latency-svc-f6m4g [692.258972ms] May 22 14:37:27.085: INFO: Created: latency-svc-bdznh May 22 14:37:27.108: INFO: Got 
endpoints: latency-svc-bdznh [777.073492ms] May 22 14:37:27.144: INFO: Created: latency-svc-7j67w May 22 14:37:27.163: INFO: Got endpoints: latency-svc-7j67w [831.375795ms] May 22 14:37:27.223: INFO: Created: latency-svc-n6wsx May 22 14:37:27.253: INFO: Created: latency-svc-zx2t7 May 22 14:37:27.254: INFO: Got endpoints: latency-svc-n6wsx [922.258923ms] May 22 14:37:27.265: INFO: Got endpoints: latency-svc-zx2t7 [933.897844ms] May 22 14:37:27.289: INFO: Created: latency-svc-mbqk2 May 22 14:37:27.302: INFO: Got endpoints: latency-svc-mbqk2 [970.399285ms] May 22 14:37:27.367: INFO: Created: latency-svc-8mvdz May 22 14:37:27.409: INFO: Got endpoints: latency-svc-8mvdz [928.328036ms] May 22 14:37:27.411: INFO: Created: latency-svc-qtcc4 May 22 14:37:27.422: INFO: Got endpoints: latency-svc-qtcc4 [927.840776ms] May 22 14:37:27.450: INFO: Created: latency-svc-lsbgr May 22 14:37:27.504: INFO: Got endpoints: latency-svc-lsbgr [952.908027ms] May 22 14:37:27.522: INFO: Created: latency-svc-cvjbn May 22 14:37:27.536: INFO: Got endpoints: latency-svc-cvjbn [877.002431ms] May 22 14:37:27.565: INFO: Created: latency-svc-tl7q2 May 22 14:37:27.579: INFO: Got endpoints: latency-svc-tl7q2 [903.593036ms] May 22 14:37:27.636: INFO: Created: latency-svc-4tvv2 May 22 14:37:27.661: INFO: Got endpoints: latency-svc-4tvv2 [875.109751ms] May 22 14:37:27.662: INFO: Created: latency-svc-swnhh May 22 14:37:27.674: INFO: Got endpoints: latency-svc-swnhh [877.716848ms] May 22 14:37:27.698: INFO: Created: latency-svc-qps5j May 22 14:37:27.709: INFO: Got endpoints: latency-svc-qps5j [779.968665ms] May 22 14:37:27.733: INFO: Created: latency-svc-vn8cw May 22 14:37:27.786: INFO: Got endpoints: latency-svc-vn8cw [852.991199ms] May 22 14:37:27.799: INFO: Created: latency-svc-5mxkp May 22 14:37:27.812: INFO: Got endpoints: latency-svc-5mxkp [788.708262ms] May 22 14:37:27.834: INFO: Created: latency-svc-b7j8f May 22 14:37:27.848: INFO: Got endpoints: latency-svc-b7j8f [739.551183ms] May 22 14:37:27.870: INFO: Created: latency-svc-4fhsd May 22 14:37:27.884: INFO: Got endpoints: latency-svc-4fhsd [721.527715ms] May 22 14:37:27.947: INFO: Created: latency-svc-t8r9t May 22 14:37:27.952: INFO: Got endpoints: latency-svc-t8r9t [697.892584ms] May 22 14:37:27.979: INFO: Created: latency-svc-4x54l May 22 14:37:27.993: INFO: Got endpoints: latency-svc-4x54l [727.854858ms] May 22 14:37:28.014: INFO: Created: latency-svc-j5xsc May 22 14:37:28.035: INFO: Got endpoints: latency-svc-j5xsc [733.535209ms] May 22 14:37:28.139: INFO: Created: latency-svc-fvbqk May 22 14:37:28.150: INFO: Got endpoints: latency-svc-fvbqk [741.134941ms] May 22 14:37:28.225: INFO: Created: latency-svc-qkhq4 May 22 14:37:28.288: INFO: Got endpoints: latency-svc-qkhq4 [866.145046ms] May 22 14:37:28.294: INFO: Created: latency-svc-cxn7g May 22 14:37:28.321: INFO: Got endpoints: latency-svc-cxn7g [817.398165ms] May 22 14:37:28.351: INFO: Created: latency-svc-5qfj9 May 22 14:37:28.363: INFO: Got endpoints: latency-svc-5qfj9 [826.359357ms] May 22 14:37:28.386: INFO: Created: latency-svc-t9m7n May 22 14:37:28.463: INFO: Got endpoints: latency-svc-t9m7n [883.822215ms] May 22 14:37:28.489: INFO: Created: latency-svc-sgw6c May 22 14:37:28.513: INFO: Got endpoints: latency-svc-sgw6c [851.992911ms] May 22 14:37:28.554: INFO: Created: latency-svc-vf8gm May 22 14:37:28.591: INFO: Got endpoints: latency-svc-vf8gm [916.954735ms] May 22 14:37:28.609: INFO: Created: latency-svc-s4n9w May 22 14:37:28.669: INFO: Got endpoints: latency-svc-s4n9w [960.00161ms] May 22 14:37:28.724: INFO: 
Created: latency-svc-s849f May 22 14:37:28.736: INFO: Got endpoints: latency-svc-s849f [950.557182ms] May 22 14:37:28.768: INFO: Created: latency-svc-68r5w May 22 14:37:28.795: INFO: Got endpoints: latency-svc-68r5w [982.65299ms] May 22 14:37:28.876: INFO: Created: latency-svc-7gnnh May 22 14:37:28.903: INFO: Got endpoints: latency-svc-7gnnh [1.055310855s] May 22 14:37:28.957: INFO: Created: latency-svc-6lfpz May 22 14:37:29.038: INFO: Got endpoints: latency-svc-6lfpz [1.153109002s] May 22 14:37:29.040: INFO: Created: latency-svc-k5lzt May 22 14:37:29.049: INFO: Got endpoints: latency-svc-k5lzt [1.09719106s] May 22 14:37:29.096: INFO: Created: latency-svc-7lt9h May 22 14:37:29.119: INFO: Got endpoints: latency-svc-7lt9h [1.125784232s] May 22 14:37:29.182: INFO: Created: latency-svc-kw6dp May 22 14:37:29.186: INFO: Got endpoints: latency-svc-kw6dp [1.150498949s] May 22 14:37:29.246: INFO: Created: latency-svc-d8hnt May 22 14:37:29.269: INFO: Got endpoints: latency-svc-d8hnt [1.11898752s] May 22 14:37:29.337: INFO: Created: latency-svc-zpfhn May 22 14:37:29.340: INFO: Got endpoints: latency-svc-zpfhn [1.051894404s] May 22 14:37:29.366: INFO: Created: latency-svc-jksgt May 22 14:37:29.383: INFO: Got endpoints: latency-svc-jksgt [1.061728468s] May 22 14:37:29.421: INFO: Created: latency-svc-st94s May 22 14:37:29.475: INFO: Got endpoints: latency-svc-st94s [1.111743015s] May 22 14:37:29.502: INFO: Created: latency-svc-vrpfn May 22 14:37:29.509: INFO: Got endpoints: latency-svc-vrpfn [1.045813501s] May 22 14:37:29.546: INFO: Created: latency-svc-rmnnj May 22 14:37:29.563: INFO: Got endpoints: latency-svc-rmnnj [1.050149518s] May 22 14:37:29.630: INFO: Created: latency-svc-swgwr May 22 14:37:29.633: INFO: Got endpoints: latency-svc-swgwr [1.042568125s] May 22 14:37:29.671: INFO: Created: latency-svc-rmjf6 May 22 14:37:29.686: INFO: Got endpoints: latency-svc-rmjf6 [1.01695313s] May 22 14:37:29.708: INFO: Created: latency-svc-vjlmv May 22 14:37:29.723: INFO: Got endpoints: latency-svc-vjlmv [986.313707ms] May 22 14:37:29.774: INFO: Created: latency-svc-x8jk5 May 22 14:37:29.777: INFO: Got endpoints: latency-svc-x8jk5 [982.209861ms] May 22 14:37:29.810: INFO: Created: latency-svc-stn8h May 22 14:37:29.825: INFO: Got endpoints: latency-svc-stn8h [921.627792ms] May 22 14:37:29.846: INFO: Created: latency-svc-7xpkt May 22 14:37:29.855: INFO: Got endpoints: latency-svc-7xpkt [817.439129ms] May 22 14:37:29.900: INFO: Created: latency-svc-bfh2p May 22 14:37:29.903: INFO: Got endpoints: latency-svc-bfh2p [854.058958ms] May 22 14:37:29.930: INFO: Created: latency-svc-nwn7p May 22 14:37:29.946: INFO: Got endpoints: latency-svc-nwn7p [826.469324ms] May 22 14:37:29.978: INFO: Created: latency-svc-qp5mb May 22 14:37:30.056: INFO: Got endpoints: latency-svc-qp5mb [869.914215ms] May 22 14:37:30.105: INFO: Created: latency-svc-jpsdf May 22 14:37:30.120: INFO: Got endpoints: latency-svc-jpsdf [851.356198ms] May 22 14:37:30.187: INFO: Created: latency-svc-64692 May 22 14:37:30.191: INFO: Got endpoints: latency-svc-64692 [850.552078ms] May 22 14:37:30.218: INFO: Created: latency-svc-zk796 May 22 14:37:30.228: INFO: Got endpoints: latency-svc-zk796 [845.271325ms] May 22 14:37:30.254: INFO: Created: latency-svc-xp5kq May 22 14:37:30.271: INFO: Got endpoints: latency-svc-xp5kq [796.499657ms] May 22 14:37:30.312: INFO: Created: latency-svc-8pghk May 22 14:37:30.315: INFO: Got endpoints: latency-svc-8pghk [805.64297ms] May 22 14:37:30.340: INFO: Created: latency-svc-xcvxh May 22 14:37:30.355: INFO: Got endpoints: 
latency-svc-xcvxh [792.114473ms] May 22 14:37:30.374: INFO: Created: latency-svc-26w9f May 22 14:37:30.386: INFO: Got endpoints: latency-svc-26w9f [752.975894ms] May 22 14:37:30.410: INFO: Created: latency-svc-zdqxw May 22 14:37:30.451: INFO: Got endpoints: latency-svc-zdqxw [764.379591ms] May 22 14:37:30.493: INFO: Created: latency-svc-qpqhd May 22 14:37:30.549: INFO: Got endpoints: latency-svc-qpqhd [826.215502ms] May 22 14:37:30.646: INFO: Created: latency-svc-xqmbh May 22 14:37:30.798: INFO: Got endpoints: latency-svc-xqmbh [1.020572996s] May 22 14:37:30.872: INFO: Created: latency-svc-dr9dk May 22 14:37:30.930: INFO: Got endpoints: latency-svc-dr9dk [1.104860918s] May 22 14:37:31.015: INFO: Created: latency-svc-j4gqk May 22 14:37:31.048: INFO: Got endpoints: latency-svc-j4gqk [1.193117807s] May 22 14:37:31.064: INFO: Created: latency-svc-hdqq9 May 22 14:37:31.093: INFO: Got endpoints: latency-svc-hdqq9 [1.190166495s] May 22 14:37:31.127: INFO: Created: latency-svc-s5ss2 May 22 14:37:31.142: INFO: Got endpoints: latency-svc-s5ss2 [1.196574357s] May 22 14:37:31.191: INFO: Created: latency-svc-2578j May 22 14:37:31.194: INFO: Got endpoints: latency-svc-2578j [1.138237862s] May 22 14:37:31.220: INFO: Created: latency-svc-bkc2m May 22 14:37:31.237: INFO: Got endpoints: latency-svc-bkc2m [1.116098132s] May 22 14:37:31.268: INFO: Created: latency-svc-7wblb May 22 14:37:31.285: INFO: Got endpoints: latency-svc-7wblb [1.094575851s] May 22 14:37:31.331: INFO: Created: latency-svc-9rnhg May 22 14:37:31.345: INFO: Got endpoints: latency-svc-9rnhg [1.116835613s] May 22 14:37:31.400: INFO: Created: latency-svc-f22qd May 22 14:37:31.412: INFO: Got endpoints: latency-svc-f22qd [1.140477238s] May 22 14:37:31.481: INFO: Created: latency-svc-bpphx May 22 14:37:31.484: INFO: Got endpoints: latency-svc-bpphx [1.169512914s] May 22 14:37:31.527: INFO: Created: latency-svc-72krq May 22 14:37:31.555: INFO: Got endpoints: latency-svc-72krq [1.19957757s] May 22 14:37:31.631: INFO: Created: latency-svc-57c8f May 22 14:37:31.658: INFO: Got endpoints: latency-svc-57c8f [1.271386748s] May 22 14:37:31.659: INFO: Created: latency-svc-jwnhl May 22 14:37:31.670: INFO: Got endpoints: latency-svc-jwnhl [1.219338095s] May 22 14:37:31.700: INFO: Created: latency-svc-6qrpj May 22 14:37:31.719: INFO: Got endpoints: latency-svc-6qrpj [1.16994597s] May 22 14:37:31.791: INFO: Created: latency-svc-52xmx May 22 14:37:31.797: INFO: Got endpoints: latency-svc-52xmx [999.319497ms] May 22 14:37:31.820: INFO: Created: latency-svc-72vvd May 22 14:37:31.833: INFO: Got endpoints: latency-svc-72vvd [903.221517ms] May 22 14:37:31.857: INFO: Created: latency-svc-g9tv2 May 22 14:37:31.870: INFO: Got endpoints: latency-svc-g9tv2 [821.419917ms] May 22 14:37:31.959: INFO: Created: latency-svc-wn7bs May 22 14:37:31.963: INFO: Got endpoints: latency-svc-wn7bs [869.463509ms] May 22 14:37:32.006: INFO: Created: latency-svc-tk55z May 22 14:37:32.020: INFO: Got endpoints: latency-svc-tk55z [877.680872ms] May 22 14:37:32.049: INFO: Created: latency-svc-h2rn9 May 22 14:37:32.091: INFO: Got endpoints: latency-svc-h2rn9 [896.378781ms] May 22 14:37:32.114: INFO: Created: latency-svc-gm7tj May 22 14:37:32.173: INFO: Got endpoints: latency-svc-gm7tj [936.73033ms] May 22 14:37:32.235: INFO: Created: latency-svc-hfw6f May 22 14:37:32.243: INFO: Got endpoints: latency-svc-hfw6f [957.590351ms] May 22 14:37:32.264: INFO: Created: latency-svc-mlp4h May 22 14:37:32.279: INFO: Got endpoints: latency-svc-mlp4h [933.795819ms] May 22 14:37:32.300: INFO: Created: 
latency-svc-mb2dr May 22 14:37:32.315: INFO: Got endpoints: latency-svc-mb2dr [903.444295ms] May 22 14:37:32.396: INFO: Created: latency-svc-pqfgk May 22 14:37:32.400: INFO: Got endpoints: latency-svc-pqfgk [915.51246ms] May 22 14:37:32.425: INFO: Created: latency-svc-mqhqb May 22 14:37:32.435: INFO: Got endpoints: latency-svc-mqhqb [880.240706ms] May 22 14:37:32.463: INFO: Created: latency-svc-8h6h8 May 22 14:37:32.478: INFO: Got endpoints: latency-svc-8h6h8 [820.622729ms] May 22 14:37:32.565: INFO: Created: latency-svc-2dwdl May 22 14:37:32.568: INFO: Got endpoints: latency-svc-2dwdl [897.661267ms] May 22 14:37:32.606: INFO: Created: latency-svc-vdbcc May 22 14:37:32.623: INFO: Got endpoints: latency-svc-vdbcc [904.139989ms] May 22 14:37:32.708: INFO: Created: latency-svc-bfqs9 May 22 14:37:32.710: INFO: Got endpoints: latency-svc-bfqs9 [912.91103ms] May 22 14:37:32.745: INFO: Created: latency-svc-5hvgw May 22 14:37:32.779: INFO: Got endpoints: latency-svc-5hvgw [945.734688ms] May 22 14:37:32.845: INFO: Created: latency-svc-zjcv9 May 22 14:37:32.851: INFO: Got endpoints: latency-svc-zjcv9 [981.169085ms] May 22 14:37:32.876: INFO: Created: latency-svc-ncvwr May 22 14:37:32.893: INFO: Got endpoints: latency-svc-ncvwr [930.788197ms] May 22 14:37:32.918: INFO: Created: latency-svc-wjfb8 May 22 14:37:32.936: INFO: Got endpoints: latency-svc-wjfb8 [915.486273ms] May 22 14:37:32.989: INFO: Created: latency-svc-lc7sn May 22 14:37:32.996: INFO: Got endpoints: latency-svc-lc7sn [905.115008ms] May 22 14:37:33.026: INFO: Created: latency-svc-th8c5 May 22 14:37:33.038: INFO: Got endpoints: latency-svc-th8c5 [865.002639ms] May 22 14:37:33.061: INFO: Created: latency-svc-ctwbf May 22 14:37:33.157: INFO: Got endpoints: latency-svc-ctwbf [913.382794ms] May 22 14:37:33.176: INFO: Created: latency-svc-6qsdf May 22 14:37:33.191: INFO: Got endpoints: latency-svc-6qsdf [911.878182ms] May 22 14:37:33.223: INFO: Created: latency-svc-9l94b May 22 14:37:33.237: INFO: Got endpoints: latency-svc-9l94b [922.180827ms] May 22 14:37:33.313: INFO: Created: latency-svc-4mhk5 May 22 14:37:33.316: INFO: Got endpoints: latency-svc-4mhk5 [916.082224ms] May 22 14:37:33.350: INFO: Created: latency-svc-wm2kw May 22 14:37:33.364: INFO: Got endpoints: latency-svc-wm2kw [928.439922ms] May 22 14:37:33.392: INFO: Created: latency-svc-pgv8s May 22 14:37:33.406: INFO: Got endpoints: latency-svc-pgv8s [927.21586ms] May 22 14:37:33.463: INFO: Created: latency-svc-k7z4d May 22 14:37:33.493: INFO: Got endpoints: latency-svc-k7z4d [925.309273ms] May 22 14:37:33.494: INFO: Created: latency-svc-gsvdq May 22 14:37:33.502: INFO: Got endpoints: latency-svc-gsvdq [878.861034ms] May 22 14:37:33.536: INFO: Created: latency-svc-qz4wc May 22 14:37:33.552: INFO: Got endpoints: latency-svc-qz4wc [841.485776ms] May 22 14:37:33.612: INFO: Created: latency-svc-4484n May 22 14:37:33.655: INFO: Got endpoints: latency-svc-4484n [876.20591ms] May 22 14:37:33.698: INFO: Created: latency-svc-46pkw May 22 14:37:33.738: INFO: Got endpoints: latency-svc-46pkw [186.227965ms] May 22 14:37:33.764: INFO: Created: latency-svc-t9qxj May 22 14:37:33.779: INFO: Got endpoints: latency-svc-t9qxj [928.083876ms] May 22 14:37:33.806: INFO: Created: latency-svc-zxj76 May 22 14:37:33.835: INFO: Got endpoints: latency-svc-zxj76 [941.564985ms] May 22 14:37:33.882: INFO: Created: latency-svc-d8z6j May 22 14:37:33.888: INFO: Got endpoints: latency-svc-d8z6j [952.095954ms] May 22 14:37:33.914: INFO: Created: latency-svc-xbb88 May 22 14:37:33.930: INFO: Got endpoints: 
latency-svc-xbb88 [934.050487ms] May 22 14:37:33.950: INFO: Created: latency-svc-bq2gh May 22 14:37:34.019: INFO: Got endpoints: latency-svc-bq2gh [980.43485ms] May 22 14:37:34.035: INFO: Created: latency-svc-hbg2r May 22 14:37:34.047: INFO: Got endpoints: latency-svc-hbg2r [889.985028ms] May 22 14:37:34.069: INFO: Created: latency-svc-v29pl May 22 14:37:34.083: INFO: Got endpoints: latency-svc-v29pl [891.949548ms] May 22 14:37:34.175: INFO: Created: latency-svc-lchcz May 22 14:37:34.178: INFO: Got endpoints: latency-svc-lchcz [940.433462ms] May 22 14:37:34.231: INFO: Created: latency-svc-kgwhp May 22 14:37:34.258: INFO: Got endpoints: latency-svc-kgwhp [942.23928ms] May 22 14:37:34.313: INFO: Created: latency-svc-zbglp May 22 14:37:34.316: INFO: Got endpoints: latency-svc-zbglp [951.783872ms] May 22 14:37:34.347: INFO: Created: latency-svc-sqt6t May 22 14:37:34.360: INFO: Got endpoints: latency-svc-sqt6t [953.959754ms] May 22 14:37:34.388: INFO: Created: latency-svc-2mh8g May 22 14:37:34.402: INFO: Got endpoints: latency-svc-2mh8g [908.951359ms] May 22 14:37:34.453: INFO: Created: latency-svc-kd5w6 May 22 14:37:34.455: INFO: Got endpoints: latency-svc-kd5w6 [952.981674ms] May 22 14:37:34.508: INFO: Created: latency-svc-8kkmg May 22 14:37:34.523: INFO: Got endpoints: latency-svc-8kkmg [867.145423ms] May 22 14:37:34.543: INFO: Created: latency-svc-rvjx4 May 22 14:37:34.600: INFO: Got endpoints: latency-svc-rvjx4 [861.390871ms] May 22 14:37:34.624: INFO: Created: latency-svc-dzszs May 22 14:37:34.638: INFO: Got endpoints: latency-svc-dzszs [858.540688ms] May 22 14:37:34.659: INFO: Created: latency-svc-4vq8h May 22 14:37:34.674: INFO: Got endpoints: latency-svc-4vq8h [838.743023ms] May 22 14:37:34.695: INFO: Created: latency-svc-t2mn9 May 22 14:37:34.756: INFO: Got endpoints: latency-svc-t2mn9 [867.78296ms] May 22 14:37:34.759: INFO: Created: latency-svc-l8s87 May 22 14:37:34.770: INFO: Got endpoints: latency-svc-l8s87 [840.23692ms] May 22 14:37:34.801: INFO: Created: latency-svc-czhbh May 22 14:37:34.818: INFO: Got endpoints: latency-svc-czhbh [799.330741ms] May 22 14:37:34.844: INFO: Created: latency-svc-2mfgt May 22 14:37:34.887: INFO: Got endpoints: latency-svc-2mfgt [840.25874ms] May 22 14:37:34.897: INFO: Created: latency-svc-2l5s7 May 22 14:37:34.915: INFO: Got endpoints: latency-svc-2l5s7 [832.093596ms] May 22 14:37:34.934: INFO: Created: latency-svc-swnk2 May 22 14:37:34.951: INFO: Got endpoints: latency-svc-swnk2 [773.095063ms] May 22 14:37:34.982: INFO: Created: latency-svc-68b6d May 22 14:37:35.013: INFO: Got endpoints: latency-svc-68b6d [754.601558ms] May 22 14:37:35.035: INFO: Created: latency-svc-vdq52 May 22 14:37:35.053: INFO: Got endpoints: latency-svc-vdq52 [737.307875ms] May 22 14:37:35.096: INFO: Created: latency-svc-cfzl2 May 22 14:37:35.163: INFO: Got endpoints: latency-svc-cfzl2 [802.96317ms] May 22 14:37:35.174: INFO: Created: latency-svc-xck7g May 22 14:37:35.186: INFO: Got endpoints: latency-svc-xck7g [783.733285ms] May 22 14:37:35.211: INFO: Created: latency-svc-h8w8s May 22 14:37:35.223: INFO: Got endpoints: latency-svc-h8w8s [767.727224ms] May 22 14:37:35.257: INFO: Created: latency-svc-2fcgg May 22 14:37:35.331: INFO: Got endpoints: latency-svc-2fcgg [807.886516ms] May 22 14:37:35.332: INFO: Created: latency-svc-xdrrz May 22 14:37:35.349: INFO: Got endpoints: latency-svc-xdrrz [749.442676ms] May 22 14:37:35.402: INFO: Created: latency-svc-tr9tz May 22 14:37:35.416: INFO: Got endpoints: latency-svc-tr9tz [778.477575ms] May 22 14:37:35.475: INFO: Created: 
latency-svc-dm9ct
May 22 14:37:35.478: INFO: Got endpoints: latency-svc-dm9ct [803.831329ms]
May 22 14:37:35.528: INFO: Created: latency-svc-m86c7
May 22 14:37:35.560: INFO: Got endpoints: latency-svc-m86c7 [804.195016ms]
May 22 14:37:35.608: INFO: Created: latency-svc-7b4v2
May 22 14:37:35.610: INFO: Got endpoints: latency-svc-7b4v2 [839.300667ms]
May 22 14:37:35.679: INFO: Created: latency-svc-84hsg
May 22 14:37:35.705: INFO: Got endpoints: latency-svc-84hsg [886.92815ms]
May 22 14:37:35.756: INFO: Created: latency-svc-rbq7j
May 22 14:37:35.759: INFO: Got endpoints: latency-svc-rbq7j [871.558429ms]
May 22 14:37:35.835: INFO: Created: latency-svc-bn6l7
May 22 14:37:35.887: INFO: Got endpoints: latency-svc-bn6l7 [972.15611ms]
May 22 14:37:35.894: INFO: Created: latency-svc-wctjh
May 22 14:37:35.909: INFO: Got endpoints: latency-svc-wctjh [958.037927ms]
May 22 14:37:35.932: INFO: Created: latency-svc-hgslt
May 22 14:37:35.945: INFO: Got endpoints: latency-svc-hgslt [932.615371ms]
May 22 14:37:35.972: INFO: Created: latency-svc-kpk57
May 22 14:37:36.025: INFO: Got endpoints: latency-svc-kpk57 [972.036801ms]
May 22 14:37:36.055: INFO: Created: latency-svc-5wr7h
May 22 14:37:36.072: INFO: Got endpoints: latency-svc-5wr7h [909.562548ms]
May 22 14:37:36.092: INFO: Created: latency-svc-z2cd2
May 22 14:37:36.108: INFO: Got endpoints: latency-svc-z2cd2 [922.159803ms]
May 22 14:37:36.170: INFO: Created: latency-svc-lj5p4
May 22 14:37:36.205: INFO: Got endpoints: latency-svc-lj5p4 [982.430643ms]
May 22 14:37:36.269: INFO: Created: latency-svc-qxb6t
May 22 14:37:36.296: INFO: Got endpoints: latency-svc-qxb6t [965.589847ms]
May 22 14:37:36.296: INFO: Created: latency-svc-nsnm8
May 22 14:37:36.313: INFO: Got endpoints: latency-svc-nsnm8 [963.337578ms]
May 22 14:37:36.344: INFO: Created: latency-svc-q9bcb
May 22 14:37:36.420: INFO: Got endpoints: latency-svc-q9bcb [1.004194079s]
May 22 14:37:36.423: INFO: Created: latency-svc-rcnp5
May 22 14:37:36.433: INFO: Got endpoints: latency-svc-rcnp5 [955.459044ms]
May 22 14:37:36.465: INFO: Created: latency-svc-tbjwx
May 22 14:37:36.481: INFO: Got endpoints: latency-svc-tbjwx [921.541167ms]
May 22 14:37:36.506: INFO: Created: latency-svc-2vtxh
May 22 14:37:36.552: INFO: Got endpoints: latency-svc-2vtxh [941.767042ms]
May 22 14:37:36.567: INFO: Created: latency-svc-8gv4w
May 22 14:37:36.590: INFO: Got endpoints: latency-svc-8gv4w [884.720802ms]
May 22 14:37:36.620: INFO: Created: latency-svc-77rkv
May 22 14:37:36.632: INFO: Got endpoints: latency-svc-77rkv [873.301489ms]
May 22 14:37:36.702: INFO: Created: latency-svc-lrprq
May 22 14:37:36.717: INFO: Got endpoints: latency-svc-lrprq [829.209405ms]
May 22 14:37:36.752: INFO: Created: latency-svc-rp4wn
May 22 14:37:36.788: INFO: Got endpoints: latency-svc-rp4wn [878.400668ms]
May 22 14:37:36.863: INFO: Created: latency-svc-jqf7k
May 22 14:37:36.873: INFO: Got endpoints: latency-svc-jqf7k [927.442821ms]
May 22 14:37:36.895: INFO: Created: latency-svc-5hn4g
May 22 14:37:36.909: INFO: Got endpoints: latency-svc-5hn4g [884.090468ms]
May 22 14:37:36.938: INFO: Created: latency-svc-5nhzl
May 22 14:37:36.945: INFO: Got endpoints: latency-svc-5nhzl [872.912382ms]
May 22 14:37:37.015: INFO: Created: latency-svc-sfqkg
May 22 14:37:37.017: INFO: Got endpoints: latency-svc-sfqkg [908.415652ms]
May 22 14:37:37.039: INFO: Created: latency-svc-z9z8l
May 22 14:37:37.057: INFO: Got endpoints: latency-svc-z9z8l [851.33752ms]
May 22 14:37:37.095: INFO: Created: latency-svc-2hhpr
May 22 14:37:37.138: INFO: Got endpoints: latency-svc-2hhpr [842.211884ms]
May 22 14:37:37.160: INFO: Created: latency-svc-sp7hc
May 22 14:37:37.183: INFO: Got endpoints: latency-svc-sp7hc [870.667814ms]
May 22 14:37:37.232: INFO: Created: latency-svc-f7mvx
May 22 14:37:37.277: INFO: Got endpoints: latency-svc-f7mvx [856.944095ms]
May 22 14:37:37.291: INFO: Created: latency-svc-8dh7x
May 22 14:37:37.322: INFO: Got endpoints: latency-svc-8dh7x [888.383611ms]
May 22 14:37:37.416: INFO: Created: latency-svc-cs6qt
May 22 14:37:37.418: INFO: Got endpoints: latency-svc-cs6qt [936.493766ms]
May 22 14:37:37.460: INFO: Created: latency-svc-pf98q
May 22 14:37:37.472: INFO: Got endpoints: latency-svc-pf98q [920.314871ms]
May 22 14:37:37.508: INFO: Created: latency-svc-9whdv
May 22 14:37:37.547: INFO: Got endpoints: latency-svc-9whdv [956.474509ms]
May 22 14:37:37.575: INFO: Created: latency-svc-8hnqq
May 22 14:37:37.592: INFO: Got endpoints: latency-svc-8hnqq [960.296983ms]
May 22 14:37:37.634: INFO: Created: latency-svc-jnb8g
May 22 14:37:37.702: INFO: Got endpoints: latency-svc-jnb8g [984.985949ms]
May 22 14:37:37.704: INFO: Created: latency-svc-v97wj
May 22 14:37:37.713: INFO: Got endpoints: latency-svc-v97wj [925.142233ms]
May 22 14:37:37.742: INFO: Created: latency-svc-zpntq
May 22 14:37:37.767: INFO: Got endpoints: latency-svc-zpntq [894.027016ms]
May 22 14:37:37.834: INFO: Created: latency-svc-ptk2r
May 22 14:37:37.837: INFO: Got endpoints: latency-svc-ptk2r [927.812087ms]
May 22 14:37:37.868: INFO: Created: latency-svc-67khp
May 22 14:37:37.897: INFO: Got endpoints: latency-svc-67khp [952.109567ms]
May 22 14:37:37.927: INFO: Created: latency-svc-wdv9z
May 22 14:37:37.989: INFO: Got endpoints: latency-svc-wdv9z [972.512425ms]
May 22 14:37:37.992: INFO: Created: latency-svc-qr5tt
May 22 14:37:37.996: INFO: Got endpoints: latency-svc-qr5tt [939.139957ms]
May 22 14:37:38.018: INFO: Created: latency-svc-pc9pj
May 22 14:37:38.033: INFO: Got endpoints: latency-svc-pc9pj [894.045663ms]
May 22 14:37:38.053: INFO: Created: latency-svc-k5vhd
May 22 14:37:38.063: INFO: Got endpoints: latency-svc-k5vhd [879.258077ms]
May 22 14:37:38.122: INFO: Created: latency-svc-7zhsf
May 22 14:37:38.131: INFO: Got endpoints: latency-svc-7zhsf [853.663601ms]
May 22 14:37:38.131: INFO: Created: latency-svc-4v5ps
May 22 14:37:38.147: INFO: Got endpoints: latency-svc-4v5ps [825.691798ms]
May 22 14:37:38.186: INFO: Created: latency-svc-jwlqb
May 22 14:37:38.202: INFO: Got endpoints: latency-svc-jwlqb [783.733672ms]
May 22 14:37:38.271: INFO: Created: latency-svc-j757m
May 22 14:37:38.292: INFO: Got endpoints: latency-svc-j757m [820.463748ms]
May 22 14:37:38.332: INFO: Created: latency-svc-dqdkp
May 22 14:37:38.346: INFO: Got endpoints: latency-svc-dqdkp [799.872035ms]
May 22 14:37:38.432: INFO: Created: latency-svc-2flc2
May 22 14:37:38.435: INFO: Got endpoints: latency-svc-2flc2 [842.991067ms]
May 22 14:37:38.474: INFO: Created: latency-svc-m7n9r
May 22 14:37:38.491: INFO: Got endpoints: latency-svc-m7n9r [788.896919ms]
May 22 14:37:38.515: INFO: Created: latency-svc-xvdx5
May 22 14:37:38.558: INFO: Got endpoints: latency-svc-xvdx5 [845.031381ms]
May 22 14:37:38.558: INFO: Latencies: [149.322083ms 162.904194ms 186.227965ms 219.699676ms 327.977877ms 344.251854ms 454.382165ms 464.55649ms 598.299862ms 601.372383ms 692.258972ms 697.892584ms 721.527715ms 727.854858ms 733.535209ms 737.307875ms 739.551183ms 741.134941ms 749.442676ms 752.975894ms 754.601558ms 764.379591ms 767.727224ms 773.095063ms 777.073492ms 778.477575ms 779.968665ms 783.733285ms 783.733672ms 788.708262ms 788.896919ms 792.114473ms 796.499657ms 799.330741ms 799.872035ms 802.96317ms 803.831329ms 804.195016ms 805.64297ms 807.886516ms 817.398165ms 817.439129ms 820.463748ms 820.622729ms 821.419917ms 825.691798ms 826.215502ms 826.359357ms 826.469324ms 829.209405ms 831.375795ms 832.093596ms 838.743023ms 839.300667ms 840.23692ms 840.25874ms 841.485776ms 842.211884ms 842.991067ms 845.031381ms 845.271325ms 850.552078ms 851.33752ms 851.356198ms 851.992911ms 852.991199ms 853.663601ms 854.058958ms 856.944095ms 858.540688ms 861.390871ms 865.002639ms 866.145046ms 867.145423ms 867.78296ms 869.463509ms 869.914215ms 870.667814ms 871.558429ms 872.912382ms 873.301489ms 875.109751ms 876.20591ms 877.002431ms 877.680872ms 877.716848ms 878.400668ms 878.861034ms 879.258077ms 880.240706ms 883.822215ms 884.090468ms 884.720802ms 886.92815ms 888.383611ms 889.985028ms 891.949548ms 894.027016ms 894.045663ms 896.378781ms 897.661267ms 903.221517ms 903.444295ms 903.593036ms 904.139989ms 905.115008ms 908.415652ms 908.951359ms 909.562548ms 911.878182ms 912.91103ms 913.382794ms 915.486273ms 915.51246ms 916.082224ms 916.954735ms 920.314871ms 921.541167ms 921.627792ms 922.159803ms 922.180827ms 922.258923ms 925.142233ms 925.309273ms 927.21586ms 927.442821ms 927.812087ms 927.840776ms 928.083876ms 928.328036ms 928.439922ms 930.788197ms 932.615371ms 933.795819ms 933.897844ms 934.050487ms 936.493766ms 936.73033ms 939.139957ms 940.433462ms 941.564985ms 941.767042ms 942.23928ms 945.734688ms 950.557182ms 951.783872ms 952.095954ms 952.109567ms 952.908027ms 952.981674ms 953.959754ms 955.459044ms 956.474509ms 957.590351ms 958.037927ms 960.00161ms 960.296983ms 963.337578ms 965.589847ms 970.399285ms 972.036801ms 972.15611ms 972.512425ms 980.43485ms 981.169085ms 982.209861ms 982.430643ms 982.65299ms 984.985949ms 986.313707ms 999.319497ms 1.004194079s 1.01695313s 1.020572996s 1.042568125s 1.045813501s 1.050149518s 1.051894404s 1.055310855s 1.061728468s 1.094575851s 1.09719106s 1.104860918s 1.111743015s 1.116098132s 1.116835613s 1.11898752s 1.125784232s 1.138237862s 1.140477238s 1.150498949s 1.153109002s 1.169512914s 1.16994597s 1.190166495s 1.193117807s 1.196574357s 1.19957757s 1.219338095s 1.271386748s]
May 22 14:37:38.558: INFO: 50 %ile: 897.661267ms
May 22 14:37:38.558: INFO: 90 %ile: 1.094575851s
May 22 14:37:38.558: INFO: 99 %ile: 1.219338095s
May 22 14:37:38.558: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 22 14:37:38.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-947" for this suite.
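(Note: the 50/90/99 %ile figures above are rank statistics over the 200 sorted samples in the Latencies array. The following is a minimal Go sketch of that computation, not the e2e framework's actual code; the helper name `percentile` and the nearest-rank indexing are assumptions, and the framework's exact rounding may differ.)

```go
// Sketch: reproduce the percentile summary printed above from a slice of
// latency samples. Nearest-rank indexing is an assumption.
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0 < p <= 100) of an
// already-sorted slice, using the nearest-rank method.
func percentile(sorted []time.Duration, p int) time.Duration {
	idx := (len(sorted) * p) / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A few of the 200 samples logged above, for illustration only.
	latencies := []time.Duration{
		149322083 * time.Nanosecond,  // 149.322083ms
		897661267 * time.Nanosecond,  // 897.661267ms
		1271386748 * time.Nanosecond, // 1.271386748s
		// ...remaining samples elided...
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(latencies, p))
	}
	fmt.Printf("Total sample count: %d\n", len(latencies))
}
```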
May 22 14:38:02.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 22 14:38:02.657: INFO: namespace svc-latency-947 deletion completed in 24.094886506s
• [SLOW TEST:39.615 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 22 14:38:02.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 22 14:38:06.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7102" for this suite.
May 22 14:38:12.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 22 14:38:12.976: INFO: namespace emptydir-wrapper-7102 deletion completed in 6.091195084s
• [SLOW TEST:10.319 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
May 22 14:38:12.976: INFO: Running AfterSuite actions on all nodes
May 22 14:38:12.976: INFO: Running AfterSuite actions on node 1
May 22 14:38:12.976: INFO: Skipping dumping logs from cluster
Ran 215 of 4412 Specs in 6148.226 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS
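(Note: the "EmptyDir wrapper volumes should not conflict" spec above creates a secret and a configmap, mounts both as volumes in a single pod, and verifies the pod starts cleanly, which is why its teardown logs the secret, configmap, and pod cleanup steps. The following is a minimal client-go style sketch of the pod shape that spec exercises, not the framework's actual helper; the object names and busybox image are illustrative assumptions.)

```go
// Sketch: a pod mounting a secret volume and a configmap volume side by
// side, the shape exercised by the "should not conflict" spec above.
// Names are hypothetical placeholders, not the test's real objects.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func wrapperVolumesPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapper-volumes-pod", Namespace: namespace},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{
					Name: "secret-volume",
					VolumeSource: corev1.VolumeSource{
						Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"},
					},
				},
				{
					Name: "configmap-volume",
					VolumeSource: corev1.VolumeSource{
						ConfigMap: &corev1.ConfigMapVolumeSource{
							LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap"},
						},
					},
				},
			},
			Containers: []corev1.Container{{
				Name:  "wrapper-test",
				Image: "busybox",
				// Both volumes land in one container; the spec passes if the
				// kubelet materializes them without their wrapper EmptyDirs
				// colliding.
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume"},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume"},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() {
	// Print the spec; in the real test this object would be submitted via a
	// clientset's Pods(namespace).Create call.
	fmt.Printf("%+v\n", wrapperVolumesPod("emptydir-wrapper-7102"))
}
```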