I0507 12:55:44.566448 6 e2e.go:243] Starting e2e run "c79be77e-30b8-437a-9018-ff7265094089" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588856143 - Will randomize all specs
Will run 215 of 4412 specs

May 7 12:55:44.756: INFO: >>> kubeConfig: /root/.kube/config
May 7 12:55:44.761: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 7 12:55:44.782: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 7 12:55:44.813: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 7 12:55:44.813: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 7 12:55:44.813: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 7 12:55:44.824: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 7 12:55:44.824: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 7 12:55:44.824: INFO: e2e test version: v1.15.11
May 7 12:55:44.825: INFO: kube-apiserver version: v1.15.7
[k8s.io] InitContainer [NodeConformance]
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 12:55:44.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
May 7 12:55:44.891: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 7 12:55:44.893: INFO: PodSpec: initContainers in spec.initContainers
May 7 12:56:29.570: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c950f98f-aed5-4c54-b822-2779dfe09ef8", GenerateName:"", Namespace:"init-container-574", SelfLink:"/api/v1/namespaces/init-container-574/pods/pod-init-c950f98f-aed5-4c54-b822-2779dfe09ef8", UID:"8e0e58d0-691c-4c2d-8b45-11a87e15cc6f", ResourceVersion:"9525540", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724452944, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"893484604"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-9q8xr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001329180), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9q8xr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File",
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9q8xr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9q8xr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002c0f338), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001f94780), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c0f3c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c0f3e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002c0f3e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002c0f3ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724452945, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724452945, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724452945, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724452944, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.32", StartTime:(*v1.Time)(0xc001f17920), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025ccf50)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025ccfc0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://3c671446d044ba45472c237ba7d89e21bdb491d7e4a00b54dda833f908258697"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f17960), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f17940), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
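The one-line dump above is dense; reconstructed from it, the pod under test is essentially the following k8s.io/api literal. This is a sketch: the image names, commands, labels and restart policy are taken from the dump, while the shortened pod name and the main()/JSON-print wrapper are illustrative only.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// init1 exits non-zero on every restart, so on a RestartAlways pod the
	// kubelet must keep retrying it and never start init2 or run1 (the dump
	// above shows init1 at RestartCount:3 with init2 and run1 still Waiting).
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-init-example", // illustrative; the test generates a UUID-suffixed name
			Labels: map[string]string{"name": "foo"},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}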
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 12:56:29.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-574" for this suite.
May 7 12:56:51.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 12:56:51.727: INFO: namespace init-container-574 deletion completed in 22.111733457s

• [SLOW TEST:66.902 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 12:56:51.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-19d55e92-f120-43f8-9179-4d2362c6955f
STEP: Creating a pod to test consume secrets
May 7 12:56:51.859: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b1733b3f-83bb-45f0-bb1d-ade55ac6233a" in namespace "projected-2475" to be "success or failure"
May 7 12:56:51.906: INFO: Pod "pod-projected-secrets-b1733b3f-83bb-45f0-bb1d-ade55ac6233a": Phase="Pending", Reason="", readiness=false. Elapsed: 46.391422ms
May 7 12:56:53.910: INFO: Pod "pod-projected-secrets-b1733b3f-83bb-45f0-bb1d-ade55ac6233a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050339969s
May 7 12:56:55.914: INFO: Pod "pod-projected-secrets-b1733b3f-83bb-45f0-bb1d-ade55ac6233a": Phase="Running", Reason="", readiness=true. Elapsed: 4.054636809s
May 7 12:56:57.918: INFO: Pod "pod-projected-secrets-b1733b3f-83bb-45f0-bb1d-ade55ac6233a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058760348s
STEP: Saw pod success
May 7 12:56:57.918: INFO: Pod "pod-projected-secrets-b1733b3f-83bb-45f0-bb1d-ade55ac6233a" satisfied condition "success or failure"
May 7 12:56:57.922: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-b1733b3f-83bb-45f0-bb1d-ade55ac6233a container projected-secret-volume-test:
STEP: delete the pod
May 7 12:56:57.980: INFO: Waiting for pod pod-projected-secrets-b1733b3f-83bb-45f0-bb1d-ade55ac6233a to disappear
May 7 12:56:57.988: INFO: Pod pod-projected-secrets-b1733b3f-83bb-45f0-bb1d-ade55ac6233a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 12:56:57.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2475" for this suite.
May 7 12:57:04.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 12:57:04.110: INFO: namespace projected-2475 deletion completed in 6.118795526s

• [SLOW TEST:12.383 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
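What this spec exercises: a secret projected into a volume with an explicit defaultMode, consumed by a pod running as a non-root UID with an fsGroup set. A minimal sketch of a pod in that shape follows; the mode, UID and GID values, the secret name, and the container command are illustrative assumptions, since the log does not print the exact ones the test uses.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0440)               // illustrative defaultMode
	uid, gid := int64(1000), int64(2000) // illustrative non-root UID and fsGroup
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			// fsGroup makes the projected files group-owned by gid; defaultMode
			// controls their permission bits, which the test then verifies.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &gid},
			RestartPolicy:   corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-secret-volume", MountPath: "/etc/projected",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}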
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 12:57:04.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8943
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-8943
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8943
May 7 12:57:04.228: INFO: Found 0 stateful pods, waiting for 1
May 7 12:57:14.233: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
May 7 12:57:14.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8943 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 7 12:57:16.859: INFO: stderr: "I0507 12:57:16.730495 32 log.go:172] (0xc000118630) (0xc00028e820) Create stream\nI0507 12:57:16.730563 32 log.go:172] (0xc000118630) (0xc00028e820) Stream added, broadcasting: 1\nI0507 12:57:16.733948 32 log.go:172] (0xc000118630) Reply frame received for 1\nI0507 12:57:16.733998 32 log.go:172] (0xc000118630) (0xc00022c000) Create stream\nI0507 12:57:16.734027 32 log.go:172] (0xc000118630) (0xc00022c000) Stream added, broadcasting: 3\nI0507 12:57:16.735068 32 log.go:172] (0xc000118630) Reply frame received for 3\nI0507 12:57:16.735119 32 log.go:172] (0xc000118630) (0xc00026c000) Create stream\nI0507 12:57:16.735143 32 log.go:172] (0xc000118630) (0xc00026c000) Stream added, broadcasting: 5\nI0507 12:57:16.736210 32 log.go:172] (0xc000118630) Reply frame received for 5\nI0507 12:57:16.821870 32 log.go:172] (0xc000118630) Data frame received for 5\nI0507 12:57:16.821914 32 log.go:172] (0xc00026c000) (5) Data frame handling\nI0507 12:57:16.821950 32 log.go:172] (0xc00026c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0507 12:57:16.851668 32 log.go:172] (0xc000118630) Data frame received for 3\nI0507 12:57:16.851711 32 log.go:172] (0xc00022c000) (3) Data frame handling\nI0507 12:57:16.851763 32 log.go:172] (0xc00022c000) (3) Data frame sent\nI0507 12:57:16.851793 32 log.go:172] (0xc000118630) Data frame received for 3\nI0507 12:57:16.851829 32 log.go:172] (0xc00022c000) (3) Data frame handling\nI0507 12:57:16.851891 32 log.go:172] (0xc000118630) Data frame received for 5\nI0507 12:57:16.851922 32 log.go:172] (0xc00026c000) (5) Data frame handling\nI0507 12:57:16.854223 32 log.go:172] (0xc000118630) Data frame received for 1\nI0507 12:57:16.854253 32 log.go:172] (0xc00028e820) (1) Data frame handling\nI0507 12:57:16.854277 32 log.go:172] (0xc00028e820) (1) Data frame sent\nI0507 12:57:16.854306 32 log.go:172] (0xc000118630) (0xc00028e820) Stream removed, broadcasting: 1\nI0507 12:57:16.854329 32 log.go:172] (0xc000118630) Go away received\nI0507 12:57:16.854693 32 log.go:172] (0xc000118630) (0xc00028e820) Stream removed, broadcasting: 1\nI0507 12:57:16.854711 32 log.go:172] (0xc000118630) (0xc00022c000) Stream removed, broadcasting: 3\nI0507 12:57:16.854718 32 log.go:172] (0xc000118630) (0xc00026c000) Stream removed, broadcasting: 5\n"
May 7 12:57:16.859: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 7 12:57:16.859: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 7 12:57:16.863: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 7 12:57:26.867: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 7 12:57:26.868: INFO: Waiting for statefulset status.replicas updated to 0
May 7 12:57:26.888: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999754s
May 7 12:57:27.892: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988373004s
May 7 12:57:28.897: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.984842439s
May 7 12:57:29.901: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.979765839s
May 7 12:57:30.905: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.976146394s
May 7 12:57:31.910: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.971646553s
May 7 12:57:32.915: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.9665472s
May 7 12:57:33.920: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.962268607s
May 7 12:57:34.924: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.957245434s
May 7 12:57:35.929: INFO: Verifying statefulset ss doesn't scale past 1 for another 952.967481ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8943
May 7 12:57:36.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8943 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 7 12:57:37.149: INFO: stderr: "I0507 12:57:37.055076 63 log.go:172] (0xc000a3a630) (0xc0007ccaa0) Create stream\nI0507 12:57:37.055128 63 log.go:172] (0xc000a3a630) (0xc0007ccaa0) Stream added, broadcasting: 1\nI0507 12:57:37.060400 63 log.go:172] (0xc000a3a630) Reply frame received for 1\nI0507 12:57:37.060437 63 log.go:172] (0xc000a3a630) (0xc0007cc1e0) Create stream\nI0507 12:57:37.060449 63 log.go:172] (0xc000a3a630) (0xc0007cc1e0) Stream added, broadcasting: 3\nI0507 12:57:37.061606 63 log.go:172] (0xc000a3a630) Reply frame received for 3\nI0507 12:57:37.061654 63 log.go:172] (0xc000a3a630) (0xc0001f6000) Create stream\nI0507 12:57:37.061675 63 log.go:172] (0xc000a3a630) (0xc0001f6000) Stream added, broadcasting: 5\nI0507 12:57:37.063825 63 log.go:172] (0xc000a3a630) Reply frame received for 5\nI0507 12:57:37.141372 63 log.go:172] (0xc000a3a630) Data frame received for 5\nI0507 12:57:37.141415 63 log.go:172] (0xc0001f6000) (5) Data frame handling\nI0507 12:57:37.141431 63 log.go:172] (0xc0001f6000) (5) Data frame sent\nI0507 12:57:37.141443 63 log.go:172] (0xc000a3a630) Data frame received for 5\nI0507 12:57:37.141453 63 log.go:172] (0xc0001f6000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0507 12:57:37.141484 63 log.go:172] (0xc000a3a630) Data frame received for 3\nI0507 12:57:37.141502 63 log.go:172] (0xc0007cc1e0) (3) Data frame handling\nI0507 12:57:37.141513 63 log.go:172] (0xc0007cc1e0) (3) Data frame sent\nI0507 12:57:37.141528 63 log.go:172] (0xc000a3a630) Data frame received for 3\nI0507 12:57:37.141538 63 log.go:172] (0xc0007cc1e0) (3) Data frame handling\nI0507 12:57:37.143226 63 log.go:172] (0xc000a3a630) Data frame received for 1\nI0507 12:57:37.143279 63 log.go:172] (0xc0007ccaa0) (1) Data frame handling\nI0507 12:57:37.143301 63 log.go:172] (0xc0007ccaa0) (1) Data frame sent\nI0507 12:57:37.143327 63 log.go:172] (0xc000a3a630) (0xc0007ccaa0) Stream removed, broadcasting: 1\nI0507 12:57:37.143360 63 log.go:172] (0xc000a3a630) Go away received\nI0507 12:57:37.143833 63 log.go:172] (0xc000a3a630) (0xc0007ccaa0) Stream removed, broadcasting: 1\nI0507 12:57:37.143855 63 log.go:172] (0xc000a3a630) (0xc0007cc1e0) Stream removed, broadcasting: 3\nI0507 12:57:37.143866 63 log.go:172] (0xc000a3a630) (0xc0001f6000) Stream removed, broadcasting: 5\n"
May 7 12:57:37.149: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 7 12:57:37.149: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 7 12:57:37.153: INFO: Found 1 stateful pods, waiting for 3
May 7 12:57:47.163: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 7 12:57:47.163: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 7 12:57:47.163: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
May 7 12:57:47.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8943 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 7 12:57:47.478: INFO: stderr: "I0507 12:57:47.288565 83 log.go:172] (0xc0001166e0) (0xc0005f2f00) Create stream\nI0507 12:57:47.288660 83 log.go:172] (0xc0001166e0) (0xc0005f2f00) Stream added, broadcasting: 1\nI0507 12:57:47.293039 83 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0507 12:57:47.293083 83 log.go:172] (0xc0001166e0) (0xc0005f26e0) Create stream\nI0507 12:57:47.293094 83 log.go:172] (0xc0001166e0) (0xc0005f26e0) Stream added, broadcasting: 3\nI0507 12:57:47.294255 83 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0507 12:57:47.294294 83 log.go:172] (0xc0001166e0) (0xc000547720) Create stream\nI0507 12:57:47.294303 83 log.go:172] (0xc0001166e0) (0xc000547720) Stream added, broadcasting: 5\nI0507 12:57:47.295272 83 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0507 12:57:47.472379 83 log.go:172] (0xc0001166e0) Data frame received for 5\nI0507 12:57:47.472401 83 log.go:172] (0xc000547720) (5) Data frame handling\nI0507 12:57:47.472413 83 log.go:172] (0xc000547720) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0507 12:57:47.472474 83 log.go:172] (0xc0001166e0) Data frame received for 5\nI0507 12:57:47.472490 83 log.go:172] (0xc000547720) (5) Data frame handling\nI0507 12:57:47.472537 83 log.go:172] (0xc0001166e0) Data frame received for 3\nI0507 12:57:47.472567 83 log.go:172] (0xc0005f26e0) (3) Data frame handling\nI0507 12:57:47.472575 83 log.go:172] (0xc0005f26e0) (3) Data frame sent\nI0507 12:57:47.472583 83 log.go:172] (0xc0001166e0) Data frame received for 3\nI0507 12:57:47.472593 83 log.go:172] (0xc0005f26e0) (3) Data frame handling\nI0507 12:57:47.474028 83 log.go:172] (0xc0001166e0) Data frame received for 1\nI0507 12:57:47.474043 83 log.go:172] (0xc0005f2f00) (1) Data frame handling\nI0507 12:57:47.474053 83 log.go:172] (0xc0005f2f00) (1) Data frame sent\nI0507 12:57:47.474065 83 log.go:172] (0xc0001166e0) (0xc0005f2f00) Stream removed, broadcasting: 1\nI0507 12:57:47.474079 83 log.go:172] (0xc0001166e0) Go away received\nI0507 12:57:47.474367 83 log.go:172] (0xc0001166e0) (0xc0005f2f00) Stream removed, broadcasting: 1\nI0507 12:57:47.474389 83 log.go:172] (0xc0001166e0) (0xc0005f26e0) Stream removed, broadcasting: 3\nI0507 12:57:47.474402 83 log.go:172] (0xc0001166e0) (0xc000547720) Stream removed, broadcasting: 5\n"
May 7 12:57:47.478: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 7 12:57:47.478: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 7 12:57:47.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8943 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 7 12:57:47.717: INFO: stderr: "I0507 12:57:47.597034 105 log.go:172] (0xc000996420) (0xc000891e00) Create stream\nI0507 12:57:47.597095 105 log.go:172] (0xc000996420) (0xc000891e00) Stream added, broadcasting: 1\nI0507 12:57:47.599744 105 log.go:172] (0xc000996420) Reply frame received for 1\nI0507 12:57:47.599787 105 log.go:172] (0xc000996420) (0xc00003a0a0) Create stream\nI0507 12:57:47.599804 105 log.go:172] (0xc000996420) (0xc00003a0a0) Stream added, broadcasting: 3\nI0507 12:57:47.600984 105 log.go:172] (0xc000996420) Reply frame received for 3\nI0507 12:57:47.601020 105 log.go:172] (0xc000996420) (0xc000891ea0) Create stream\nI0507 12:57:47.601032 105 log.go:172] (0xc000996420) (0xc000891ea0) Stream added, broadcasting: 5\nI0507 12:57:47.602309 105 log.go:172] (0xc000996420) Reply frame received for 5\nI0507 12:57:47.676786 105 log.go:172] (0xc000996420) Data frame received for 5\nI0507 12:57:47.676826 105 log.go:172] (0xc000891ea0) (5) Data frame handling\nI0507 12:57:47.676844 105 log.go:172] (0xc000891ea0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0507 12:57:47.710161 105 log.go:172] (0xc000996420) Data frame received for 3\nI0507 12:57:47.710188 105 log.go:172] (0xc00003a0a0) (3) Data frame handling\nI0507 12:57:47.710195 105 log.go:172] (0xc00003a0a0) (3) Data frame sent\nI0507 12:57:47.710203 105 log.go:172] (0xc000996420) Data frame received for 3\nI0507 12:57:47.710209 105 log.go:172] (0xc00003a0a0) (3) Data frame handling\nI0507 12:57:47.710236 105 log.go:172] (0xc000996420) Data frame received for 5\nI0507 12:57:47.710245 105 log.go:172] (0xc000891ea0) (5) Data frame handling\nI0507 12:57:47.711967 105 log.go:172] (0xc000996420) Data frame received for 1\nI0507 12:57:47.711994 105 log.go:172] (0xc000891e00) (1) Data frame handling\nI0507 12:57:47.712011 105 log.go:172] (0xc000891e00) (1) Data frame sent\nI0507 12:57:47.712024 105 log.go:172] (0xc000996420) (0xc000891e00) Stream removed, broadcasting: 1\nI0507 12:57:47.712038 105 log.go:172] (0xc000996420) Go away received\nI0507 12:57:47.712428 105 log.go:172] (0xc000996420) (0xc000891e00) Stream removed, broadcasting: 1\nI0507 12:57:47.712444 105 log.go:172] (0xc000996420) (0xc00003a0a0) Stream removed, broadcasting: 3\nI0507 12:57:47.712451 105 log.go:172] (0xc000996420) (0xc000891ea0) Stream removed, broadcasting: 5\n"
May 7 12:57:47.718: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 7 12:57:47.718: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 7 12:57:47.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8943 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 7 12:57:47.987: INFO: stderr: "I0507 12:57:47.875461 125 log.go:172] (0xc000116dc0) (0xc0006b46e0) Create stream\nI0507 12:57:47.875526 125 log.go:172] (0xc000116dc0) (0xc0006b46e0) Stream added, broadcasting: 1\nI0507 12:57:47.877938 125 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0507 12:57:47.877994 125 log.go:172] (0xc000116dc0) (0xc0007ee000) Create stream\nI0507 12:57:47.878018 125 log.go:172] (0xc000116dc0) (0xc0007ee000) Stream added, broadcasting: 3\nI0507 12:57:47.879108 125 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0507 12:57:47.879133 125 log.go:172] (0xc000116dc0) (0xc0006b4780) Create stream\nI0507 12:57:47.879141 125 log.go:172] (0xc000116dc0) (0xc0006b4780) Stream added, broadcasting: 5\nI0507 12:57:47.880076 125 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0507 12:57:47.956643 125 log.go:172] (0xc000116dc0) Data frame received for 5\nI0507 12:57:47.956676 125 log.go:172] (0xc0006b4780) (5) Data frame handling\nI0507 12:57:47.956703 125 log.go:172] (0xc0006b4780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0507 12:57:47.980129 125 log.go:172] (0xc000116dc0) Data frame received for 3\nI0507 12:57:47.980156 125 log.go:172] (0xc0007ee000) (3) Data frame handling\nI0507 12:57:47.980177 125 log.go:172] (0xc0007ee000) (3) Data frame sent\nI0507 12:57:47.980501 125 log.go:172] (0xc000116dc0) Data frame received for 5\nI0507 12:57:47.980533 125 log.go:172] (0xc0006b4780) (5) Data frame handling\nI0507 12:57:47.980627 125 log.go:172] (0xc000116dc0) Data frame received for 3\nI0507 12:57:47.980652 125 log.go:172] (0xc0007ee000) (3) Data frame handling\nI0507 12:57:47.983001 125 log.go:172] (0xc000116dc0) Data frame received for 1\nI0507 12:57:47.983042 125 log.go:172] (0xc0006b46e0) (1) Data frame handling\nI0507 12:57:47.983058 125 log.go:172] (0xc0006b46e0) (1) Data frame sent\nI0507 12:57:47.983072 125 log.go:172] (0xc000116dc0) (0xc0006b46e0) Stream removed, broadcasting: 1\nI0507 12:57:47.983134 125 log.go:172] (0xc000116dc0) Go away received\nI0507 12:57:47.983273 125 log.go:172] (0xc000116dc0) (0xc0006b46e0) Stream removed, broadcasting: 1\nI0507 12:57:47.983285 125 log.go:172] (0xc000116dc0) (0xc0007ee000) Stream removed, broadcasting: 3\nI0507 12:57:47.983290 125 log.go:172] (0xc000116dc0) (0xc0006b4780) Stream removed, broadcasting: 5\n"
May 7 12:57:47.987: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 7 12:57:47.987: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 7 12:57:47.987: INFO: Waiting for statefulset status.replicas updated to 0
May 7 12:57:47.990: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
May 7 12:57:57.999: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 7 12:57:57.999: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 7 12:57:57.999: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 7 12:57:58.027: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999245s
May 7 12:57:59.033: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.978226259s
May 7 12:58:00.036: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972339436s
May 7 12:58:01.042: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.969334671s
May 7 12:58:02.047: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.963862982s
May 7 12:58:03.053: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.958188261s
May 7 12:58:04.059: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.952461904s
May 7 12:58:05.064: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.946949333s
May 7 12:58:06.070: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.941595998s
May 7 12:58:07.076: INFO: Verifying statefulset ss doesn't scale past 3 for another 935.880541ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8943
May 7 12:58:08.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8943 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 7 12:58:08.285: INFO: stderr: "I0507 12:58:08.211246 145 log.go:172] (0xc000918630) (0xc0006bea00) Create stream\nI0507 12:58:08.211320 145 log.go:172] (0xc000918630) (0xc0006bea00) Stream added, broadcasting: 1\nI0507 12:58:08.215794 145 log.go:172] (0xc000918630) Reply frame received for 1\nI0507 12:58:08.215835 145 log.go:172] (0xc000918630) (0xc000286000) Create stream\nI0507 12:58:08.215848 145 log.go:172] (0xc000918630) (0xc000286000) Stream added, broadcasting: 3\nI0507 12:58:08.216784 145 log.go:172] (0xc000918630) Reply frame received for 3\nI0507 12:58:08.216819 145 log.go:172] (0xc000918630) (0xc0006be280) Create stream\nI0507 12:58:08.216837 145 log.go:172] (0xc000918630) (0xc0006be280) Stream added, broadcasting: 5\nI0507 12:58:08.217931 145 log.go:172] (0xc000918630) Reply frame received for 5\nI0507 12:58:08.280006 145 log.go:172] (0xc000918630) Data frame received for 5\nI0507 12:58:08.280063 145 log.go:172] (0xc0006be280) (5) Data frame handling\nI0507 12:58:08.280082 145 log.go:172] (0xc0006be280) (5) Data frame sent\nI0507 12:58:08.280096 145 log.go:172] (0xc000918630) Data frame received for 5\nI0507 12:58:08.280120 145 log.go:172] (0xc0006be280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0507 12:58:08.280150 145 log.go:172] (0xc000918630) Data frame received for 3\nI0507 12:58:08.280174 145 log.go:172] (0xc000286000) (3) Data frame handling\nI0507 12:58:08.280185 145 log.go:172] (0xc000286000) (3) Data frame sent\nI0507 12:58:08.280194 145 log.go:172] (0xc000918630) Data frame received for 3\nI0507 12:58:08.280204 145 log.go:172] (0xc000286000) (3) Data frame handling\nI0507 12:58:08.281653 145 log.go:172] (0xc000918630) Data frame received for 1\nI0507 12:58:08.281667 145 log.go:172] (0xc0006bea00) (1) Data frame handling\nI0507 12:58:08.281675 145 log.go:172] (0xc0006bea00) (1) Data frame sent\nI0507 12:58:08.281687 145 log.go:172] (0xc000918630) (0xc0006bea00) Stream removed, broadcasting: 1\nI0507 12:58:08.281779 145 log.go:172] (0xc000918630) Go away received\nI0507 12:58:08.281906 145 log.go:172] (0xc000918630) (0xc0006bea00) Stream removed, broadcasting: 1\nI0507 12:58:08.281919 145 log.go:172] (0xc000918630) (0xc000286000) Stream removed, broadcasting: 3\nI0507 12:58:08.281927 145 log.go:172] (0xc000918630) (0xc0006be280) Stream removed, broadcasting: 5\n"
May 7 12:58:08.285: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 7 12:58:08.285: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 7 12:58:08.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8943 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 7 12:58:08.505: INFO: stderr: "I0507 12:58:08.422442 166 log.go:172] (0xc00071c420) (0xc0007326e0) Create stream\nI0507 12:58:08.422502 166 log.go:172] (0xc00071c420) (0xc0007326e0) Stream added, broadcasting: 1\nI0507 12:58:08.426919 166 log.go:172] (0xc00071c420) Reply frame received for 1\nI0507 12:58:08.426971 166 log.go:172] (0xc00071c420) (0xc000732000) Create stream\nI0507 12:58:08.426986 166 log.go:172] (0xc00071c420) (0xc000732000) Stream added, broadcasting: 3\nI0507 12:58:08.427850 166 log.go:172] (0xc00071c420) Reply frame received for 3\nI0507 12:58:08.427890 166 log.go:172] (0xc00071c420) (0xc0006aa320) Create stream\nI0507 12:58:08.427903 166 log.go:172] (0xc00071c420) (0xc0006aa320) Stream added, broadcasting: 5\nI0507 12:58:08.428682 166 log.go:172] (0xc00071c420) Reply frame received for 5\nI0507 12:58:08.498661 166 log.go:172] (0xc00071c420) Data frame received for 5\nI0507 12:58:08.498692 166 log.go:172] (0xc0006aa320) (5) Data frame handling\nI0507 12:58:08.498707 166 log.go:172] (0xc0006aa320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0507 12:58:08.498747 166 log.go:172] (0xc00071c420) Data frame received for 3\nI0507 12:58:08.498780 166 log.go:172] (0xc000732000) (3) Data frame handling\nI0507 12:58:08.498791 166 log.go:172] (0xc000732000) (3) Data frame sent\nI0507 12:58:08.498798 166 log.go:172] (0xc00071c420) Data frame received for 3\nI0507 12:58:08.498805 166 log.go:172] (0xc000732000) (3) Data frame handling\nI0507 12:58:08.498885 166 log.go:172] (0xc00071c420) Data frame received for 5\nI0507 12:58:08.498918 166 log.go:172] (0xc0006aa320) (5) Data frame handling\nI0507 12:58:08.500550 166 log.go:172] (0xc00071c420) Data frame received for 1\nI0507 12:58:08.500569 166 log.go:172] (0xc0007326e0) (1) Data frame handling\nI0507 12:58:08.500576 166 log.go:172] (0xc0007326e0) (1) Data frame sent\nI0507 12:58:08.500585 166 log.go:172] (0xc00071c420) (0xc0007326e0) Stream removed, broadcasting: 1\nI0507 12:58:08.500636 166 log.go:172] (0xc00071c420) Go away received\nI0507 12:58:08.500895 166 log.go:172] (0xc00071c420) (0xc0007326e0) Stream removed, broadcasting: 1\nI0507 12:58:08.500909 166 log.go:172] (0xc00071c420) (0xc000732000) Stream removed, broadcasting: 3\nI0507 12:58:08.500915 166 log.go:172] (0xc00071c420) (0xc0006aa320) Stream removed, broadcasting: 5\n"
May 7 12:58:08.506: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 7 12:58:08.506: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 7 12:58:08.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8943 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 7 12:58:08.706: INFO: stderr: "I0507 12:58:08.638251 188 log.go:172] (0xc0009ba420) (0xc0002de6e0) Create stream\nI0507 12:58:08.638318 188 log.go:172] (0xc0009ba420) (0xc0002de6e0) Stream added, broadcasting: 1\nI0507 12:58:08.641054 188 log.go:172] (0xc0009ba420) Reply frame received for 1\nI0507 12:58:08.641333 188 log.go:172] (0xc0009ba420) (0xc00096c000) Create stream\nI0507 12:58:08.641371 188 log.go:172] (0xc0009ba420) (0xc00096c000) Stream added, broadcasting: 3\nI0507 12:58:08.642806 188 log.go:172] (0xc0009ba420) Reply frame received for 3\nI0507 12:58:08.642852 188 log.go:172] (0xc0009ba420) (0xc00096c0a0) Create stream\nI0507 12:58:08.642865 188 log.go:172] (0xc0009ba420) (0xc00096c0a0) Stream added, broadcasting: 5\nI0507 12:58:08.643727 188 log.go:172] (0xc0009ba420) Reply frame received for 5\nI0507 12:58:08.700349 188 log.go:172] (0xc0009ba420) Data frame received for 5\nI0507 12:58:08.700405 188 log.go:172] (0xc00096c0a0) (5) Data frame handling\nI0507 12:58:08.700425 188 log.go:172] (0xc00096c0a0) (5) Data frame sent\nI0507 12:58:08.700438 188 log.go:172] (0xc0009ba420) Data frame received for 5\nI0507 12:58:08.700448 188 log.go:172] (0xc00096c0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0507 12:58:08.700474 188 log.go:172] (0xc0009ba420) Data frame received for 3\nI0507 12:58:08.700492 188 log.go:172] (0xc00096c000) (3) Data frame handling\nI0507 12:58:08.700504 188 log.go:172] (0xc00096c000) (3) Data frame sent\nI0507 12:58:08.700511 188 log.go:172] (0xc0009ba420) Data frame received for 3\nI0507 12:58:08.700517 188 log.go:172] (0xc00096c000) (3) Data frame handling\nI0507 12:58:08.701830 188 log.go:172] (0xc0009ba420) Data frame received for 1\nI0507 12:58:08.701850 188 log.go:172] (0xc0002de6e0) (1) Data frame handling\nI0507 12:58:08.701863 188 log.go:172] (0xc0002de6e0) (1) Data frame sent\nI0507 12:58:08.701873 188 log.go:172] (0xc0009ba420) (0xc0002de6e0) Stream removed, broadcasting: 1\nI0507 12:58:08.701885 188 log.go:172] (0xc0009ba420) Go away received\nI0507 12:58:08.702230 188 log.go:172] (0xc0009ba420) (0xc0002de6e0) Stream removed, broadcasting: 1\nI0507 12:58:08.702245 188 log.go:172] (0xc0009ba420) (0xc00096c000) Stream removed, broadcasting: 3\nI0507 12:58:08.702251 188 log.go:172] (0xc0009ba420) (0xc00096c0a0) Stream removed, broadcasting: 5\n"
May 7 12:58:08.707: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 7 12:58:08.707: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 7 12:58:08.707: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 7 12:58:38.723: INFO: Deleting all statefulset in ns statefulset-8943
May 7 12:58:38.727: INFO: Scaling statefulset ss to 0
May 7 12:58:38.736: INFO: Waiting for statefulset status.replicas updated to 0
May 7 12:58:38.738: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 12:58:38.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8943" for this suite.
May 7 12:58:44.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 12:58:44.847: INFO: namespace statefulset-8943 deletion completed in 6.091606899s

• [SLOW TEST:100.736 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
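The halting behavior above comes from OrderedReady pod management plus a readiness probe: the test breaks readiness by mv'ing index.html out of nginx's web root, and the controller then refuses to create pods past the unready one on scale-up (and to delete past it on scale-down, which proceeds in reverse ordinal order). A minimal sketch of a StatefulSet in that shape, assuming the v1.15-era k8s.io/api used by this suite (where Probe still embeds Handler); the HTTP probe itself is an assumption implied, not printed, by the log:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1) // the test later scales to 3, then down to 0
	labels := map[string]string{"foo": "bar", "baz": "blah"}
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			ServiceName: "test", // the headless service created in the BeforeEach
			Replicas:    &replicas,
			// OrderedReady is what makes scaling halt at an unready pod.
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Selector:            &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
						ReadinessProbe: &corev1.Probe{
							// Fails once index.html is mv'ed to /tmp, flipping Ready=false.
							Handler: corev1.Handler{
								HTTPGet: &corev1.HTTPGetAction{Path: "/index.html", Port: intstr.FromInt(80)},
							},
						},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}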
------------------------------
SSS
------------------------------
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 12:58:44.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6174/configmap-test-7645d957-31a7-419d-afcc-19a28425bafb
STEP: Creating a pod to test consume configMaps
May 7 12:58:44.952: INFO: Waiting up to 5m0s for pod "pod-configmaps-df8683f2-d6ce-4b1d-ad20-77b23899f1cf" in namespace "configmap-6174" to be "success or failure"
May 7 12:58:44.969: INFO: Pod "pod-configmaps-df8683f2-d6ce-4b1d-ad20-77b23899f1cf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.328202ms
May 7 12:58:46.998: INFO: Pod "pod-configmaps-df8683f2-d6ce-4b1d-ad20-77b23899f1cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046256663s
May 7 12:58:49.003: INFO: Pod "pod-configmaps-df8683f2-d6ce-4b1d-ad20-77b23899f1cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050500822s
STEP: Saw pod success
May 7 12:58:49.003: INFO: Pod "pod-configmaps-df8683f2-d6ce-4b1d-ad20-77b23899f1cf" satisfied condition "success or failure"
May 7 12:58:49.006: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-df8683f2-d6ce-4b1d-ad20-77b23899f1cf container env-test:
STEP: delete the pod
May 7 12:58:49.029: INFO: Waiting for pod pod-configmaps-df8683f2-d6ce-4b1d-ad20-77b23899f1cf to disappear
May 7 12:58:49.034: INFO: Pod pod-configmaps-df8683f2-d6ce-4b1d-ad20-77b23899f1cf no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 12:58:49.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6174" for this suite.
May 7 12:58:55.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 12:58:55.127: INFO: namespace configmap-6174 deletion completed in 6.090275914s

• [SLOW TEST:10.280 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
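The pattern this spec exercises: a ConfigMap key surfaced to the container as an environment variable, which the container then prints so the framework can check the value in its logs. A minimal sketch follows; the key, value, and env var names are illustrative, since the log does not show the ConfigMap's data:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
		Data:       map[string]string{"data-1": "value-1"}, // illustrative
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"}, // print env so the value is visible in logs
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}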
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 12:58:55.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-jkcx
STEP: Creating a pod to test atomic-volume-subpath
May 7 12:58:55.241: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jkcx" in namespace "subpath-3525" to be "success or failure"
May 7 12:58:55.257: INFO: Pod "pod-subpath-test-configmap-jkcx": Phase="Pending", Reason="", readiness=false. Elapsed: 16.504602ms
May 7 12:58:57.261: INFO: Pod "pod-subpath-test-configmap-jkcx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020622994s
May 7 12:58:59.265: INFO: Pod "pod-subpath-test-configmap-jkcx": Phase="Running", Reason="", readiness=true. Elapsed: 4.024719867s
May 7 12:59:01.269: INFO: Pod "pod-subpath-test-configmap-jkcx": Phase="Running", Reason="", readiness=true. Elapsed: 6.028691936s
May 7 12:59:03.274: INFO: Pod "pod-subpath-test-configmap-jkcx": Phase="Running", Reason="", readiness=true. Elapsed: 8.033230408s
May 7 12:59:05.279: INFO: Pod "pod-subpath-test-configmap-jkcx": Phase="Running", Reason="", readiness=true. Elapsed: 10.038457579s
May 7 12:59:07.284: INFO: Pod "pod-subpath-test-configmap-jkcx": Phase="Running", Reason="", readiness=true. Elapsed: 12.042880532s
May 7 12:59:09.288: INFO: Pod "pod-subpath-test-configmap-jkcx": Phase="Running", Reason="", readiness=true. Elapsed: 14.047074749s
May 7 12:59:11.292: INFO: Pod "pod-subpath-test-configmap-jkcx": Phase="Running", Reason="", readiness=true. Elapsed: 16.051311042s
May 7 12:59:13.297: INFO: Pod "pod-subpath-test-configmap-jkcx": Phase="Running", Reason="", readiness=true. Elapsed: 18.056054693s
May 7 12:59:15.302: INFO: Pod "pod-subpath-test-configmap-jkcx": Phase="Running", Reason="", readiness=true. Elapsed: 20.060779889s
May 7 12:59:17.306: INFO: Pod "pod-subpath-test-configmap-jkcx": Phase="Running", Reason="", readiness=true. Elapsed: 22.065328648s
May 7 12:59:19.310: INFO: Pod "pod-subpath-test-configmap-jkcx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.069343302s
STEP: Saw pod success
May 7 12:59:19.310: INFO: Pod "pod-subpath-test-configmap-jkcx" satisfied condition "success or failure"
May 7 12:59:19.312: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-jkcx container test-container-subpath-configmap-jkcx:
STEP: delete the pod
May 7 12:59:19.356: INFO: Waiting for pod pod-subpath-test-configmap-jkcx to disappear
May 7 12:59:19.362: INFO: Pod pod-subpath-test-configmap-jkcx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jkcx
May 7 12:59:19.362: INFO: Deleting pod "pod-subpath-test-configmap-jkcx" in namespace "subpath-3525"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 12:59:19.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3525" for this suite.
May 7 12:59:25.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 12:59:25.503: INFO: namespace subpath-3525 deletion completed in 6.136017485s

• [SLOW TEST:30.376 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
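The pattern this spec exercises: mounting a single file out of an atomically-updated ConfigMap volume via subPath, so one key lands at an exact path instead of the whole volume directory. A minimal sketch with illustrative names, keys and paths (none of these appear in the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath-configmap",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/config-file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/config-file",
					// SubPath selects a single entry inside the volume; the kubelet's
					// atomic writer swaps the underlying file without breaking the mount.
					SubPath: "config-file",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}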
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 12:59:25.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 7 12:59:25.549: INFO: Creating deployment "nginx-deployment"
May 7 12:59:25.579: INFO: Waiting for observed generation 1
May 7 12:59:27.589: INFO: Waiting for all required pods to come up
May 7 12:59:27.593: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
May 7 12:59:37.604: INFO: Waiting for deployment "nginx-deployment" to complete
May 7 12:59:37.610: INFO: Updating deployment "nginx-deployment" with a non-existent image
May 7 12:59:37.615: INFO: Updating deployment nginx-deployment
May 7 12:59:37.615: INFO: Waiting for observed generation 2
May 7 12:59:39.631: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 7 12:59:39.634: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 7 12:59:39.637: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 7 12:59:39.645: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 7 12:59:39.645: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 7 12:59:39.647: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 7 12:59:39.652: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
May 7 12:59:39.652: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
May 7 12:59:39.657: INFO: Updating deployment nginx-deployment
May 7 12:59:39.657: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
May 7 12:59:39.719: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 7 12:59:39.771: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
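Why 20 and 13: with maxSurge=3 the deployment may run at most 30+3=33 replicas in total (the deployment.kubernetes.io/max-replicas: 33 annotation in the dumps below). When the deployment is scaled from 10 to 30 while the rollout is wedged on the bad image, the controller distributes the 33-13=20 additional replicas across the two ReplicaSets roughly in proportion to their current sizes (8 and 5), landing on .spec.replicas of 20 and 13. A sketch of the deployment as the log describes it; the replica count, strategy, labels and images all appear in the dump below, while the main()/JSON-print wrapper is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(10) // scaled to 30 mid-rollout by the test
	maxUnavailable := intstr.FromInt(2)
	maxSurge := intstr.FromInt(3)
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment", Labels: map[string]string{"name": "nginx"}},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						// The test then switches this to the non-existent tag nginx:404
						// to wedge the rollout before scaling 10 -> 30.
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}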
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-07 12:59:38 +0000 UTC 2020-05-07 12:59:25 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-05-07 12:59:39 +0000 UTC 2020-05-07 12:59:39 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 7 12:59:40.168: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-4984,SelfLink:/apis/apps/v1/namespaces/deployment-4984/replicasets/nginx-deployment-55fb7cb77f,UID:dca653ad-6693-40bd-9efa-7120d00e6166,ResourceVersion:9526426,Generation:3,CreationTimestamp:2020-05-07 12:59:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 98fb9f25-d2f5-487a-9472-b13c512ed199 0xc002b37ef7 0xc002b37ef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 7 12:59:40.168: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 7 12:59:40.168: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-4984,SelfLink:/apis/apps/v1/namespaces/deployment-4984/replicasets/nginx-deployment-7b8c6f4498,UID:ee0ee4f4-74bc-4348-9f41-2ba718290d06,ResourceVersion:9526425,Generation:3,CreationTimestamp:2020-05-07 12:59:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 98fb9f25-d2f5-487a-9472-b13c512ed199 0xc002b37fc7 0xc002b37fc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 7 12:59:40.306: INFO: Pod "nginx-deployment-55fb7cb77f-4gslp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4gslp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-55fb7cb77f-4gslp,UID:7ee5efbc-89d1-4286-b49c-55abeb43d1d2,ResourceVersion:9526344,Generation:0,CreationTimestamp:2020-05-07 12:59:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f dca653ad-6693-40bd-9efa-7120d00e6166 0xc002475fb7 0xc002475fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d76030} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d76050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-05-07 12:59:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-07 12:59:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.306: INFO: Pod "nginx-deployment-55fb7cb77f-7thhz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7thhz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-55fb7cb77f-7thhz,UID:1209d5f7-3ff9-492e-b0ba-e2da56379139,ResourceVersion:9526369,Generation:0,CreationTimestamp:2020-05-07 12:59:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f dca653ad-6693-40bd-9efa-7120d00e6166 0xc002d76120 0xc002d76121}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d761a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d761c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-07 12:59:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.306: INFO: Pod "nginx-deployment-55fb7cb77f-8t48q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8t48q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-55fb7cb77f-8t48q,UID:9ba4c3f4-9fbd-45e3-ab24-74e9f2a70b09,ResourceVersion:9526415,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f dca653ad-6693-40bd-9efa-7120d00e6166 0xc002d76290 0xc002d76291}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d76310} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d76330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.306: INFO: Pod "nginx-deployment-55fb7cb77f-d7pv7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-d7pv7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-55fb7cb77f-d7pv7,UID:3b81e19e-3aa9-4ff1-892a-3ffa2a523f50,ResourceVersion:9526401,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f dca653ad-6693-40bd-9efa-7120d00e6166 0xc002d763b7 0xc002d763b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d76430} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d76450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.306: INFO: Pod "nginx-deployment-55fb7cb77f-gcd5l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gcd5l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-55fb7cb77f-gcd5l,UID:af4f2fe4-ddc8-48c9-90d7-68704a7ee139,ResourceVersion:9526416,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f dca653ad-6693-40bd-9efa-7120d00e6166 0xc002d764d7 0xc002d764d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d76550} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d76570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.307: INFO: Pod "nginx-deployment-55fb7cb77f-gqsfk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gqsfk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-55fb7cb77f-gqsfk,UID:8d87ad21-54f5-465a-a455-c52355e3f8ac,ResourceVersion:9526441,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f dca653ad-6693-40bd-9efa-7120d00e6166 0xc002d765f7 0xc002d765f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d76670} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d76690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-07 12:59:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.307: INFO: Pod "nginx-deployment-55fb7cb77f-jklvq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jklvq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-55fb7cb77f-jklvq,UID:7f9b5a96-b63a-46cf-83ce-50e8da463d72,ResourceVersion:9526347,Generation:0,CreationTimestamp:2020-05-07 12:59:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f dca653ad-6693-40bd-9efa-7120d00e6166 0xc002d76760 0xc002d76761}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d767e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d76800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:37 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-07 12:59:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.307: INFO: Pod "nginx-deployment-55fb7cb77f-qqxw6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qqxw6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-55fb7cb77f-qqxw6,UID:d79e039f-9553-4888-94ee-1290f3b5ea90,ResourceVersion:9526429,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f dca653ad-6693-40bd-9efa-7120d00e6166 0xc002d768d0 0xc002d768d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d76950} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d76970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.307: INFO: Pod "nginx-deployment-55fb7cb77f-qzbsv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qzbsv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-55fb7cb77f-qzbsv,UID:2fbeda3e-4919-4245-834c-0fbf5793e445,ResourceVersion:9526368,Generation:0,CreationTimestamp:2020-05-07 12:59:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-55fb7cb77f dca653ad-6693-40bd-9efa-7120d00e6166 0xc002d769f7 0xc002d769f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d76a70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d76a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-07 12:59:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.307: INFO: Pod "nginx-deployment-55fb7cb77f-rz2q2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rz2q2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-55fb7cb77f-rz2q2,UID:b416f389-be81-4794-b32a-73ecd2052c04,ResourceVersion:9526395,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f dca653ad-6693-40bd-9efa-7120d00e6166 0xc002d76b60 0xc002d76b61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] 
{map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d76be0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d76c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.307: INFO: Pod "nginx-deployment-55fb7cb77f-strtb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-strtb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-55fb7cb77f-strtb,UID:8dca008a-1ebc-41f2-8f29-9fc3ac85dcd6,ResourceVersion:9526354,Generation:0,CreationTimestamp:2020-05-07 12:59:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f dca653ad-6693-40bd-9efa-7120d00e6166 0xc002d76c87 0xc002d76c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d76d00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d76d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-07 12:59:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.307: INFO: Pod "nginx-deployment-55fb7cb77f-vfw2j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vfw2j,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-55fb7cb77f-vfw2j,UID:caff3911-eb2e-4327-9146-a19506a18262,ResourceVersion:9526413,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f dca653ad-6693-40bd-9efa-7120d00e6166 0xc002d76df0 0xc002d76df1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d76e70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d76e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.307: INFO: Pod "nginx-deployment-55fb7cb77f-z5hsb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z5hsb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-55fb7cb77f-z5hsb,UID:5641ff39-a39d-40ad-a8b8-ac2fde682faa,ResourceVersion:9526414,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f dca653ad-6693-40bd-9efa-7120d00e6166 0xc002d76f17 0xc002d76f18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d76f90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d76fb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.307: INFO: Pod "nginx-deployment-7b8c6f4498-42nl6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-42nl6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-42nl6,UID:712590fe-fdb3-475e-abc1-377c5695ca43,ResourceVersion:9526298,Generation:0,CreationTimestamp:2020-05-07 12:59:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d77037 0xc002d77038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d770b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d770d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.102,StartTime:2020-05-07 12:59:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-07 12:59:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9c25d33d4fee4d9a4fbf8c2cdea6f30e5fb2c39c9c9fc2fe7aee7bf4c89c74bf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.308: INFO: Pod "nginx-deployment-7b8c6f4498-46vln" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-46vln,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-46vln,UID:48165dcb-3701-46bf-9963-fc19c37d12c3,ResourceVersion:9526440,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d771a7 0xc002d771a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d77220} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d77240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-07 12:59:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.308: INFO: Pod "nginx-deployment-7b8c6f4498-4hmcl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4hmcl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-4hmcl,UID:6154f6cf-eb03-40f0-bbf7-32a8a2cbe4d4,ResourceVersion:9526297,Generation:0,CreationTimestamp:2020-05-07 12:59:25 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d77307 0xc002d77308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d77380} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d773a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.39,StartTime:2020-05-07 12:59:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-07 12:59:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9450a2f0b6fec739a92afead700ce3775ed7f1d022670df85620b1f3f20bca39}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.308: INFO: Pod "nginx-deployment-7b8c6f4498-6n5nq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6n5nq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-6n5nq,UID:58425944-b0d8-4ab3-9bf6-a304da44d443,ResourceVersion:9526422,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d77477 
0xc002d77478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d774f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d77510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.308: INFO: Pod "nginx-deployment-7b8c6f4498-824xr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-824xr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-824xr,UID:2de096a8-0da3-4f27-b87d-8cbaf359ae99,ResourceVersion:9526266,Generation:0,CreationTimestamp:2020-05-07 12:59:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d77597 0xc002d77598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d77610} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d77630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.36,StartTime:2020-05-07 12:59:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-07 12:59:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://45c9c4188e1077942aa54fa49f4823a6c163cfd791310d5caa067885d1177e5a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.308: INFO: Pod "nginx-deployment-7b8c6f4498-87qn5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-87qn5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-87qn5,UID:c5314079-12ab-4265-879a-df5dc8aa18cf,ResourceVersion:9526284,Generation:0,CreationTimestamp:2020-05-07 12:59:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d77707 0xc002d77708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d77780} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d777a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.101,StartTime:2020-05-07 12:59:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-07 12:59:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://815ca5dc0b696822d8eac45a769042c2be55fe8f19ffc816b730032f15d0f9b0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.308: INFO: Pod "nginx-deployment-7b8c6f4498-8l52n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8l52n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-8l52n,UID:ef2eee7e-a514-4168-b368-b46aee4db82f,ResourceVersion:9526424,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d77877 0xc002d77878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d778f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d77910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.308: INFO: Pod "nginx-deployment-7b8c6f4498-bqhbf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bqhbf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-bqhbf,UID:156266a6-3b48-48db-84a9-238bab9ff512,ResourceVersion:9526405,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d77997 0xc002d77998}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d77a10} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002d77a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.308: INFO: Pod "nginx-deployment-7b8c6f4498-cgmwb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cgmwb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-cgmwb,UID:996dc645-5415-4376-859a-11a21be6c413,ResourceVersion:9526420,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d77ab7 0xc002d77ab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d77b30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d77b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.309: INFO: Pod "nginx-deployment-7b8c6f4498-gx7rg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gx7rg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-gx7rg,UID:7b73060e-5749-4937-b9f6-41c0702fb115,ResourceVersion:9526305,Generation:0,CreationTimestamp:2020-05-07 12:59:25 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d77bd7 0xc002d77bd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d77c50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d77c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.103,StartTime:2020-05-07 12:59:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-07 12:59:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://33b90b3d410ce25c2532d4e882ecb299d260cd5a0b266acfe7f1804e9e82983e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.309: INFO: Pod "nginx-deployment-7b8c6f4498-h7zwl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h7zwl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-h7zwl,UID:8784031f-5c19-44a8-90a5-5a520c451d6c,ResourceVersion:9526406,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d77d47 
0xc002d77d48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d77dc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d77de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.309: INFO: Pod "nginx-deployment-7b8c6f4498-hjspk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hjspk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-hjspk,UID:f4a31add-7bf6-4d11-be43-c422e635a303,ResourceVersion:9526302,Generation:0,CreationTimestamp:2020-05-07 12:59:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d77e67 0xc002d77e68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d77ee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d77f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.38,StartTime:2020-05-07 12:59:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-07 12:59:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://79250c9419ebc6e275eb1ac4691cc80be7e7029ff6d6ab191b16c590a629fdbd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.309: INFO: Pod "nginx-deployment-7b8c6f4498-jdnc5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jdnc5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-jdnc5,UID:9746c2dc-490c-4a7c-8a85-d6776caf9528,ResourceVersion:9526432,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d77fd7 0xc002d77fd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d9e050} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d9e070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-07 12:59:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.309: INFO: Pod "nginx-deployment-7b8c6f4498-qh2q8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qh2q8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-qh2q8,UID:dc528358-e45d-473e-a38b-edd074413618,ResourceVersion:9526277,Generation:0,CreationTimestamp:2020-05-07 12:59:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d9e137 0xc002d9e138}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d9e1b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d9e1d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.37,StartTime:2020-05-07 12:59:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-07 12:59:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3f116d7fb0f24277c1be75e054bd113f939d1f5ca67873facb6e6ac06208c090}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.309: INFO: Pod "nginx-deployment-7b8c6f4498-r5fvh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r5fvh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-r5fvh,UID:da34d28a-4432-4179-8824-0dd3826f308c,ResourceVersion:9526398,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d9e2a7 0xc002d9e2a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d9e320} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d9e340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.309: INFO: Pod "nginx-deployment-7b8c6f4498-rl8k6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rl8k6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-rl8k6,UID:a16226e3-e738-4845-8275-f84eb3a13d69,ResourceVersion:9526428,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d9e3c7 0xc002d9e3c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d9e440} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002d9e460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-07 12:59:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.310: INFO: Pod "nginx-deployment-7b8c6f4498-svkw5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-svkw5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-svkw5,UID:7ea94ed0-7fee-4875-ae43-917d6ba3e94b,ResourceVersion:9526257,Generation:0,CreationTimestamp:2020-05-07 12:59:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d9e527 0xc002d9e528}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d9e5a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d9e5c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:31 +0000 UTC } {ContainersReady 
True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.100,StartTime:2020-05-07 12:59:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-07 12:59:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://defadbc27757670f1d1bf957b5984387f6be724c957d638cf330ebeef5cf2a5a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.310: INFO: Pod "nginx-deployment-7b8c6f4498-tdw58" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tdw58,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-tdw58,UID:e2a91c56-3e0a-4f49-af5d-c2ccb683e263,ResourceVersion:9526419,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d9e697 0xc002d9e698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d9e710} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d9e730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.310: INFO: Pod "nginx-deployment-7b8c6f4498-vm7gz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vm7gz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-vm7gz,UID:e7e313e2-628e-4521-a1e0-0829eb889c5c,ResourceVersion:9526402,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d9e7b7 0xc002d9e7b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d9e830} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d9e850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 12:59:40.310: INFO: Pod "nginx-deployment-7b8c6f4498-zkf45" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zkf45,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4984,SelfLink:/api/v1/namespaces/deployment-4984/pods/nginx-deployment-7b8c6f4498-zkf45,UID:6a8dd4ab-cae6-44bb-960e-a76848c37b49,ResourceVersion:9526423,Generation:0,CreationTimestamp:2020-05-07 12:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ee0ee4f4-74bc-4348-9f41-2ba718290d06 0xc002d9e8d7 0xc002d9e8d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tgjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tgjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tgjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d9e950} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d9e970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 12:59:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 12:59:40.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4984" for this suite. 
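For reference, the object these Pod dumps trace back to reads much more clearly as a manifest. A minimal sketch of the Deployment under test, reconstructed only from fields visible in the dumps (image, labels, zero termination grace period); the replica count and the scaling steps the suite performs are assumptions, as they are not shown in this excerpt:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: deployment-4984
spec:
  replicas: 10                      # assumed starting size; the spec then scales the Deployment
  selector:                         # and checks pods spread proportionally across ReplicaSets
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine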
May 7 13:00:02.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 13:00:02.535: INFO: namespace deployment-4984 deletion completed in 22.187792758s
• [SLOW TEST:37.032 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 13:00:02.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 7 13:00:02.640: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11f20756-96ba-4c58-8a9c-efa7ea3b1641" in namespace "projected-3138" to be "success or failure"
May 7 13:00:02.643: INFO: Pod "downwardapi-volume-11f20756-96ba-4c58-8a9c-efa7ea3b1641": Phase="Pending", Reason="", readiness=false. Elapsed: 2.931754ms
May 7 13:00:04.648: INFO: Pod "downwardapi-volume-11f20756-96ba-4c58-8a9c-efa7ea3b1641": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00770538s
May 7 13:00:06.652: INFO: Pod "downwardapi-volume-11f20756-96ba-4c58-8a9c-efa7ea3b1641": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012253865s
STEP: Saw pod success
May 7 13:00:06.653: INFO: Pod "downwardapi-volume-11f20756-96ba-4c58-8a9c-efa7ea3b1641" satisfied condition "success or failure"
May 7 13:00:06.656: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-11f20756-96ba-4c58-8a9c-efa7ea3b1641 container client-container:
STEP: delete the pod
May 7 13:00:06.680: INFO: Waiting for pod downwardapi-volume-11f20756-96ba-4c58-8a9c-efa7ea3b1641 to disappear
May 7 13:00:06.684: INFO: Pod downwardapi-volume-11f20756-96ba-4c58-8a9c-efa7ea3b1641 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 13:00:06.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3138" for this suite.
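The pod spec behind this projected-downwardAPI check is not printed in the log. A minimal sketch of the idea, using the container name from the log; the image, command, and file path are assumptions. Because the container declares no memory limit, the resourceFieldRef for limits.memory falls back to the node's allocatable memory, which is what the spec asserts:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example      # the suite generates a UUID-suffixed name
spec:
  restartPolicy: Never                  # lets the pod terminate and report "success or failure"
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29     # assumed; any image that can read a file works
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory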
May 7 13:00:12.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 13:00:12.797: INFO: namespace projected-3138 deletion completed in 6.110209839s
• [SLOW TEST:10.262 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 13:00:12.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 7 13:00:12.927: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af986cd1-0918-475e-8d1e-40cba8603c72" in namespace "downward-api-1352" to be "success or failure"
May 7 13:00:12.936: INFO: Pod "downwardapi-volume-af986cd1-0918-475e-8d1e-40cba8603c72": Phase="Pending", Reason="", readiness=false. Elapsed: 8.870564ms
May 7 13:00:14.939: INFO: Pod "downwardapi-volume-af986cd1-0918-475e-8d1e-40cba8603c72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01211168s
May 7 13:00:16.943: INFO: Pod "downwardapi-volume-af986cd1-0918-475e-8d1e-40cba8603c72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015479954s
STEP: Saw pod success
May 7 13:00:16.943: INFO: Pod "downwardapi-volume-af986cd1-0918-475e-8d1e-40cba8603c72" satisfied condition "success or failure"
May 7 13:00:16.944: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-af986cd1-0918-475e-8d1e-40cba8603c72 container client-container:
STEP: delete the pod
May 7 13:00:17.046: INFO: Waiting for pod downwardapi-volume-af986cd1-0918-475e-8d1e-40cba8603c72 to disappear
May 7 13:00:17.080: INFO: Pod downwardapi-volume-af986cd1-0918-475e-8d1e-40cba8603c72 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 13:00:17.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1352" for this suite.
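The downward-api volume variant differs from the projected one above only in how the volume is declared. A minimal sketch with the container name taken from the log and the rest assumed; fieldRef exposes metadata.name, so the mounted file contains exactly the pod's name:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example      # the suite generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29     # assumed
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name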
May 7 13:00:23.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 13:00:23.219: INFO: namespace downward-api-1352 deletion completed in 6.09263085s
• [SLOW TEST:10.421 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 13:00:23.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-6380/secret-test-1f6d43e3-96b0-4463-90d1-8d28833668bd
STEP: Creating a pod to test consume secrets
May 7 13:00:23.284: INFO: Waiting up to 5m0s for pod "pod-configmaps-5557f269-7cb9-4aa2-a0cc-02784297faa0" in namespace "secrets-6380" to be "success or failure"
May 7 13:00:23.323: INFO: Pod "pod-configmaps-5557f269-7cb9-4aa2-a0cc-02784297faa0": Phase="Pending", Reason="", readiness=false. Elapsed: 38.716712ms
May 7 13:00:25.327: INFO: Pod "pod-configmaps-5557f269-7cb9-4aa2-a0cc-02784297faa0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043155727s
May 7 13:00:27.332: INFO: Pod "pod-configmaps-5557f269-7cb9-4aa2-a0cc-02784297faa0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047612303s
STEP: Saw pod success
May 7 13:00:27.332: INFO: Pod "pod-configmaps-5557f269-7cb9-4aa2-a0cc-02784297faa0" satisfied condition "success or failure"
May 7 13:00:27.335: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-5557f269-7cb9-4aa2-a0cc-02784297faa0 container env-test:
STEP: delete the pod
May 7 13:00:27.371: INFO: Waiting for pod pod-configmaps-5557f269-7cb9-4aa2-a0cc-02784297faa0 to disappear
May 7 13:00:27.389: INFO: Pod pod-configmaps-5557f269-7cb9-4aa2-a0cc-02784297faa0 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 13:00:27.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6380" for this suite.
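The secret name and container name below come from the log; the environment variable and the secret's data key are placeholders, since the actual keys are not shown. A minimal sketch of consuming a Secret through the environment rather than through a volume:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example          # the generated name in the log carries a pod-configmaps- prefix even though the pod consumes a Secret
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29     # assumed
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA                       # placeholder variable name
      valueFrom:
        secretKeyRef:
          name: secret-test-1f6d43e3-96b0-4463-90d1-8d28833668bd
          key: data-1                         # placeholder key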
May 7 13:00:33.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 13:00:33.538: INFO: namespace secrets-6380 deletion completed in 6.146427685s
• [SLOW TEST:10.318 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 13:00:33.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-c869dcdc-3167-4afa-92ad-092c76f05db2
STEP: Creating a pod to test consume configMaps
May 7 13:00:33.638: INFO: Waiting up to 5m0s for pod "pod-configmaps-818585e5-1362-476c-ae52-4668a6156c8e" in namespace "configmap-699" to be "success or failure"
May 7 13:00:33.641: INFO: Pod "pod-configmaps-818585e5-1362-476c-ae52-4668a6156c8e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.424627ms
May 7 13:00:35.701: INFO: Pod "pod-configmaps-818585e5-1362-476c-ae52-4668a6156c8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063100927s
May 7 13:00:37.705: INFO: Pod "pod-configmaps-818585e5-1362-476c-ae52-4668a6156c8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067600143s
STEP: Saw pod success
May 7 13:00:37.705: INFO: Pod "pod-configmaps-818585e5-1362-476c-ae52-4668a6156c8e" satisfied condition "success or failure"
May 7 13:00:37.708: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-818585e5-1362-476c-ae52-4668a6156c8e container configmap-volume-test:
STEP: delete the pod
May 7 13:00:37.745: INFO: Waiting for pod pod-configmaps-818585e5-1362-476c-ae52-4668a6156c8e to disappear
May 7 13:00:37.762: INFO: Pod pod-configmaps-818585e5-1362-476c-ae52-4668a6156c8e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 13:00:37.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-699" for this suite.
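The ConfigMap name and container name below come from the log; the mount path and data key are placeholders. A minimal sketch of the volume-based consumption this spec exercises, where each key in the ConfigMap becomes a file under the mount path:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example          # the suite generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29     # assumed
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]   # placeholder key
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-c869dcdc-3167-4afa-92ad-092c76f05db2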
May 7 13:00:43.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:00:43.882: INFO: namespace configmap-699 deletion completed in 6.117331321s • [SLOW TEST:10.344 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:00:43.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 13:00:43.981: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 16.827514ms)
May 7 13:00:43.985: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.824382ms)
May 7 13:00:43.989: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.525503ms)
May 7 13:00:43.992: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.110179ms)
May 7 13:00:43.995: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.396568ms)
May 7 13:00:43.999: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.336735ms)
May 7 13:00:44.002: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.932466ms)
May 7 13:00:44.005: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.428831ms)
May 7 13:00:44.008: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.178946ms)
May 7 13:00:44.012: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.476707ms)
May 7 13:00:44.015: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.92666ms)
May 7 13:00:44.018: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.930319ms)
May 7 13:00:44.021: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.710143ms)
May 7 13:00:44.024: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.438749ms)
May 7 13:00:44.028: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.320517ms)
May 7 13:00:44.031: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.517511ms)
May 7 13:00:44.035: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.539548ms)
May 7 13:00:44.038: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.597296ms)
May 7 13:00:44.041: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.085669ms)
May 7 13:00:44.045: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.915287ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:00:44.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7543" for this suite. May 7 13:00:50.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:00:50.177: INFO: namespace proxy-7543 deletion completed in 6.128480757s • [SLOW TEST:6.295 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:00:50.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 13:01:14.339: INFO: Container started at 2020-05-07 13:00:53 +0000 UTC, pod became ready at 2020-05-07 13:01:13 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:01:14.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2123" for this suite.
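This readiness-probe spec asserts two things: the pod must not report Ready before the probe's initial delay has elapsed, and restartCount must stay 0. A probe stanza of that shape, as a minimal sketch (image, command, and timings are illustrative, not the suite's own):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-initial-delay
spec:
  containers:
  - name: test-webserver
    image: busybox:1.29                  # illustrative image
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["true"]                # succeeds on every probe
      initialDelaySeconds: 20            # illustrative; no Ready condition before this delay
      periodSeconds: 5

The roughly 20-second gap in the log between "Container started at ... 13:00:53" and "pod became ready at ... 13:01:13" is consistent with an initial delay of that order being honored.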
May 7 13:01:36.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:01:36.437: INFO: namespace container-probe-2123 deletion completed in 22.093531412s • [SLOW TEST:46.259 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:01:36.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 7 13:01:36.507: INFO: Waiting up to 5m0s for pod "downward-api-83002643-3284-4ef0-a749-7d486923197d" in namespace "downward-api-1211" to be "success or failure" May 7 13:01:36.558: INFO: Pod "downward-api-83002643-3284-4ef0-a749-7d486923197d": Phase="Pending", Reason="", readiness=false. Elapsed: 50.753131ms May 7 13:01:38.562: INFO: Pod "downward-api-83002643-3284-4ef0-a749-7d486923197d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0549592s May 7 13:01:40.566: INFO: Pod "downward-api-83002643-3284-4ef0-a749-7d486923197d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059461688s STEP: Saw pod success May 7 13:01:40.566: INFO: Pod "downward-api-83002643-3284-4ef0-a749-7d486923197d" satisfied condition "success or failure" May 7 13:01:40.569: INFO: Trying to get logs from node iruya-worker pod downward-api-83002643-3284-4ef0-a749-7d486923197d container dapi-container: STEP: delete the pod May 7 13:01:40.592: INFO: Waiting for pod downward-api-83002643-3284-4ef0-a749-7d486923197d to disappear May 7 13:01:40.596: INFO: Pod downward-api-83002643-3284-4ef0-a749-7d486923197d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:01:40.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1211" for this suite. 
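The downward API feature under test here exposes a container's own resource requests and limits as environment variables via resourceFieldRef. A minimal sketch with illustrative names and quantities:

apiVersion: v1
kind: Pod
metadata:
  name: dapi-container
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT                    # illustrative variable names
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory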
May 7 13:01:46.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:01:46.732: INFO: namespace downward-api-1211 deletion completed in 6.131087599s • [SLOW TEST:10.293 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:01:46.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 7 13:01:53.032: INFO: 0 pods remaining May 7 13:01:53.032: INFO: 0 pods has nil DeletionTimestamp May 7 13:01:53.032: INFO: STEP: Gathering metrics W0507 13:01:54.776583 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 7 13:01:54.776: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:01:54.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6786" for this suite. 
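The "deleteOptions" the spec title refers to is the propagationPolicy on the delete request: a Foreground delete keeps the owner (here, the rc) visible, pinned by the foregroundDeletion finalizer, until the garbage collector has removed all dependent pods. The suite sets this through the client library; an equivalent request body, sketched in YAML (clients send it as JSON):

apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground    # owner object stays until its dependents are gone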
May 7 13:02:02.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:02:02.977: INFO: namespace gc-6786 deletion completed in 8.137396307s • [SLOW TEST:16.245 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:02:02.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:03:03.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-628" for this suite. 
May 7 13:03:25.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:03:25.183: INFO: namespace container-probe-628 deletion completed in 22.089309944s • [SLOW TEST:82.206 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:03:25.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc May 7 13:03:25.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3212' May 7 13:03:25.616: INFO: stderr: "" May 7 13:03:25.616: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. May 7 13:03:26.621: INFO: Selector matched 1 pods for map[app:redis] May 7 13:03:26.621: INFO: Found 0 / 1 May 7 13:03:27.620: INFO: Selector matched 1 pods for map[app:redis] May 7 13:03:27.620: INFO: Found 0 / 1 May 7 13:03:28.621: INFO: Selector matched 1 pods for map[app:redis] May 7 13:03:28.621: INFO: Found 1 / 1 May 7 13:03:28.621: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 7 13:03:28.625: INFO: Selector matched 1 pods for map[app:redis] May 7 13:03:28.625: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 7 13:03:28.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-95tsp redis-master --namespace=kubectl-3212' May 7 13:03:28.738: INFO: stderr: "" May 7 13:03:28.738: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 07 May 13:03:28.355 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 May 13:03:28.355 # Server started, Redis version 3.2.12\n1:M 07 May 13:03:28.355 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 May 13:03:28.355 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 7 13:03:28.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-95tsp redis-master --namespace=kubectl-3212 --tail=1' May 7 13:03:28.834: INFO: stderr: "" May 7 13:03:28.834: INFO: stdout: "1:M 07 May 13:03:28.355 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 7 13:03:28.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-95tsp redis-master --namespace=kubectl-3212 --limit-bytes=1' May 7 13:03:28.948: INFO: stderr: "" May 7 13:03:28.948: INFO: stdout: " " STEP: exposing timestamps May 7 13:03:28.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-95tsp redis-master --namespace=kubectl-3212 --tail=1 --timestamps' May 7 13:03:29.051: INFO: stderr: "" May 7 13:03:29.051: INFO: stdout: "2020-05-07T13:03:28.355926717Z 1:M 07 May 13:03:28.355 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 7 13:03:31.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-95tsp redis-master --namespace=kubectl-3212 --since=1s' May 7 13:03:31.664: INFO: stderr: "" May 7 13:03:31.664: INFO: stdout: "" May 7 13:03:31.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-95tsp redis-master --namespace=kubectl-3212 --since=24h' May 7 13:03:31.775: INFO: stderr: "" May 7 13:03:31.775: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 07 May 13:03:28.355 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 May 13:03:28.355 # Server started, Redis version 3.2.12\n1:M 07 May 13:03:28.355 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 May 13:03:28.355 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources May 7 13:03:31.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3212' May 7 13:03:31.882: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 7 13:03:31.882: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 7 13:03:31.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-3212' May 7 13:03:31.978: INFO: stderr: "No resources found.\n" May 7 13:03:31.978: INFO: stdout: "" May 7 13:03:31.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-3212 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 7 13:03:32.065: INFO: stderr: "" May 7 13:03:32.065: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:03:32.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3212" for this suite. 
May 7 13:03:38.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:03:38.162: INFO: namespace kubectl-3212 deletion completed in 6.09408099s • [SLOW TEST:12.978 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:03:38.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-d6fbba6d-32f3-4580-976e-159881ccae48 in namespace container-probe-1268 May 7 13:03:42.260: INFO: Started pod busybox-d6fbba6d-32f3-4580-976e-159881ccae48 in namespace container-probe-1268 STEP: checking the pod's current state and verifying that restartCount is present May 7 13:03:42.263: INFO: Initial restart count of pod busybox-d6fbba6d-32f3-4580-976e-159881ccae48 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:07:42.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1268" for this suite. 
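The liveness probe exercised above keeps succeeding for the pod's whole life, so the asserted restartCount stays 0. A minimal sketch of that shape (names and timings illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]   # file exists for the container's lifetime
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # exits 0 every period, so the kubelet never restarts it
      initialDelaySeconds: 5
      periodSeconds: 5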
May 7 13:07:48.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:07:48.938: INFO: namespace container-probe-1268 deletion completed in 6.118075939s • [SLOW TEST:250.776 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:07:48.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 7 13:07:57.131: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 7 13:07:57.176: INFO: Pod pod-with-poststart-http-hook still exists May 7 13:07:59.176: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 7 13:07:59.181: INFO: Pod pod-with-poststart-http-hook still exists May 7 13:08:01.176: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 7 13:08:01.181: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:08:01.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4087" for this suite. 
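The postStart hook checked here fires an HTTP GET right after the container starts, against the handler pod created in the BeforeEach step. A minimal sketch; the host address, path, and port are illustrative placeholders for whatever the handler pod exposes:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: busybox:1.29                # illustrative image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        httpGet:
          host: 10.0.0.10              # illustrative: IP of the hook-handler pod
          path: /echo?msg=poststart
          port: 8080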
May 7 13:08:23.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:08:23.294: INFO: namespace container-lifecycle-hook-4087 deletion completed in 22.108801201s • [SLOW TEST:34.356 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:08:23.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-d8af522a-6932-43c5-8024-34010ba89457 STEP: Creating a pod to test consume secrets May 7 13:08:23.420: INFO: Waiting up to 5m0s for pod "pod-secrets-6df1c03e-826d-4363-8318-b85cb78854ef" in namespace "secrets-7651" to be "success or failure" May 7 13:08:23.432: INFO: Pod "pod-secrets-6df1c03e-826d-4363-8318-b85cb78854ef": Phase="Pending", Reason="", readiness=false. Elapsed: 12.182324ms May 7 13:08:25.438: INFO: Pod "pod-secrets-6df1c03e-826d-4363-8318-b85cb78854ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017504578s May 7 13:08:27.442: INFO: Pod "pod-secrets-6df1c03e-826d-4363-8318-b85cb78854ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021511646s STEP: Saw pod success May 7 13:08:27.442: INFO: Pod "pod-secrets-6df1c03e-826d-4363-8318-b85cb78854ef" satisfied condition "success or failure" May 7 13:08:27.444: INFO: Trying to get logs from node iruya-worker pod pod-secrets-6df1c03e-826d-4363-8318-b85cb78854ef container secret-volume-test: STEP: delete the pod May 7 13:08:27.524: INFO: Waiting for pod pod-secrets-6df1c03e-826d-4363-8318-b85cb78854ef to disappear May 7 13:08:27.553: INFO: Pod pod-secrets-6df1c03e-826d-4363-8318-b85cb78854ef no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:08:27.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7651" for this suite. 
May 7 13:08:33.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:08:33.643: INFO: namespace secrets-7651 deletion completed in 6.086728355s • [SLOW TEST:10.349 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:08:33.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 7 13:08:33.742: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b51ab673-dcaf-4589-83a7-7fce21491484" in namespace "projected-2132" to be "success or failure" May 7 13:08:33.745: INFO: Pod "downwardapi-volume-b51ab673-dcaf-4589-83a7-7fce21491484": Phase="Pending", Reason="", readiness=false. Elapsed: 3.353268ms May 7 13:08:35.750: INFO: Pod "downwardapi-volume-b51ab673-dcaf-4589-83a7-7fce21491484": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007739379s May 7 13:08:37.755: INFO: Pod "downwardapi-volume-b51ab673-dcaf-4589-83a7-7fce21491484": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012527369s STEP: Saw pod success May 7 13:08:37.755: INFO: Pod "downwardapi-volume-b51ab673-dcaf-4589-83a7-7fce21491484" satisfied condition "success or failure" May 7 13:08:37.758: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b51ab673-dcaf-4589-83a7-7fce21491484 container client-container: STEP: delete the pod May 7 13:08:37.776: INFO: Waiting for pod downwardapi-volume-b51ab673-dcaf-4589-83a7-7fce21491484 to disappear May 7 13:08:37.781: INFO: Pod downwardapi-volume-b51ab673-dcaf-4589-83a7-7fce21491484 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:08:37.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2132" for this suite. 
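"Mode on item file" means a per-item file mode inside a projected downwardAPI volume. A minimal sketch of the shape this spec verifies (illustrative names; the mode value is the kind of setting under test, not necessarily the suite's exact one):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400                 # the per-item mode asserted on the mounted file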
May 7 13:08:43.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:08:43.900: INFO: namespace projected-2132 deletion completed in 6.116697979s • [SLOW TEST:10.256 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:08:43.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 7 13:08:44.018: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10498420-5f02-4044-a72d-b8d8f26c8f03" in namespace "downward-api-4140" to be "success or failure" May 7 13:08:44.021: INFO: Pod "downwardapi-volume-10498420-5f02-4044-a72d-b8d8f26c8f03": Phase="Pending", Reason="", readiness=false. Elapsed: 3.4664ms May 7 13:08:46.026: INFO: Pod "downwardapi-volume-10498420-5f02-4044-a72d-b8d8f26c8f03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007931292s May 7 13:08:48.029: INFO: Pod "downwardapi-volume-10498420-5f02-4044-a72d-b8d8f26c8f03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011212058s STEP: Saw pod success May 7 13:08:48.029: INFO: Pod "downwardapi-volume-10498420-5f02-4044-a72d-b8d8f26c8f03" satisfied condition "success or failure" May 7 13:08:48.032: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-10498420-5f02-4044-a72d-b8d8f26c8f03 container client-container: STEP: delete the pod May 7 13:08:48.057: INFO: Waiting for pod downwardapi-volume-10498420-5f02-4044-a72d-b8d8f26c8f03 to disappear May 7 13:08:48.062: INFO: Pod downwardapi-volume-10498420-5f02-4044-a72d-b8d8f26c8f03 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:08:48.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4140" for this suite. 
May 7 13:08:54.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:08:54.166: INFO: namespace downward-api-4140 deletion completed in 6.10006448s • [SLOW TEST:10.265 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:08:54.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:09:24.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5215" for this suite. 
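The three containers above (terminate-cmd-rpa, -rpof, -rpn) vary the pod RestartPolicy, with the suffix naming it: Always, OnFailure, Never. The spec then checks RestartCount, Phase, Ready, and State against what that policy implies for the container's exit code. A Never-policy sketch (illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpn
spec:
  restartPolicy: Never               # rpa/rpof variants use Always / OnFailure
  containers:
  - name: terminate-cmd-rpn
    image: busybox:1.29
    command: ["sh", "-c", "exit 0"]  # observed Phase/Ready/State depend on this exit code and the policy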
May 7 13:09:30.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:09:30.909: INFO: namespace container-runtime-5215 deletion completed in 6.089231997s • [SLOW TEST:36.742 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:09:30.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:09:36.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4403" for this suite. 
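The adoption scenario above follows directly from label selection: a bare pod carrying the label is created first, then a ReplicationController whose selector matches it, and the controller takes ownership instead of creating a new replica. A minimal sketch (the image is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption               # the RC selector below matches this label
spec:
  containers:
  - name: pod-adoption
    image: nginx                     # illustrative image
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: nginx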
May 7 13:09:50.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:09:50.202: INFO: namespace replication-controller-4403 deletion completed in 14.118571991s • [SLOW TEST:19.293 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:09:50.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 7 13:09:50.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3764' May 7 13:09:53.124: INFO: stderr: "" May 7 13:09:53.124: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 7 13:09:54.131: INFO: Selector matched 1 pods for map[app:redis] May 7 13:09:54.132: INFO: Found 0 / 1 May 7 13:09:55.130: INFO: Selector matched 1 pods for map[app:redis] May 7 13:09:55.130: INFO: Found 0 / 1 May 7 13:09:56.182: INFO: Selector matched 1 pods for map[app:redis] May 7 13:09:56.182: INFO: Found 0 / 1 May 7 13:09:57.129: INFO: Selector matched 1 pods for map[app:redis] May 7 13:09:57.130: INFO: Found 1 / 1 May 7 13:09:57.130: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 7 13:09:57.133: INFO: Selector matched 1 pods for map[app:redis] May 7 13:09:57.133: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 7 13:09:57.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-vjgdk --namespace=kubectl-3764 -p {"metadata":{"annotations":{"x":"y"}}}' May 7 13:09:57.253: INFO: stderr: "" May 7 13:09:57.253: INFO: stdout: "pod/redis-master-vjgdk patched\n" STEP: checking annotations May 7 13:09:57.262: INFO: Selector matched 1 pods for map[app:redis] May 7 13:09:57.262: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:09:57.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3764" for this suite. 
May 7 13:10:19.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:10:19.370: INFO: namespace kubectl-3764 deletion completed in 22.104473811s • [SLOW TEST:29.167 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:10:19.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 7 13:10:23.994: INFO: Successfully updated pod "annotationupdatef7621c52-480b-4462-bbe9-18411234a1f7" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:10:26.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8313" for this suite. 
May 7 13:10:48.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:10:48.147: INFO: namespace projected-8313 deletion completed in 22.119568463s • [SLOW TEST:28.775 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:10:48.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1404 STEP: creating a selector STEP: Creating the service pods in kubernetes May 7 13:10:48.242: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 7 13:11:14.392: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.131 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1404 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 13:11:14.392: INFO: >>> kubeConfig: /root/.kube/config I0507 13:11:14.435211 6 log.go:172] (0xc00238c8f0) (0xc00300af00) Create stream I0507 13:11:14.435253 6 log.go:172] (0xc00238c8f0) (0xc00300af00) Stream added, broadcasting: 1 I0507 13:11:14.438247 6 log.go:172] (0xc00238c8f0) Reply frame received for 1 I0507 13:11:14.438297 6 log.go:172] (0xc00238c8f0) (0xc000da3900) Create stream I0507 13:11:14.438312 6 log.go:172] (0xc00238c8f0) (0xc000da3900) Stream added, broadcasting: 3 I0507 13:11:14.439307 6 log.go:172] (0xc00238c8f0) Reply frame received for 3 I0507 13:11:14.439359 6 log.go:172] (0xc00238c8f0) (0xc00300afa0) Create stream I0507 13:11:14.439391 6 log.go:172] (0xc00238c8f0) (0xc00300afa0) Stream added, broadcasting: 5 I0507 13:11:14.440387 6 log.go:172] (0xc00238c8f0) Reply frame received for 5 I0507 13:11:15.525650 6 log.go:172] (0xc00238c8f0) Data frame received for 3 I0507 13:11:15.525698 6 log.go:172] (0xc000da3900) (3) Data frame handling I0507 13:11:15.525726 6 log.go:172] (0xc000da3900) (3) Data frame sent I0507 13:11:15.526243 6 log.go:172] (0xc00238c8f0) Data frame received for 5 I0507 13:11:15.526284 6 log.go:172] (0xc00238c8f0) Data frame received for 3 I0507 13:11:15.526337 6 log.go:172] (0xc000da3900) (3) Data frame handling I0507 13:11:15.526364 6 log.go:172] (0xc00300afa0) (5) Data frame handling I0507 13:11:15.528260 6 log.go:172] (0xc00238c8f0) Data frame received for 1 I0507 13:11:15.528296 6 log.go:172] (0xc00300af00) (1) Data frame handling I0507 13:11:15.528317 6 log.go:172] (0xc00300af00) (1) Data frame sent I0507 
13:11:15.528346 6 log.go:172] (0xc00238c8f0) (0xc00300af00) Stream removed, broadcasting: 1 I0507 13:11:15.528375 6 log.go:172] (0xc00238c8f0) Go away received I0507 13:11:15.529394 6 log.go:172] (0xc00238c8f0) (0xc00300af00) Stream removed, broadcasting: 1 I0507 13:11:15.529418 6 log.go:172] (0xc00238c8f0) (0xc000da3900) Stream removed, broadcasting: 3 I0507 13:11:15.529430 6 log.go:172] (0xc00238c8f0) (0xc00300afa0) Stream removed, broadcasting: 5 May 7 13:11:15.529: INFO: Found all expected endpoints: [netserver-0] May 7 13:11:15.533: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.70 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1404 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 13:11:15.533: INFO: >>> kubeConfig: /root/.kube/config I0507 13:11:15.568373 6 log.go:172] (0xc00198cdc0) (0xc000da3ea0) Create stream I0507 13:11:15.568419 6 log.go:172] (0xc00198cdc0) (0xc000da3ea0) Stream added, broadcasting: 1 I0507 13:11:15.571176 6 log.go:172] (0xc00198cdc0) Reply frame received for 1 I0507 13:11:15.571232 6 log.go:172] (0xc00198cdc0) (0xc00235a000) Create stream I0507 13:11:15.571248 6 log.go:172] (0xc00198cdc0) (0xc00235a000) Stream added, broadcasting: 3 I0507 13:11:15.572101 6 log.go:172] (0xc00198cdc0) Reply frame received for 3 I0507 13:11:15.572140 6 log.go:172] (0xc00198cdc0) (0xc00300b040) Create stream I0507 13:11:15.572154 6 log.go:172] (0xc00198cdc0) (0xc00300b040) Stream added, broadcasting: 5 I0507 13:11:15.573324 6 log.go:172] (0xc00198cdc0) Reply frame received for 5 I0507 13:11:16.632716 6 log.go:172] (0xc00198cdc0) Data frame received for 5 I0507 13:11:16.632783 6 log.go:172] (0xc00300b040) (5) Data frame handling I0507 13:11:16.632826 6 log.go:172] (0xc00198cdc0) Data frame received for 3 I0507 13:11:16.632848 6 log.go:172] (0xc00235a000) (3) Data frame handling I0507 13:11:16.632878 6 log.go:172] (0xc00235a000) (3) Data frame sent I0507 13:11:16.633896 6 log.go:172] (0xc00198cdc0) Data frame received for 3 I0507 13:11:16.633929 6 log.go:172] (0xc00235a000) (3) Data frame handling I0507 13:11:16.635833 6 log.go:172] (0xc00198cdc0) Data frame received for 1 I0507 13:11:16.635865 6 log.go:172] (0xc000da3ea0) (1) Data frame handling I0507 13:11:16.635878 6 log.go:172] (0xc000da3ea0) (1) Data frame sent I0507 13:11:16.635892 6 log.go:172] (0xc00198cdc0) (0xc000da3ea0) Stream removed, broadcasting: 1 I0507 13:11:16.635933 6 log.go:172] (0xc00198cdc0) Go away received I0507 13:11:16.636007 6 log.go:172] (0xc00198cdc0) (0xc000da3ea0) Stream removed, broadcasting: 1 I0507 13:11:16.636036 6 log.go:172] (0xc00198cdc0) (0xc00235a000) Stream removed, broadcasting: 3 I0507 13:11:16.636068 6 log.go:172] (0xc00198cdc0) (0xc00300b040) Stream removed, broadcasting: 5 May 7 13:11:16.636: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:11:16.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1404" for this suite. 
May 7 13:11:40.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:11:40.741: INFO: namespace pod-network-test-1404 deletion completed in 24.100468938s • [SLOW TEST:52.594 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:11:40.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 7 13:11:44.869: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:11:45.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4895" for this suite. 
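The point of this spec is that FallbackToLogsOnError only substitutes container logs when the container fails; since the pod succeeds, the message comes from the termination-log file, and the log shows the expected "OK". A minimal sketch of that wiring (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-from-file
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox:1.29
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]  # exits 0, so the file content is used
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError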
May 7 13:11:51.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:11:51.146: INFO: namespace container-runtime-4895 deletion completed in 6.098815347s • [SLOW TEST:10.403 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:11:51.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-ea95118e-217b-474e-8ac4-1374f4348595 STEP: Creating a pod to test consume secrets May 7 13:11:51.217: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ed8ce0ff-f204-4861-a3bc-fc599356bb3c" in namespace "projected-2053" to be "success or failure" May 7 13:11:51.238: INFO: Pod "pod-projected-secrets-ed8ce0ff-f204-4861-a3bc-fc599356bb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.655541ms May 7 13:11:53.242: INFO: Pod "pod-projected-secrets-ed8ce0ff-f204-4861-a3bc-fc599356bb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024455755s May 7 13:11:55.246: INFO: Pod "pod-projected-secrets-ed8ce0ff-f204-4861-a3bc-fc599356bb3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028821335s STEP: Saw pod success May 7 13:11:55.246: INFO: Pod "pod-projected-secrets-ed8ce0ff-f204-4861-a3bc-fc599356bb3c" satisfied condition "success or failure" May 7 13:11:55.249: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-ed8ce0ff-f204-4861-a3bc-fc599356bb3c container projected-secret-volume-test: STEP: delete the pod May 7 13:11:55.300: INFO: Waiting for pod pod-projected-secrets-ed8ce0ff-f204-4861-a3bc-fc599356bb3c to disappear May 7 13:11:55.311: INFO: Pod pod-projected-secrets-ed8ce0ff-f204-4861-a3bc-fc599356bb3c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:11:55.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2053" for this suite. 
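Here defaultMode applies to the files materialized from a projected secret source; the test container lists the mount and the spec checks the resulting permissions. A minimal sketch with illustrative names (the suite generates unique secret and pod names):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["ls", "-l", "/etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400              # mode asserted on the mounted files
      sources:
      - secret:
          name: my-secret            # illustrative; the suite uses a generated name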
May 7 13:12:01.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:12:01.407: INFO: namespace projected-2053 deletion completed in 6.09224535s • [SLOW TEST:10.259 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:12:01.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 7 13:12:01.520: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:01.527: INFO: Number of nodes with available pods: 0 May 7 13:12:01.527: INFO: Node iruya-worker is running more than one daemon pod May 7 13:12:02.587: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:02.590: INFO: Number of nodes with available pods: 0 May 7 13:12:02.590: INFO: Node iruya-worker is running more than one daemon pod May 7 13:12:03.531: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:03.535: INFO: Number of nodes with available pods: 0 May 7 13:12:03.535: INFO: Node iruya-worker is running more than one daemon pod May 7 13:12:04.542: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:04.544: INFO: Number of nodes with available pods: 0 May 7 13:12:04.544: INFO: Node iruya-worker is running more than one daemon pod May 7 13:12:05.532: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:05.535: INFO: Number of nodes with available pods: 0 May 7 13:12:05.535: INFO: Node iruya-worker is running more than one daemon pod May 7 13:12:06.532: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:06.535: INFO: Number of nodes with available pods: 2 May 7 
13:12:06.535: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 7 13:12:06.582: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:06.585: INFO: Number of nodes with available pods: 1 May 7 13:12:06.585: INFO: Node iruya-worker2 is running more than one daemon pod May 7 13:12:07.590: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:07.592: INFO: Number of nodes with available pods: 1 May 7 13:12:07.592: INFO: Node iruya-worker2 is running more than one daemon pod May 7 13:12:08.591: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:08.594: INFO: Number of nodes with available pods: 1 May 7 13:12:08.594: INFO: Node iruya-worker2 is running more than one daemon pod May 7 13:12:09.590: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:09.594: INFO: Number of nodes with available pods: 1 May 7 13:12:09.594: INFO: Node iruya-worker2 is running more than one daemon pod May 7 13:12:10.590: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:10.594: INFO: Number of nodes with available pods: 1 May 7 13:12:10.594: INFO: Node iruya-worker2 is running more than one daemon pod May 7 13:12:11.591: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:11.595: INFO: Number of nodes with available pods: 1 May 7 13:12:11.595: INFO: Node iruya-worker2 is running more than one daemon pod May 7 13:12:12.591: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:12.595: INFO: Number of nodes with available pods: 1 May 7 13:12:12.595: INFO: Node iruya-worker2 is running more than one daemon pod May 7 13:12:13.591: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:13.594: INFO: Number of nodes with available pods: 1 May 7 13:12:13.594: INFO: Node iruya-worker2 is running more than one daemon pod May 7 13:12:14.590: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:14.594: INFO: Number of nodes with available pods: 1 May 7 13:12:14.594: INFO: Node iruya-worker2 is running more than one daemon pod May 7 13:12:15.591: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:12:15.594: INFO: Number of nodes with available pods: 2 May 7 13:12:15.595: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] 
Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2156, will wait for the garbage collector to delete the pods May 7 13:12:15.658: INFO: Deleting DaemonSet.extensions daemon-set took: 6.963236ms May 7 13:12:15.959: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.287507ms May 7 13:12:21.863: INFO: Number of nodes with available pods: 0 May 7 13:12:21.863: INFO: Number of running nodes: 0, number of available pods: 0 May 7 13:12:21.869: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2156/daemonsets","resourceVersion":"9528955"},"items":null} May 7 13:12:21.872: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2156/pods","resourceVersion":"9528955"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:12:21.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2156" for this suite. May 7 13:12:27.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:12:27.974: INFO: namespace daemonsets-2156 deletion completed in 6.090256322s • [SLOW TEST:26.567 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:12:27.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-a9dafacf-df8c-4d27-a0d6-5802782c3b10 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:12:32.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3776" for this suite. 
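The binary-data case above boils down to a ConfigMap carrying both data and binaryData keys, mounted into a pod; a minimal sketch (the names, base64 payload, and command are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-example   # illustrative name
data:
  text-data: "some text"
binaryData:
  binary-data: 3q2+7w==   # base64 for the raw bytes 0xde 0xad 0xbe 0xef
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-binary-pod
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-binary-example
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    # Both keys surface as files; the binary key is written out verbatim, not re-encoded.
    command: ["/bin/sh", "-c", "cat /etc/configmap-volume/text-data; od -c /etc/configmap-volume/binary-data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume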
May 7 13:12:54.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:12:54.165: INFO: namespace configmap-3776 deletion completed in 22.091780675s • [SLOW TEST:26.191 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:12:54.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 7 13:12:54.202: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a9ba107-0cda-48b8-bf83-6db14b668f15" in namespace "projected-9910" to be "success or failure" May 7 13:12:54.218: INFO: Pod "downwardapi-volume-0a9ba107-0cda-48b8-bf83-6db14b668f15": Phase="Pending", Reason="", readiness=false. Elapsed: 16.612569ms May 7 13:12:56.223: INFO: Pod "downwardapi-volume-0a9ba107-0cda-48b8-bf83-6db14b668f15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021484106s May 7 13:12:58.227: INFO: Pod "downwardapi-volume-0a9ba107-0cda-48b8-bf83-6db14b668f15": Phase="Running", Reason="", readiness=true. Elapsed: 4.025799881s May 7 13:13:00.232: INFO: Pod "downwardapi-volume-0a9ba107-0cda-48b8-bf83-6db14b668f15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03053473s STEP: Saw pod success May 7 13:13:00.232: INFO: Pod "downwardapi-volume-0a9ba107-0cda-48b8-bf83-6db14b668f15" satisfied condition "success or failure" May 7 13:13:00.236: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-0a9ba107-0cda-48b8-bf83-6db14b668f15 container client-container: STEP: delete the pod May 7 13:13:00.269: INFO: Waiting for pod downwardapi-volume-0a9ba107-0cda-48b8-bf83-6db14b668f15 to disappear May 7 13:13:00.298: INFO: Pod downwardapi-volume-0a9ba107-0cda-48b8-bf83-6db14b668f15 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:13:00.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9910" for this suite. 
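A compact sketch of the projected downwardAPI podname pattern this spec exercises (mount path and names are illustrative): the pod's own name is projected into a file via a fieldRef.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-example   # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    # Prints the pod's own name, read back from the projected file.
    command: ["/bin/sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo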
May 7 13:13:06.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:13:06.412: INFO: namespace projected-9910 deletion completed in 6.109314461s • [SLOW TEST:12.246 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:13:06.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 7 13:13:06.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6130' May 7 13:13:06.573: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 7 13:13:06.573: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller May 7 13:13:06.607: INFO: scanned /root for discovery docs: May 7 13:13:06.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6130' May 7 13:13:22.531: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 7 13:13:22.531: INFO: stdout: "Created e2e-test-nginx-rc-35bcd68b78688afcbcd124fa84164cfd\nScaling up e2e-test-nginx-rc-35bcd68b78688afcbcd124fa84164cfd from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-35bcd68b78688afcbcd124fa84164cfd up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-35bcd68b78688afcbcd124fa84164cfd to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 7 13:13:22.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6130' May 7 13:13:22.634: INFO: stderr: "" May 7 13:13:22.634: INFO: stdout: "e2e-test-nginx-rc-35bcd68b78688afcbcd124fa84164cfd-v28mm " May 7 13:13:22.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-35bcd68b78688afcbcd124fa84164cfd-v28mm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6130' May 7 13:13:22.765: INFO: stderr: "" May 7 13:13:22.765: INFO: stdout: "true" May 7 13:13:22.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-35bcd68b78688afcbcd124fa84164cfd-v28mm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6130' May 7 13:13:22.862: INFO: stderr: "" May 7 13:13:22.862: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 7 13:13:22.862: INFO: e2e-test-nginx-rc-35bcd68b78688afcbcd124fa84164cfd-v28mm is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 May 7 13:13:22.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6130' May 7 13:13:22.970: INFO: stderr: "" May 7 13:13:22.970: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:13:22.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6130" for this suite.
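The --generator=run/v1 form used above creates a bare ReplicationController (the log confirms "replicationcontroller/e2e-test-nginx-rc created"); a rough manifest equivalent, with defaulted fields omitted, would look like this sketch. rolling-update then creates the hash-suffixed copy seen in the log, scales it up while scaling the original down, and finally renames it back.

apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
  labels:
    run: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine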
May 7 13:13:45.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:13:45.082: INFO: namespace kubectl-6130 deletion completed in 22.106649855s • [SLOW TEST:38.669 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:13:45.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 7 13:13:45.197: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11d326bc-6cd1-43b6-b734-4d5a32f06254" in namespace "downward-api-3707" to be "success or failure" May 7 13:13:45.200: INFO: Pod "downwardapi-volume-11d326bc-6cd1-43b6-b734-4d5a32f06254": Phase="Pending", Reason="", readiness=false. Elapsed: 2.760976ms May 7 13:13:47.215: INFO: Pod "downwardapi-volume-11d326bc-6cd1-43b6-b734-4d5a32f06254": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017704117s May 7 13:13:49.269: INFO: Pod "downwardapi-volume-11d326bc-6cd1-43b6-b734-4d5a32f06254": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07188488s STEP: Saw pod success May 7 13:13:49.269: INFO: Pod "downwardapi-volume-11d326bc-6cd1-43b6-b734-4d5a32f06254" satisfied condition "success or failure" May 7 13:13:49.273: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-11d326bc-6cd1-43b6-b734-4d5a32f06254 container client-container: STEP: delete the pod May 7 13:13:49.291: INFO: Waiting for pod downwardapi-volume-11d326bc-6cd1-43b6-b734-4d5a32f06254 to disappear May 7 13:13:49.296: INFO: Pod downwardapi-volume-11d326bc-6cd1-43b6-b734-4d5a32f06254 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:13:49.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3707" for this suite. 
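Setting a mode on an individual downward API item, as this spec checks, looks roughly like the following (names, path, and the 0400 value are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-item-mode-example   # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400   # per-item file mode, overrides the volume's defaultMode
        fieldRef:
          fieldPath: metadata.name
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    # ls -l should show the file created with mode 0400.
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo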
May 7 13:13:55.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:13:55.384: INFO: namespace downward-api-3707 deletion completed in 6.084551646s • [SLOW TEST:10.302 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:13:55.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 7 13:13:55.505: INFO: Waiting up to 5m0s for pod "pod-af05c8bc-2bcb-4d42-9396-241b4632ff37" in namespace "emptydir-8223" to be "success or failure" May 7 13:13:55.512: INFO: Pod "pod-af05c8bc-2bcb-4d42-9396-241b4632ff37": Phase="Pending", Reason="", readiness=false. Elapsed: 7.230225ms May 7 13:13:57.516: INFO: Pod "pod-af05c8bc-2bcb-4d42-9396-241b4632ff37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011066507s May 7 13:13:59.520: INFO: Pod "pod-af05c8bc-2bcb-4d42-9396-241b4632ff37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015231718s STEP: Saw pod success May 7 13:13:59.521: INFO: Pod "pod-af05c8bc-2bcb-4d42-9396-241b4632ff37" satisfied condition "success or failure" May 7 13:13:59.523: INFO: Trying to get logs from node iruya-worker pod pod-af05c8bc-2bcb-4d42-9396-241b4632ff37 container test-container: STEP: delete the pod May 7 13:13:59.570: INFO: Waiting for pod pod-af05c8bc-2bcb-4d42-9396-241b4632ff37 to disappear May 7 13:13:59.585: INFO: Pod pod-af05c8bc-2bcb-4d42-9396-241b4632ff37 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:13:59.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8223" for this suite. 
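A rough sketch of the shape of this scenario; the user ID, command, and checks are illustrative only (the actual test uses a dedicated mount-test image that creates the path with 0777 and verifies ownership, mode, and filesystem type):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-nonroot   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001   # non-root, matching the (non-root,...) case
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed emptyDir
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # The emptyDir is created world-writable, so a non-root user can write to it.
    command: ["/bin/sh", "-c", "ls -ld /test-volume && echo hello > /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume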
May 7 13:14:05.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:14:05.672: INFO: namespace emptydir-8223 deletion completed in 6.083932664s • [SLOW TEST:10.287 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:14:05.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 7 13:14:05.758: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5485a6c-f3e0-4b39-a6fe-9b7c889e55c9" in namespace "projected-7948" to be "success or failure" May 7 13:14:05.775: INFO: Pod "downwardapi-volume-a5485a6c-f3e0-4b39-a6fe-9b7c889e55c9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.57944ms May 7 13:14:07.779: INFO: Pod "downwardapi-volume-a5485a6c-f3e0-4b39-a6fe-9b7c889e55c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02057963s May 7 13:14:09.783: INFO: Pod "downwardapi-volume-a5485a6c-f3e0-4b39-a6fe-9b7c889e55c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024874159s STEP: Saw pod success May 7 13:14:09.783: INFO: Pod "downwardapi-volume-a5485a6c-f3e0-4b39-a6fe-9b7c889e55c9" satisfied condition "success or failure" May 7 13:14:09.787: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a5485a6c-f3e0-4b39-a6fe-9b7c889e55c9 container client-container: STEP: delete the pod May 7 13:14:09.825: INFO: Waiting for pod downwardapi-volume-a5485a6c-f3e0-4b39-a6fe-9b7c889e55c9 to disappear May 7 13:14:09.844: INFO: Pod downwardapi-volume-a5485a6c-f3e0-4b39-a6fe-9b7c889e55c9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:14:09.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7948" for this suite. 
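Exposing a container's memory limit through a projected downward API volume, as this spec does, uses a resourceFieldRef item; a minimal sketch (names, paths, and the 64Mi limit are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-example   # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container   # required when used in a volume
              resource: limits.memory
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    # Prints the limit in bytes, as materialized in the projected file.
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo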
May 7 13:14:15.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:14:15.978: INFO: namespace projected-7948 deletion completed in 6.130605858s • [SLOW TEST:10.306 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:14:15.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-4f8e8e92-ae97-4bee-8703-49e54bba4b2d STEP: Creating a pod to test consume secrets May 7 13:14:16.161: INFO: Waiting up to 5m0s for pod "pod-secrets-c5d7e312-ce83-45e7-9377-9e706727478c" in namespace "secrets-9362" to be "success or failure" May 7 13:14:16.200: INFO: Pod "pod-secrets-c5d7e312-ce83-45e7-9377-9e706727478c": Phase="Pending", Reason="", readiness=false. Elapsed: 39.492242ms May 7 13:14:18.205: INFO: Pod "pod-secrets-c5d7e312-ce83-45e7-9377-9e706727478c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043709889s May 7 13:14:20.208: INFO: Pod "pod-secrets-c5d7e312-ce83-45e7-9377-9e706727478c": Phase="Running", Reason="", readiness=true. Elapsed: 4.046863954s May 7 13:14:22.212: INFO: Pod "pod-secrets-c5d7e312-ce83-45e7-9377-9e706727478c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051314056s STEP: Saw pod success May 7 13:14:22.212: INFO: Pod "pod-secrets-c5d7e312-ce83-45e7-9377-9e706727478c" satisfied condition "success or failure" May 7 13:14:22.216: INFO: Trying to get logs from node iruya-worker pod pod-secrets-c5d7e312-ce83-45e7-9377-9e706727478c container secret-volume-test: STEP: delete the pod May 7 13:14:22.271: INFO: Waiting for pod pod-secrets-c5d7e312-ce83-45e7-9377-9e706727478c to disappear May 7 13:14:22.279: INFO: Pod pod-secrets-c5d7e312-ce83-45e7-9377-9e706727478c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:14:22.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9362" for this suite. May 7 13:14:28.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:14:28.376: INFO: namespace secrets-9362 deletion completed in 6.093916347s STEP: Destroying namespace "secret-namespace-9534" for this suite. 
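The point of this spec is simply that secret names are namespace-scoped; a sketch with illustrative namespace, key, and value names. A pod in namespace-a that mounts secret-test sees only value-from-namespace-a; the identically named secret in namespace-b never interferes.

apiVersion: v1
kind: Secret
metadata:
  name: secret-test
  namespace: namespace-a   # illustrative namespace
stringData:
  data-1: value-from-namespace-a
---
apiVersion: v1
kind: Secret
metadata:
  name: secret-test       # same name as above
  namespace: namespace-b   # but a different namespace
stringData:
  data-1: value-from-namespace-b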
May 7 13:14:34.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:14:34.465: INFO: namespace secret-namespace-9534 deletion completed in 6.089063616s • [SLOW TEST:18.487 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:14:34.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 7 13:14:34.549: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8446,SelfLink:/api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-configmap-a,UID:f58464b8-1bc2-4f45-9735-e72e82855bc4,ResourceVersion:9529487,Generation:0,CreationTimestamp:2020-05-07 13:14:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 7 13:14:34.549: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8446,SelfLink:/api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-configmap-a,UID:f58464b8-1bc2-4f45-9735-e72e82855bc4,ResourceVersion:9529487,Generation:0,CreationTimestamp:2020-05-07 13:14:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 7 13:14:44.558: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8446,SelfLink:/api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-configmap-a,UID:f58464b8-1bc2-4f45-9735-e72e82855bc4,ResourceVersion:9529507,Generation:0,CreationTimestamp:2020-05-07 13:14:34 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 7 13:14:44.558: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8446,SelfLink:/api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-configmap-a,UID:f58464b8-1bc2-4f45-9735-e72e82855bc4,ResourceVersion:9529507,Generation:0,CreationTimestamp:2020-05-07 13:14:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 7 13:14:54.566: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8446,SelfLink:/api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-configmap-a,UID:f58464b8-1bc2-4f45-9735-e72e82855bc4,ResourceVersion:9529527,Generation:0,CreationTimestamp:2020-05-07 13:14:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 7 13:14:54.567: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8446,SelfLink:/api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-configmap-a,UID:f58464b8-1bc2-4f45-9735-e72e82855bc4,ResourceVersion:9529527,Generation:0,CreationTimestamp:2020-05-07 13:14:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 7 13:15:04.574: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8446,SelfLink:/api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-configmap-a,UID:f58464b8-1bc2-4f45-9735-e72e82855bc4,ResourceVersion:9529547,Generation:0,CreationTimestamp:2020-05-07 13:14:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 7 13:15:04.574: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8446,SelfLink:/api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-configmap-a,UID:f58464b8-1bc2-4f45-9735-e72e82855bc4,ResourceVersion:9529547,Generation:0,CreationTimestamp:2020-05-07 13:14:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 7 13:15:14.582: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8446,SelfLink:/api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-configmap-b,UID:d69c24f4-b873-4671-8aac-e0a26caa10ec,ResourceVersion:9529567,Generation:0,CreationTimestamp:2020-05-07 13:15:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 7 13:15:14.583: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8446,SelfLink:/api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-configmap-b,UID:d69c24f4-b873-4671-8aac-e0a26caa10ec,ResourceVersion:9529567,Generation:0,CreationTimestamp:2020-05-07 13:15:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 7 13:15:24.590: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8446,SelfLink:/api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-configmap-b,UID:d69c24f4-b873-4671-8aac-e0a26caa10ec,ResourceVersion:9529588,Generation:0,CreationTimestamp:2020-05-07 13:15:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 7 13:15:24.591: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8446,SelfLink:/api/v1/namespaces/watch-8446/configmaps/e2e-watch-test-configmap-b,UID:d69c24f4-b873-4671-8aac-e0a26caa10ec,ResourceVersion:9529588,Generation:0,CreationTimestamp:2020-05-07 13:15:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:15:34.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8446" for this suite. May 7 13:15:40.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:15:40.696: INFO: namespace watch-8446 deletion completed in 6.100955313s • [SLOW TEST:66.231 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:15:40.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 7 13:15:40.804: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c53b5f2-7303-45ae-a3bb-1b2da20f4cc2" in namespace "downward-api-1535" to be "success or failure" May 7 13:15:40.820: INFO: Pod "downwardapi-volume-3c53b5f2-7303-45ae-a3bb-1b2da20f4cc2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.804369ms May 7 13:15:42.825: INFO: Pod "downwardapi-volume-3c53b5f2-7303-45ae-a3bb-1b2da20f4cc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020031778s May 7 13:15:44.828: INFO: Pod "downwardapi-volume-3c53b5f2-7303-45ae-a3bb-1b2da20f4cc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023155931s STEP: Saw pod success May 7 13:15:44.828: INFO: Pod "downwardapi-volume-3c53b5f2-7303-45ae-a3bb-1b2da20f4cc2" satisfied condition "success or failure" May 7 13:15:44.830: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3c53b5f2-7303-45ae-a3bb-1b2da20f4cc2 container client-container: STEP: delete the pod May 7 13:15:44.852: INFO: Waiting for pod downwardapi-volume-3c53b5f2-7303-45ae-a3bb-1b2da20f4cc2 to disappear May 7 13:15:44.879: INFO: Pod downwardapi-volume-3c53b5f2-7303-45ae-a3bb-1b2da20f4cc2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:15:44.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1535" for this suite. 
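This is the same resourceFieldRef pattern as the projected variant earlier in the run, but through a plain downwardAPI volume rather than a projected one; a sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-memlimit   # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:           # plain downwardAPI volume, no projected wrapper
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo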
May 7 13:15:50.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:15:50.974: INFO: namespace downward-api-1535 deletion completed in 6.091247132s • [SLOW TEST:10.277 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:15:50.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 7 13:15:51.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7633' May 7 13:15:51.153: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 7 13:15:51.153: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 May 7 13:15:53.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7633' May 7 13:15:53.318: INFO: stderr: "" May 7 13:15:53.318: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:15:53.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7633" for this suite. 
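With no --generator flag, this kubectl version defaults to deployment/apps.v1 (the deprecation warning in the log says as much), which corresponds roughly to this manifest sketch, defaulted fields omitted:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine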
May 7 13:17:55.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:17:55.498: INFO: namespace kubectl-7633 deletion completed in 2m2.175637294s • [SLOW TEST:124.524 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:17:55.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 7 13:17:55.589: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. May 7 13:17:56.504: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 7 13:17:58.992: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454276, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454276, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454276, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454276, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 13:18:00.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454276, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454276, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454276, 
loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454276, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 13:18:03.721: INFO: Waited 716.347151ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:18:04.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5517" for this suite. May 7 13:18:10.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:18:10.422: INFO: namespace aggregator-5517 deletion completed in 6.251323696s • [SLOW TEST:14.923 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:18:10.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition May 7 13:18:10.519: INFO: Waiting up to 5m0s for pod "var-expansion-8fdeaa9a-df7b-4725-931c-b4ad5b3dd2cf" in namespace "var-expansion-8926" to be "success or failure" May 7 13:18:10.529: INFO: Pod "var-expansion-8fdeaa9a-df7b-4725-931c-b4ad5b3dd2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.994487ms May 7 13:18:12.632: INFO: Pod "var-expansion-8fdeaa9a-df7b-4725-931c-b4ad5b3dd2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112635938s May 7 13:18:14.636: INFO: Pod "var-expansion-8fdeaa9a-df7b-4725-931c-b4ad5b3dd2cf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.116849477s STEP: Saw pod success May 7 13:18:14.636: INFO: Pod "var-expansion-8fdeaa9a-df7b-4725-931c-b4ad5b3dd2cf" satisfied condition "success or failure" May 7 13:18:14.640: INFO: Trying to get logs from node iruya-worker pod var-expansion-8fdeaa9a-df7b-4725-931c-b4ad5b3dd2cf container dapi-container: STEP: delete the pod May 7 13:18:14.680: INFO: Waiting for pod var-expansion-8fdeaa9a-df7b-4725-931c-b4ad5b3dd2cf to disappear May 7 13:18:14.696: INFO: Pod var-expansion-8fdeaa9a-df7b-4725-931c-b4ad5b3dd2cf no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:18:14.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8926" for this suite. May 7 13:18:20.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:18:20.851: INFO: namespace var-expansion-8926 deletion completed in 6.151409613s • [SLOW TEST:10.428 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:18:20.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 7 13:18:20.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-3124' May 7 13:18:21.041: INFO: stderr: "" May 7 13:18:21.041: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 7 13:18:26.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-3124 -o json' May 7 13:18:26.186: INFO: stderr: "" May 7 13:18:26.186: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-07T13:18:21Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-3124\",\n \"resourceVersion\": \"9530111\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3124/pods/e2e-test-nginx-pod\",\n \"uid\": 
\"9b868151-afad-490a-b62b-351d64965467\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-rk6mv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-rk6mv\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-rk6mv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-07T13:18:21Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-07T13:18:24Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-07T13:18:24Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-07T13:18:21Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://566744e680a33e9eb3df4a04b47ad9dd82ab1375dbc082c32236788c14b83958\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-07T13:18:23Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.79\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-07T13:18:21Z\"\n }\n}\n" STEP: replace the image in the pod May 7 13:18:26.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3124' May 7 13:18:26.548: INFO: stderr: "" May 7 13:18:26.548: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 May 7 13:18:26.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3124' May 7 13:18:30.359: INFO: stderr: "" May 7 13:18:30.359: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:18:30.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "kubectl-3124" for this suite. May 7 13:18:36.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:18:36.539: INFO: namespace kubectl-3124 deletion completed in 6.157494551s • [SLOW TEST:15.688 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:18:36.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 7 13:18:36.658: INFO: Waiting up to 5m0s for pod "downward-api-2067c23a-34d1-4101-9ff1-5523954f5673" in namespace "downward-api-7939" to be "success or failure" May 7 13:18:36.669: INFO: Pod "downward-api-2067c23a-34d1-4101-9ff1-5523954f5673": Phase="Pending", Reason="", readiness=false. Elapsed: 11.435472ms May 7 13:18:38.675: INFO: Pod "downward-api-2067c23a-34d1-4101-9ff1-5523954f5673": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017407765s May 7 13:18:40.680: INFO: Pod "downward-api-2067c23a-34d1-4101-9ff1-5523954f5673": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021931781s STEP: Saw pod success May 7 13:18:40.680: INFO: Pod "downward-api-2067c23a-34d1-4101-9ff1-5523954f5673" satisfied condition "success or failure" May 7 13:18:40.683: INFO: Trying to get logs from node iruya-worker pod downward-api-2067c23a-34d1-4101-9ff1-5523954f5673 container dapi-container: STEP: delete the pod May 7 13:18:40.704: INFO: Waiting for pod downward-api-2067c23a-34d1-4101-9ff1-5523954f5673 to disappear May 7 13:18:40.709: INFO: Pod downward-api-2067c23a-34d1-4101-9ff1-5523954f5673 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:18:40.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7939" for this suite. 
May 7 13:18:46.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:18:46.809: INFO: namespace downward-api-7939 deletion completed in 6.097160066s • [SLOW TEST:10.270 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:18:46.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 7 13:18:46.911: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b638a9b5-38d5-4098-8b58-12aec100372d" in namespace "downward-api-245" to be "success or failure" May 7 13:18:46.925: INFO: Pod "downwardapi-volume-b638a9b5-38d5-4098-8b58-12aec100372d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.288616ms May 7 13:18:48.929: INFO: Pod "downwardapi-volume-b638a9b5-38d5-4098-8b58-12aec100372d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017936149s May 7 13:18:50.933: INFO: Pod "downwardapi-volume-b638a9b5-38d5-4098-8b58-12aec100372d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022241921s STEP: Saw pod success May 7 13:18:50.933: INFO: Pod "downwardapi-volume-b638a9b5-38d5-4098-8b58-12aec100372d" satisfied condition "success or failure" May 7 13:18:50.936: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b638a9b5-38d5-4098-8b58-12aec100372d container client-container: STEP: delete the pod May 7 13:18:50.975: INFO: Waiting for pod downwardapi-volume-b638a9b5-38d5-4098-8b58-12aec100372d to disappear May 7 13:18:50.990: INFO: Pod downwardapi-volume-b638a9b5-38d5-4098-8b58-12aec100372d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:18:50.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-245" for this suite. 
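The "default cpu limit" assertion above hinges on resourceFieldRef falling back to the node's allocatable CPU when the container sets no limit. A hand-built equivalent, with illustrative names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-downward-vol
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        # No resources.limits.cpu here, so the projected file falls back
        # to the node's allocatable CPU.
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
    EOF
    kubectl logs demo-downward-vol   # e.g. "2" on a node with 2 allocatable CPUs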
May 7 13:18:57.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:18:57.091: INFO: namespace downward-api-245 deletion completed in 6.095702826s • [SLOW TEST:10.282 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:18:57.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-37d57d0d-ed6f-4d79-981b-2ae51d396300 STEP: Creating a pod to test consume secrets May 7 13:18:57.185: INFO: Waiting up to 5m0s for pod "pod-secrets-c4d296cc-ba4a-42c2-b410-784c747cb9ae" in namespace "secrets-3485" to be "success or failure" May 7 13:18:57.188: INFO: Pod "pod-secrets-c4d296cc-ba4a-42c2-b410-784c747cb9ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.948933ms May 7 13:18:59.192: INFO: Pod "pod-secrets-c4d296cc-ba4a-42c2-b410-784c747cb9ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006802441s May 7 13:19:01.196: INFO: Pod "pod-secrets-c4d296cc-ba4a-42c2-b410-784c747cb9ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010864626s STEP: Saw pod success May 7 13:19:01.196: INFO: Pod "pod-secrets-c4d296cc-ba4a-42c2-b410-784c747cb9ae" satisfied condition "success or failure" May 7 13:19:01.199: INFO: Trying to get logs from node iruya-worker pod pod-secrets-c4d296cc-ba4a-42c2-b410-784c747cb9ae container secret-volume-test: STEP: delete the pod May 7 13:19:01.238: INFO: Waiting for pod pod-secrets-c4d296cc-ba4a-42c2-b410-784c747cb9ae to disappear May 7 13:19:01.244: INFO: Pod pod-secrets-c4d296cc-ba4a-42c2-b410-784c747cb9ae no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:19:01.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3485" for this suite. 
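"Consumable in multiple volumes" above means one Secret mounted at two paths in the same pod; both mounts must serve the same data. A sketch with illustrative names:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-secret-pod
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox:1.29
        # Both files should print the same payload.
        command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
        volumeMounts:
        - { name: secret-1, mountPath: /etc/secret-1, readOnly: true }
        - { name: secret-2, mountPath: /etc/secret-2, readOnly: true }
      volumes:
      - name: secret-1
        secret: { secretName: demo-secret }
      - name: secret-2
        secret: { secretName: demo-secret }
    EOF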
May 7 13:19:07.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:19:07.350: INFO: namespace secrets-3485 deletion completed in 6.101349532s • [SLOW TEST:10.258 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:19:07.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3261 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 7 13:19:07.466: INFO: Found 0 stateful pods, waiting for 3 May 7 13:19:17.471: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 7 13:19:17.471: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 7 13:19:17.471: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 7 13:19:27.470: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 7 13:19:27.470: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 7 13:19:27.470: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 7 13:19:27.493: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 7 13:19:37.567: INFO: Updating stateful set ss2 May 7 13:19:37.604: INFO: Waiting for Pod statefulset-3261/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 7 13:19:47.780: INFO: Found 2 stateful pods, waiting for 3 May 7 13:19:57.785: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 7 13:19:57.785: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 7 13:19:57.785: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true 
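The canary above, and the phased rolling update that follows below, both ride on the RollingUpdate partition: only pods with ordinal >= partition move to the new revision. The mechanics, sketched against this suite's ss2 set (the container name nginx is an assumption, not read from the log):

    # Hold everything below ordinal 2 on the old revision.
    kubectl patch statefulset ss2 -p \
      '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
    # Change the template; with 3 replicas only ss2-2 (the canary) rolls.
    kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
    # Phase the rest in by lowering the partition (2 -> 1 -> 0).
    kubectl patch statefulset ss2 -p \
      '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'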
STEP: Performing a phased rolling update May 7 13:19:57.810: INFO: Updating stateful set ss2 May 7 13:19:57.855: INFO: Waiting for Pod statefulset-3261/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 7 13:20:07.879: INFO: Updating stateful set ss2 May 7 13:20:07.935: INFO: Waiting for StatefulSet statefulset-3261/ss2 to complete update May 7 13:20:07.935: INFO: Waiting for Pod statefulset-3261/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 7 13:20:17.943: INFO: Deleting all statefulset in ns statefulset-3261 May 7 13:20:17.946: INFO: Scaling statefulset ss2 to 0 May 7 13:20:47.964: INFO: Waiting for statefulset status.replicas updated to 0 May 7 13:20:47.967: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:20:47.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3261" for this suite. May 7 13:20:56.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:20:56.088: INFO: namespace statefulset-3261 deletion completed in 8.087990716s • [SLOW TEST:108.737 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:20:56.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5790 STEP: creating a selector STEP: Creating the service pods in kubernetes May 7 13:20:56.159: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 7 13:21:18.295: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.85:8080/dial?request=hostName&protocol=http&host=10.244.1.84&port=8080&tries=1'] Namespace:pod-network-test-5790 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 13:21:18.295: INFO: >>> kubeConfig: /root/.kube/config I0507 13:21:18.325308 6 log.go:172] (0xc0019804d0) (0xc002584780) Create stream I0507 13:21:18.325353 6 log.go:172] (0xc0019804d0) (0xc002584780) Stream added, 
broadcasting: 1 I0507 13:21:18.339318 6 log.go:172] (0xc0019804d0) Reply frame received for 1 I0507 13:21:18.339360 6 log.go:172] (0xc0019804d0) (0xc0002f4a00) Create stream I0507 13:21:18.339371 6 log.go:172] (0xc0019804d0) (0xc0002f4a00) Stream added, broadcasting: 3 I0507 13:21:18.340173 6 log.go:172] (0xc0019804d0) Reply frame received for 3 I0507 13:21:18.340200 6 log.go:172] (0xc0019804d0) (0xc0025848c0) Create stream I0507 13:21:18.340208 6 log.go:172] (0xc0019804d0) (0xc0025848c0) Stream added, broadcasting: 5 I0507 13:21:18.342308 6 log.go:172] (0xc0019804d0) Reply frame received for 5 I0507 13:21:18.417543 6 log.go:172] (0xc0019804d0) Data frame received for 3 I0507 13:21:18.417577 6 log.go:172] (0xc0002f4a00) (3) Data frame handling I0507 13:21:18.417599 6 log.go:172] (0xc0002f4a00) (3) Data frame sent I0507 13:21:18.418245 6 log.go:172] (0xc0019804d0) Data frame received for 5 I0507 13:21:18.418271 6 log.go:172] (0xc0025848c0) (5) Data frame handling I0507 13:21:18.418293 6 log.go:172] (0xc0019804d0) Data frame received for 3 I0507 13:21:18.418322 6 log.go:172] (0xc0002f4a00) (3) Data frame handling I0507 13:21:18.419810 6 log.go:172] (0xc0019804d0) Data frame received for 1 I0507 13:21:18.419841 6 log.go:172] (0xc002584780) (1) Data frame handling I0507 13:21:18.419868 6 log.go:172] (0xc002584780) (1) Data frame sent I0507 13:21:18.419889 6 log.go:172] (0xc0019804d0) (0xc002584780) Stream removed, broadcasting: 1 I0507 13:21:18.419913 6 log.go:172] (0xc0019804d0) Go away received I0507 13:21:18.420001 6 log.go:172] (0xc0019804d0) (0xc002584780) Stream removed, broadcasting: 1 I0507 13:21:18.420025 6 log.go:172] (0xc0019804d0) (0xc0002f4a00) Stream removed, broadcasting: 3 I0507 13:21:18.420042 6 log.go:172] (0xc0019804d0) (0xc0025848c0) Stream removed, broadcasting: 5 May 7 13:21:18.420: INFO: Waiting for endpoints: map[] May 7 13:21:18.423: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.85:8080/dial?request=hostName&protocol=http&host=10.244.2.149&port=8080&tries=1'] Namespace:pod-network-test-5790 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 13:21:18.423: INFO: >>> kubeConfig: /root/.kube/config I0507 13:21:18.460771 6 log.go:172] (0xc0018ec6e0) (0xc0002f52c0) Create stream I0507 13:21:18.460803 6 log.go:172] (0xc0018ec6e0) (0xc0002f52c0) Stream added, broadcasting: 1 I0507 13:21:18.463434 6 log.go:172] (0xc0018ec6e0) Reply frame received for 1 I0507 13:21:18.463470 6 log.go:172] (0xc0018ec6e0) (0xc0002f5400) Create stream I0507 13:21:18.463479 6 log.go:172] (0xc0018ec6e0) (0xc0002f5400) Stream added, broadcasting: 3 I0507 13:21:18.464612 6 log.go:172] (0xc0018ec6e0) Reply frame received for 3 I0507 13:21:18.464657 6 log.go:172] (0xc0018ec6e0) (0xc0013b4460) Create stream I0507 13:21:18.464672 6 log.go:172] (0xc0018ec6e0) (0xc0013b4460) Stream added, broadcasting: 5 I0507 13:21:18.465941 6 log.go:172] (0xc0018ec6e0) Reply frame received for 5 I0507 13:21:18.527065 6 log.go:172] (0xc0018ec6e0) Data frame received for 3 I0507 13:21:18.527089 6 log.go:172] (0xc0002f5400) (3) Data frame handling I0507 13:21:18.527103 6 log.go:172] (0xc0002f5400) (3) Data frame sent I0507 13:21:18.527545 6 log.go:172] (0xc0018ec6e0) Data frame received for 5 I0507 13:21:18.527604 6 log.go:172] (0xc0013b4460) (5) Data frame handling I0507 13:21:18.527635 6 log.go:172] (0xc0018ec6e0) Data frame received for 3 I0507 13:21:18.527673 6 log.go:172] (0xc0002f5400) (3) Data frame handling 
I0507 13:21:18.529273 6 log.go:172] (0xc0018ec6e0) Data frame received for 1 I0507 13:21:18.529301 6 log.go:172] (0xc0002f52c0) (1) Data frame handling I0507 13:21:18.529320 6 log.go:172] (0xc0002f52c0) (1) Data frame sent I0507 13:21:18.529336 6 log.go:172] (0xc0018ec6e0) (0xc0002f52c0) Stream removed, broadcasting: 1 I0507 13:21:18.529418 6 log.go:172] (0xc0018ec6e0) (0xc0002f52c0) Stream removed, broadcasting: 1 I0507 13:21:18.529437 6 log.go:172] (0xc0018ec6e0) (0xc0002f5400) Stream removed, broadcasting: 3 I0507 13:21:18.529445 6 log.go:172] (0xc0018ec6e0) (0xc0013b4460) Stream removed, broadcasting: 5 May 7 13:21:18.529: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:21:18.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0507 13:21:18.529798 6 log.go:172] (0xc0018ec6e0) Go away received STEP: Destroying namespace "pod-network-test-5790" for this suite. May 7 13:21:42.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:21:42.624: INFO: namespace pod-network-test-5790 deletion completed in 24.091928093s • [SLOW TEST:46.537 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:21:42.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8021, will wait for the garbage collector to delete the pods May 7 13:21:48.771: INFO: Deleting Job.batch foo took: 5.730308ms May 7 13:21:49.071: INFO: Terminating Job.batch foo pods took: 300.252664ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:22:32.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8021" for this suite. 
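The job-deletion path above relies on ownerReferences: deleting the Job leaves pod cleanup to the garbage collector, which is why the log waits on it. A hand-run equivalent with illustrative names (the suite's job is "foo"):

    kubectl apply -f - <<'EOF'
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: demo-job
    spec:
      parallelism: 2
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: c
            image: busybox:1.29
            command: ["sleep", "3600"]
    EOF
    kubectl get job demo-job -o jsonpath='{.status.active}'   # expect 2
    kubectl delete job demo-job                 # pods reaped by the GC
    kubectl get pods -l job-name=demo-job       # drains to empty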
May 7 13:22:38.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:22:38.264: INFO: namespace job-8021 deletion completed in 6.086941965s • [SLOW TEST:55.640 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:22:38.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:22:44.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4115" for this suite. May 7 13:22:50.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:22:50.646: INFO: namespace namespaces-4115 deletion completed in 6.099162371s STEP: Destroying namespace "nsdeletetest-2765" for this suite. May 7 13:22:50.648: INFO: Namespace nsdeletetest-2765 was already deleted STEP: Destroying namespace "nsdeletetest-7334" for this suite. 
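The assertion in the namespaces test above is that namespace deletion is cascading: everything namespaced, services included, must be gone before the namespace finalizer releases. By hand, with illustrative names:

    kubectl create namespace demo-ns
    kubectl create service clusterip demo-svc --tcp=80:80 -n demo-ns
    kubectl delete namespace demo-ns    # blocks until namespaced content is reaped
    kubectl get services -n demo-ns     # nothing left once deletion finishes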
May 7 13:22:56.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:22:56.739: INFO: namespace nsdeletetest-7334 deletion completed in 6.091316887s • [SLOW TEST:18.474 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:22:56.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-1357 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1357 to expose endpoints map[] May 7 13:22:56.942: INFO: successfully validated that service endpoint-test2 in namespace services-1357 exposes endpoints map[] (50.00416ms elapsed) STEP: Creating pod pod1 in namespace services-1357 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1357 to expose endpoints map[pod1:[80]] May 7 13:23:00.062: INFO: successfully validated that service endpoint-test2 in namespace services-1357 exposes endpoints map[pod1:[80]] (3.05572603s elapsed) STEP: Creating pod pod2 in namespace services-1357 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1357 to expose endpoints map[pod1:[80] pod2:[80]] May 7 13:23:03.161: INFO: successfully validated that service endpoint-test2 in namespace services-1357 exposes endpoints map[pod1:[80] pod2:[80]] (3.096170402s elapsed) STEP: Deleting pod pod1 in namespace services-1357 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1357 to expose endpoints map[pod2:[80]] May 7 13:23:03.200: INFO: successfully validated that service endpoint-test2 in namespace services-1357 exposes endpoints map[pod2:[80]] (23.04581ms elapsed) STEP: Deleting pod pod2 in namespace services-1357 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1357 to expose endpoints map[] May 7 13:23:03.223: INFO: successfully validated that service endpoint-test2 in namespace services-1357 exposes endpoints map[] (18.841682ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:23:03.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1357" for this suite. 
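endpoint-test2 above is a plain selector service; its Endpoints object tracks ready pods matching the selector, one address per pod. The same choreography by hand — endpoint-demo is illustrative, and kubectl create service is assumed to set an app=<name> selector:

    kubectl create service clusterip endpoint-demo --tcp=80:80
    # A matching pod appears as an endpoint address once Ready.
    kubectl run pod1 --generator=run-pod/v1 \
      --image=docker.io/library/nginx:1.14-alpine --labels=app=endpoint-demo
    kubectl get endpoints endpoint-demo -o wide   # pod1's IP:80
    kubectl delete pod pod1
    kubectl get endpoints endpoint-demo           # back to <none>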
May 7 13:23:09.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:23:09.726: INFO: namespace services-1357 deletion completed in 6.189719184s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:12.986 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:23:09.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 7 13:23:14.350: INFO: Successfully updated pod "labelsupdate0fe7a463-d34a-4994-b185-aef5d23291b7" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:23:16.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8308" for this suite. 
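The update path being tested above: labels projected through a downwardAPI volume are rewritten by the kubelet on its sync loop when the pod's labels change, with no restart. A sketch, names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-labels
      labels:
        stage: canary
    spec:
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sleep", "3600"]
        volumeMounts:
        - { name: podinfo, mountPath: /etc/podinfo }
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: labels
            fieldRef: { fieldPath: metadata.labels }
    EOF
    kubectl label pod demo-labels stage=prod --overwrite
    # After the next kubelet sync the mounted file carries the new value:
    kubectl exec demo-labels -- cat /etc/podinfo/labels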
May 7 13:23:38.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:23:38.521: INFO: namespace downward-api-8308 deletion completed in 22.147648519s • [SLOW TEST:28.795 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:23:38.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-27b030b3-8dc1-4e36-8929-dc9368ee8b11 STEP: Creating secret with name s-test-opt-upd-20560586-a056-42bc-8b8f-8883491a9041 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-27b030b3-8dc1-4e36-8929-dc9368ee8b11 STEP: Updating secret s-test-opt-upd-20560586-a056-42bc-8b8f-8883491a9041 STEP: Creating secret with name s-test-opt-create-901fc16a-be59-40ff-bee5-8bc9b4e5662a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:25:01.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2910" for this suite. 
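"Optional" is what lets the pod in the projected-secret test above start while one of its secrets is still missing, and what makes later creates and updates visible in the volume. A reduced sketch (names illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-projected
    spec:
      containers:
      - name: c
        image: busybox:1.29
        command: ["sleep", "3600"]
        volumeMounts:
        - { name: creds, mountPath: /etc/creds }
      volumes:
      - name: creds
        projected:
          sources:
          - secret:
              name: maybe-secret
              optional: true    # pod starts even while the secret is absent
    EOF
    kubectl create secret generic maybe-secret --from-literal=token=abc
    kubectl exec demo-projected -- cat /etc/creds/token   # appears after a sync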
May 7 13:25:25.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:25:25.185: INFO: namespace projected-2910 deletion completed in 24.109825459s • [SLOW TEST:106.663 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:25:25.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 7 13:25:25.275: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6740,SelfLink:/api/v1/namespaces/watch-6740/configmaps/e2e-watch-test-label-changed,UID:164999d1-1cfb-495d-9d12-9454c2b9bf04,ResourceVersion:9531556,Generation:0,CreationTimestamp:2020-05-07 13:25:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 7 13:25:25.275: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6740,SelfLink:/api/v1/namespaces/watch-6740/configmaps/e2e-watch-test-label-changed,UID:164999d1-1cfb-495d-9d12-9454c2b9bf04,ResourceVersion:9531557,Generation:0,CreationTimestamp:2020-05-07 13:25:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 7 13:25:25.275: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6740,SelfLink:/api/v1/namespaces/watch-6740/configmaps/e2e-watch-test-label-changed,UID:164999d1-1cfb-495d-9d12-9454c2b9bf04,ResourceVersion:9531558,Generation:0,CreationTimestamp:2020-05-07 13:25:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 7 13:25:35.372: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6740,SelfLink:/api/v1/namespaces/watch-6740/configmaps/e2e-watch-test-label-changed,UID:164999d1-1cfb-495d-9d12-9454c2b9bf04,ResourceVersion:9531581,Generation:0,CreationTimestamp:2020-05-07 13:25:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 7 13:25:35.373: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6740,SelfLink:/api/v1/namespaces/watch-6740/configmaps/e2e-watch-test-label-changed,UID:164999d1-1cfb-495d-9d12-9454c2b9bf04,ResourceVersion:9531582,Generation:0,CreationTimestamp:2020-05-07 13:25:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 7 13:25:35.373: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6740,SelfLink:/api/v1/namespaces/watch-6740/configmaps/e2e-watch-test-label-changed,UID:164999d1-1cfb-495d-9d12-9454c2b9bf04,ResourceVersion:9531583,Generation:0,CreationTimestamp:2020-05-07 13:25:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:25:35.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6740" for this suite. 
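The watch above filters on a label selector, so an object that merely stops matching surfaces as DELETED even though it still exists, and reappears as ADDED when the label is restored. A rough CLI approximation — kubectl's table output does not print event types, so this only approximates what the test asserts at the API level:

    kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch &
    kubectl create configmap demo-watch-cm --from-literal=k=v
    kubectl label configmap demo-watch-cm \
      watch-this-configmap=label-changed-and-restored   # enters the selection
    kubectl label configmap demo-watch-cm \
      watch-this-configmap=other --overwrite            # leaves it again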
May 7 13:25:41.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:25:41.476: INFO: namespace watch-6740 deletion completed in 6.088797703s • [SLOW TEST:16.291 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:25:41.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 7 13:25:41.564: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9191341-bc47-4768-8993-bfb083e1a3c7" in namespace "projected-6683" to be "success or failure" May 7 13:25:41.567: INFO: Pod "downwardapi-volume-a9191341-bc47-4768-8993-bfb083e1a3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.543153ms May 7 13:25:43.572: INFO: Pod "downwardapi-volume-a9191341-bc47-4768-8993-bfb083e1a3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008331414s May 7 13:25:45.576: INFO: Pod "downwardapi-volume-a9191341-bc47-4768-8993-bfb083e1a3c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01247603s STEP: Saw pod success May 7 13:25:45.576: INFO: Pod "downwardapi-volume-a9191341-bc47-4768-8993-bfb083e1a3c7" satisfied condition "success or failure" May 7 13:25:45.579: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a9191341-bc47-4768-8993-bfb083e1a3c7 container client-container: STEP: delete the pod May 7 13:25:45.651: INFO: Waiting for pod downwardapi-volume-a9191341-bc47-4768-8993-bfb083e1a3c7 to disappear May 7 13:25:45.654: INFO: Pod downwardapi-volume-a9191341-bc47-4768-8993-bfb083e1a3c7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:25:45.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6683" for this suite. 
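Same downward-API resource data as the earlier downwardAPI-volume test, but delivered through a projected source; a divisor can normalize the units. Sketch, names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-projected-cpu
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        resources:
          limits: { cpu: 500m }
        volumeMounts:
        - { name: podinfo, mountPath: /etc/podinfo }
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu
                  divisor: 1m       # report in millicores: "500"
    EOF
    kubectl logs demo-projected-cpu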
May 7 13:25:51.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:25:51.746: INFO: namespace projected-6683 deletion completed in 6.088962175s • [SLOW TEST:10.269 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:25:51.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-404bc649-d593-4c52-a68e-a8cc8533fb55 STEP: Creating configMap with name cm-test-opt-upd-c906b985-92d2-4bf5-9557-ed7211d336ba STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-404bc649-d593-4c52-a68e-a8cc8533fb55 STEP: Updating configmap cm-test-opt-upd-c906b985-92d2-4bf5-9557-ed7211d336ba STEP: Creating configMap with name cm-test-opt-create-549c4323-aea6-4213-9ab0-82c508092172 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:26:00.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9602" for this suite. 
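The ConfigMap twin of the projected-secret test above: optional tolerates the missing map at startup, and the kubelet folds in later changes. A compact sketch (names illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-cm-pod
    spec:
      containers:
      - name: c
        image: busybox:1.29
        command: ["sleep", "3600"]
        volumeMounts:
        - { name: cfg, mountPath: /etc/cfg }
      volumes:
      - name: cfg
        configMap:
          name: maybe-config
          optional: true    # tolerate a missing configmap at startup
    EOF
    kubectl create configmap maybe-config --from-literal=mode=canary
    kubectl exec demo-cm-pod -- cat /etc/cfg/mode   # visible after a sync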
May 7 13:26:24.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:26:24.184: INFO: namespace configmap-9602 deletion completed in 24.112384304s • [SLOW TEST:32.438 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:26:24.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode May 7 13:26:24.337: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9381" to be "success or failure" May 7 13:26:24.382: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 44.998995ms May 7 13:26:26.386: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049317976s May 7 13:26:28.391: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053981967s May 7 13:26:30.396: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058546808s STEP: Saw pod success May 7 13:26:30.396: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 7 13:26:30.399: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 7 13:26:30.420: INFO: Waiting for pod pod-host-path-test to disappear May 7 13:26:30.442: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:26:30.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9381" for this suite. 
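The hostPath mode check above boils down to the permission bits left on the mount point inside the container. A hand-built equivalent; path and names illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-hostpath
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-1
        image: busybox:1.29
        command: ["sh", "-c", "ls -ld /test-volume"]
        volumeMounts:
        - { name: test-volume, mountPath: /test-volume }
      volumes:
      - name: test-volume
        hostPath:
          path: /tmp/demo-hostpath      # created on the node if absent
          type: DirectoryOrCreate
    EOF
    kubectl logs demo-hostpath          # mode bits, e.g. drwxrwxrwx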
May 7 13:26:36.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:26:36.555: INFO: namespace hostpath-9381 deletion completed in 6.109608446s • [SLOW TEST:12.370 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:26:36.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 13:26:36.692: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 7 13:26:41.697: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 7 13:26:41.697: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 7 13:26:43.702: INFO: Creating deployment "test-rollover-deployment" May 7 13:26:43.732: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 7 13:26:45.739: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 7 13:26:45.745: INFO: Ensure that both replica sets have 1 created replica May 7 13:26:45.750: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 7 13:26:45.756: INFO: Updating deployment test-rollover-deployment May 7 13:26:45.756: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 7 13:26:47.769: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 7 13:26:47.776: INFO: Make sure deployment "test-rollover-deployment" is complete May 7 13:26:47.782: INFO: all replica sets need to contain the pod-template-hash label May 7 13:26:47.782: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454805, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 
13:26:49.790: INFO: all replica sets need to contain the pod-template-hash label May 7 13:26:49.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454809, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 13:26:51.791: INFO: all replica sets need to contain the pod-template-hash label May 7 13:26:51.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454809, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 13:26:53.805: INFO: all replica sets need to contain the pod-template-hash label May 7 13:26:53.805: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454809, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 13:26:55.790: INFO: all replica sets need to contain the pod-template-hash label May 7 13:26:55.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, 
loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454809, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 13:26:57.791: INFO: all replica sets need to contain the pod-template-hash label May 7 13:26:57.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454809, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724454803, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 13:26:59.789: INFO: May 7 13:26:59.789: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 7 13:26:59.796: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5671,SelfLink:/apis/apps/v1/namespaces/deployment-5671/deployments/test-rollover-deployment,UID:532d3daa-2df7-47dd-9fb9-76eb2f0ec681,ResourceVersion:9531918,Generation:2,CreationTimestamp:2020-05-07 13:26:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false 
false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-07 13:26:43 +0000 UTC 2020-05-07 13:26:43 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-07 13:26:59 +0000 UTC 2020-05-07 13:26:43 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 7 13:26:59.799: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5671,SelfLink:/apis/apps/v1/namespaces/deployment-5671/replicasets/test-rollover-deployment-854595fc44,UID:6be2059e-9b90-4ea3-86ac-cd6d2c78f484,ResourceVersion:9531907,Generation:2,CreationTimestamp:2020-05-07 13:26:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 532d3daa-2df7-47dd-9fb9-76eb2f0ec681 0xc002c93f17 0xc002c93f18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 7 13:26:59.799: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 7 13:26:59.799: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5671,SelfLink:/apis/apps/v1/namespaces/deployment-5671/replicasets/test-rollover-controller,UID:1e379ef1-65a9-4691-8a93-2d6bb6722f32,ResourceVersion:9531916,Generation:2,CreationTimestamp:2020-05-07 13:26:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 532d3daa-2df7-47dd-9fb9-76eb2f0ec681 0xc002c93e47 0xc002c93e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 7 13:26:59.800: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5671,SelfLink:/apis/apps/v1/namespaces/deployment-5671/replicasets/test-rollover-deployment-9b8b997cf,UID:16b7917e-339a-46a5-8652-582f63c67c9a,ResourceVersion:9531871,Generation:2,CreationTimestamp:2020-05-07 13:26:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 532d3daa-2df7-47dd-9fb9-76eb2f0ec681 0xc002c93fe0 0xc002c93fe1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 7 13:26:59.802: INFO: Pod "test-rollover-deployment-854595fc44-c8l8z" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-c8l8z,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5671,SelfLink:/api/v1/namespaces/deployment-5671/pods/test-rollover-deployment-854595fc44-c8l8z,UID:386f7fcc-3c51-4a67-9d80-5dc772e5f10f,ResourceVersion:9531885,Generation:0,CreationTimestamp:2020-05-07 13:26:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 
6be2059e-9b90-4ea3-86ac-cd6d2c78f484 0xc002324bb7 0xc002324bb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vv5p4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vv5p4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-vv5p4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002324c30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002324c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:26:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:26:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:26:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:26:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.155,StartTime:2020-05-07 13:26:45 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-07 13:26:48 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://3c9364feccee0327747cb3a990e5a57693525dbf9262cdc11dc140be925c6092}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:26:59.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5671" for this suite. 
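The long poll loop above is explained by the Deployment's strategy rather than by anything going wrong: with MaxUnavailable=0 and MaxSurge=1 the new ReplicaSet must come up before the old one is scaled down, and MinReadySeconds=10 keeps the new pod counted as Ready but not yet Available, hence the repeated ReadyReplicas:2 / AvailableReplicas:1 status dumps. A minimal Go sketch of that spec (field shapes from k8s.io/api; the pod template is elided, so this is illustrative, not the test's source):

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    maxUnavailable := intstr.FromInt(0)
    maxSurge := intstr.FromInt(1)
    spec := appsv1.DeploymentSpec{
        // Selector only; the pod template is elided. Matching labels are what
        // let the Deployment adopt the pre-existing test-rollover-controller
        // ReplicaSet as an "old" ReplicaSet and scale it to zero.
        Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "rollover-pod"}},
        Strategy: appsv1.DeploymentStrategy{
            Type: appsv1.RollingUpdateDeploymentStrategyType,
            RollingUpdate: &appsv1.RollingUpdateDeployment{
                MaxUnavailable: &maxUnavailable, // never dip below the desired count
                MaxSurge:       &maxSurge,       // allow one extra pod during the roll
            },
        },
        MinReadySeconds: 10, // a new pod must stay Ready 10s before it counts as available
    }
    fmt.Printf("strategy=%s minReadySeconds=%d\n", spec.Strategy.Type, spec.MinReadySeconds)
}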
May 7 13:27:05.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:27:05.943: INFO: namespace deployment-5671 deletion completed in 6.137930261s • [SLOW TEST:29.388 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:27:05.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 13:27:06.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9850' May 7 13:27:08.949: INFO: stderr: "" May 7 13:27:08.949: INFO: stdout: "replicationcontroller/redis-master created\n" May 7 13:27:08.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9850' May 7 13:27:09.258: INFO: stderr: "" May 7 13:27:09.258: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. May 7 13:27:10.262: INFO: Selector matched 1 pods for map[app:redis] May 7 13:27:10.262: INFO: Found 0 / 1 May 7 13:27:11.348: INFO: Selector matched 1 pods for map[app:redis] May 7 13:27:11.348: INFO: Found 0 / 1 May 7 13:27:12.262: INFO: Selector matched 1 pods for map[app:redis] May 7 13:27:12.262: INFO: Found 0 / 1 May 7 13:27:13.263: INFO: Selector matched 1 pods for map[app:redis] May 7 13:27:13.263: INFO: Found 0 / 1 May 7 13:27:14.262: INFO: Selector matched 1 pods for map[app:redis] May 7 13:27:14.262: INFO: Found 1 / 1 May 7 13:27:14.262: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 7 13:27:14.265: INFO: Selector matched 1 pods for map[app:redis] May 7 13:27:14.265: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 7 13:27:14.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-mpjxj --namespace=kubectl-9850' May 7 13:27:14.364: INFO: stderr: "" May 7 13:27:14.365: INFO: stdout: "Name: redis-master-mpjxj\nNamespace: kubectl-9850\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Thu, 07 May 2020 13:27:08 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.92\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://68da91673578e7c34b170503c6725fed8b7c807f9653f5aff8e4c1f9409cf1b3\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 07 May 2020 13:27:12 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4vm9z (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-4vm9z:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4vm9z\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned kubectl-9850/redis-master-mpjxj to iruya-worker2\n Normal Pulled 4s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker2 Created container redis-master\n Normal Started 2s kubelet, iruya-worker2 Started container redis-master\n" May 7 13:27:14.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9850' May 7 13:27:14.489: INFO: stderr: "" May 7 13:27:14.489: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9850\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 6s replication-controller Created pod: redis-master-mpjxj\n" May 7 13:27:14.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9850' May 7 13:27:14.611: INFO: stderr: "" May 7 13:27:14.611: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9850\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.172.9\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.92:6379\nSession Affinity: None\nEvents: \n" May 7 13:27:14.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' May 7 13:27:14.741: INFO: stderr: "" May 7 13:27:14.741: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 07 May 2020 13:26:32 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 07 May 2020 13:26:32 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 07 May 2020 13:26:32 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 07 May 2020 13:26:32 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 52d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 52d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 52d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 52d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 7 13:27:14.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9850' May 7 13:27:14.852: INFO: stderr: "" May 7 13:27:14.852: INFO: stdout: "Name: kubectl-9850\nLabels: e2e-framework=kubectl\n e2e-run=c79be77e-30b8-437a-9018-ff7265094089\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:27:14.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9850" for this suite. 
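The Running '/usr/local/bin/kubectl ...' lines above come from shelling out to kubectl. A hypothetical stand-in for that helper (names and flags mirror the log; this is not the framework's actual implementation), capturing stdout and stderr separately because the test asserts stderr is empty:

package main

import (
    "bytes"
    "fmt"
    "os/exec"
)

// runKubectl shells out with an explicit kubeconfig and namespace and returns
// stdout and stderr separately.
func runKubectl(kubeconfig, namespace string, args ...string) (string, string, error) {
    full := append([]string{"--kubeconfig=" + kubeconfig, "--namespace=" + namespace}, args...)
    cmd := exec.Command("kubectl", full...) // assumes kubectl is on PATH
    var stdout, stderr bytes.Buffer
    cmd.Stdout, cmd.Stderr = &stdout, &stderr
    err := cmd.Run()
    return stdout.String(), stderr.String(), err
}

func main() {
    out, errOut, err := runKubectl("/root/.kube/config", "kubectl-9850",
        "describe", "pod", "redis-master-mpjxj")
    fmt.Printf("stderr=%q err=%v\n%s", errOut, err, out)
}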
May 7 13:27:36.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:27:36.960: INFO: namespace kubectl-9850 deletion completed in 22.105144203s • [SLOW TEST:31.017 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:27:36.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 13:27:37.100: INFO: Create a RollingUpdate DaemonSet May 7 13:27:37.105: INFO: Check that daemon pods launch on every node of the cluster May 7 13:27:37.128: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:27:37.151: INFO: Number of nodes with available pods: 0 May 7 13:27:37.151: INFO: Node iruya-worker is running more than one daemon pod May 7 13:27:38.156: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:27:38.159: INFO: Number of nodes with available pods: 0 May 7 13:27:38.159: INFO: Node iruya-worker is running more than one daemon pod May 7 13:27:39.318: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:27:39.321: INFO: Number of nodes with available pods: 0 May 7 13:27:39.321: INFO: Node iruya-worker is running more than one daemon pod May 7 13:27:40.155: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:27:40.157: INFO: Number of nodes with available pods: 0 May 7 13:27:40.157: INFO: Node iruya-worker is running more than one daemon pod May 7 13:27:41.156: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:27:41.159: INFO: Number of nodes with available pods: 2 May 7 13:27:41.159: INFO: Number of running nodes: 2, number of available pods: 2 May 7 13:27:41.159: INFO: Update the DaemonSet to trigger a rollout May 7 13:27:41.165: INFO: 
Updating DaemonSet daemon-set May 7 13:27:52.186: INFO: Roll back the DaemonSet before rollout is complete May 7 13:27:52.191: INFO: Updating DaemonSet daemon-set May 7 13:27:52.191: INFO: Make sure DaemonSet rollback is complete May 7 13:27:52.198: INFO: Wrong image for pod: daemon-set-xfvlk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 7 13:27:52.198: INFO: Pod daemon-set-xfvlk is not available May 7 13:27:52.221: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:27:53.227: INFO: Wrong image for pod: daemon-set-xfvlk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 7 13:27:53.227: INFO: Pod daemon-set-xfvlk is not available May 7 13:27:53.230: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:27:54.227: INFO: Wrong image for pod: daemon-set-xfvlk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 7 13:27:54.227: INFO: Pod daemon-set-xfvlk is not available May 7 13:27:54.231: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:27:55.225: INFO: Pod daemon-set-s26p6 is not available May 7 13:27:55.227: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-462, will wait for the garbage collector to delete the pods May 7 13:27:55.291: INFO: Deleting DaemonSet.extensions daemon-set took: 6.7616ms May 7 13:27:55.591: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.312248ms May 7 13:28:02.295: INFO: Number of nodes with available pods: 0 May 7 13:28:02.295: INFO: Number of running nodes: 0, number of available pods: 0 May 7 13:28:02.297: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-462/daemonsets","resourceVersion":"9532188"},"items":null} May 7 13:28:02.300: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-462/pods","resourceVersion":"9532188"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:28:02.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-462" for this suite. 
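The rollback above amounts to two edits of the same pod template. A sketch of those edits under the default RollingUpdate strategy (container name is illustrative, images are the ones from the log): pushing an image that can never pull starts replacing pods, and restoring the original nginx image before the rollout finishes rolls back; pods still running the original image are left untouched, which is the "without unnecessary restarts" assertion.

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
)

func setImage(ds *appsv1.DaemonSet, image string) {
    ds.Spec.Template.Spec.Containers[0].Image = image
}

func main() {
    ds := &appsv1.DaemonSet{
        Spec: appsv1.DaemonSetSpec{
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                Type: appsv1.RollingUpdateDaemonSetStrategyType, // replace pods in place as the template changes
            },
            Template: corev1.PodTemplateSpec{
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{Name: "app", Image: "docker.io/library/nginx:1.14-alpine"}},
                },
            },
        },
    }
    setImage(ds, "foo:non-existent")                    // trigger: an image that can never pull
    setImage(ds, "docker.io/library/nginx:1.14-alpine") // rollback: restore the original template
    fmt.Println(ds.Spec.Template.Spec.Containers[0].Image)
}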
May 7 13:28:08.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:28:08.439: INFO: namespace daemonsets-462 deletion completed in 6.099289045s • [SLOW TEST:31.479 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:28:08.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy May 7 13:28:08.576: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix011770636/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:28:08.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3640" for this suite. 
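The proxy in this test listens on a unix socket rather than TCP. A minimal sketch of the client side of "retrieving proxy /api/ output", assuming the socket path from the log: point an http.Client's DialContext at the socket and issue an ordinary GET.

package main

import (
    "context"
    "fmt"
    "io/ioutil"
    "net"
    "net/http"
)

func main() {
    // Socket path taken from the log; any path kubectl proxy was started with works.
    socket := "/tmp/kubectl-proxy-unix011770636/test"
    client := &http.Client{
        Transport: &http.Transport{
            // Ignore the host/port in the URL and always dial the unix socket.
            DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
                return net.Dial("unix", socket)
            },
        },
    }
    resp, err := client.Get("http://localhost/api/") // host is a placeholder; the socket carries the request
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := ioutil.ReadAll(resp.Body)
    fmt.Println(string(body))
}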
May 7 13:28:14.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:28:14.736: INFO: namespace kubectl-3640 deletion completed in 6.089875037s • [SLOW TEST:6.297 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:28:14.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0507 13:28:26.742021 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 7 13:28:26.742: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:28:26.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5825" for this suite. 
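The invariant under test: a dependent with two owners survives as long as any one OwnerReference still points at a live object, even while the other owner is being deleted with its dependents. A sketch of the dual-owner metadata the "set half of pods ... to have rc simpletest-rc-to-stay as owner as well" step produces (pod name and UIDs are placeholders; the RC names come from the log):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name: "simpletest-pod-x",
            OwnerReferences: []metav1.OwnerReference{
                // The owner being deleted along with its dependents...
                {APIVersion: "v1", Kind: "ReplicationController",
                    Name: "simpletest-rc-to-be-deleted", UID: "uid-placeholder-1"},
                // ...and a second, still-valid owner. While this reference
                // resolves to a live object, the GC must keep the pod.
                {APIVersion: "v1", Kind: "ReplicationController",
                    Name: "simpletest-rc-to-stay", UID: "uid-placeholder-2"},
            },
        },
    }
    fmt.Printf("%s has %d owners\n", pod.Name, len(pod.OwnerReferences))
}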
May 7 13:28:34.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:28:34.982: INFO: namespace gc-5825 deletion completed in 8.23741535s • [SLOW TEST:20.245 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:28:34.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-634 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-634 STEP: Creating statefulset with conflicting port in namespace statefulset-634 STEP: Waiting until pod test-pod will start running in namespace statefulset-634 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-634 May 7 13:28:39.153: INFO: Observed stateful pod in namespace: statefulset-634, name: ss-0, uid: 3855f9e4-2294-4c4b-bdf0-4fa1e942986f, status phase: Pending. Waiting for statefulset controller to delete. May 7 13:28:39.671: INFO: Observed stateful pod in namespace: statefulset-634, name: ss-0, uid: 3855f9e4-2294-4c4b-bdf0-4fa1e942986f, status phase: Failed. Waiting for statefulset controller to delete. May 7 13:28:39.681: INFO: Observed stateful pod in namespace: statefulset-634, name: ss-0, uid: 3855f9e4-2294-4c4b-bdf0-4fa1e942986f, status phase: Failed. Waiting for statefulset controller to delete. 
May 7 13:28:39.687: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-634 STEP: Removing pod with conflicting port in namespace statefulset-634 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-634 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 7 13:28:43.788: INFO: Deleting all statefulset in ns statefulset-634 May 7 13:28:43.790: INFO: Scaling statefulset ss to 0 May 7 13:28:53.803: INFO: Waiting for statefulset status.replicas updated to 0 May 7 13:28:53.805: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:28:53.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-634" for this suite. May 7 13:28:59.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:28:59.914: INFO: namespace statefulset-634 deletion completed in 6.09449835s • [SLOW TEST:24.932 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:28:59.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments May 7 13:29:00.022: INFO: Waiting up to 5m0s for pod "client-containers-99eec285-5433-4a30-b7ba-9de9d9e77209" in namespace "containers-9470" to be "success or failure" May 7 13:29:00.057: INFO: Pod "client-containers-99eec285-5433-4a30-b7ba-9de9d9e77209": Phase="Pending", Reason="", readiness=false. Elapsed: 34.931175ms May 7 13:29:02.060: INFO: Pod "client-containers-99eec285-5433-4a30-b7ba-9de9d9e77209": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038125255s May 7 13:29:04.240: INFO: Pod "client-containers-99eec285-5433-4a30-b7ba-9de9d9e77209": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.21889377s STEP: Saw pod success May 7 13:29:04.241: INFO: Pod "client-containers-99eec285-5433-4a30-b7ba-9de9d9e77209" satisfied condition "success or failure" May 7 13:29:04.244: INFO: Trying to get logs from node iruya-worker pod client-containers-99eec285-5433-4a30-b7ba-9de9d9e77209 container test-container: STEP: delete the pod May 7 13:29:04.279: INFO: Waiting for pod client-containers-99eec285-5433-4a30-b7ba-9de9d9e77209 to disappear May 7 13:29:04.296: INFO: Pod client-containers-99eec285-5433-4a30-b7ba-9de9d9e77209 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:29:04.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9470" for this suite. May 7 13:29:10.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:29:10.400: INFO: namespace containers-9470 deletion completed in 6.099789526s • [SLOW TEST:10.485 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:29:10.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-1464/configmap-test-cb2fadc5-8e6b-4ff1-82c4-cbc38a91f469 STEP: Creating a pod to test consume configMaps May 7 13:29:10.496: INFO: Waiting up to 5m0s for pod "pod-configmaps-e9bbaea6-968e-46cb-8a3a-a100f40e65f2" in namespace "configmap-1464" to be "success or failure" May 7 13:29:10.515: INFO: Pod "pod-configmaps-e9bbaea6-968e-46cb-8a3a-a100f40e65f2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.957625ms May 7 13:29:12.518: INFO: Pod "pod-configmaps-e9bbaea6-968e-46cb-8a3a-a100f40e65f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022421718s May 7 13:29:14.523: INFO: Pod "pod-configmaps-e9bbaea6-968e-46cb-8a3a-a100f40e65f2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027071235s STEP: Saw pod success May 7 13:29:14.523: INFO: Pod "pod-configmaps-e9bbaea6-968e-46cb-8a3a-a100f40e65f2" satisfied condition "success or failure" May 7 13:29:14.526: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-e9bbaea6-968e-46cb-8a3a-a100f40e65f2 container env-test: STEP: delete the pod May 7 13:29:14.550: INFO: Waiting for pod pod-configmaps-e9bbaea6-968e-46cb-8a3a-a100f40e65f2 to disappear May 7 13:29:14.554: INFO: Pod pod-configmaps-e9bbaea6-968e-46cb-8a3a-a100f40e65f2 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:29:14.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1464" for this suite. May 7 13:29:20.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:29:20.665: INFO: namespace configmap-1464 deletion completed in 6.107655763s • [SLOW TEST:10.266 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:29:20.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-237855cb-75b7-4a73-9c5a-508461ab00a6 STEP: Creating a pod to test consume configMaps May 7 13:29:20.762: INFO: Waiting up to 5m0s for pod "pod-configmaps-426fb210-3237-4b0f-9b8e-54dfad3d52df" in namespace "configmap-2602" to be "success or failure" May 7 13:29:20.764: INFO: Pod "pod-configmaps-426fb210-3237-4b0f-9b8e-54dfad3d52df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150963ms May 7 13:29:22.768: INFO: Pod "pod-configmaps-426fb210-3237-4b0f-9b8e-54dfad3d52df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006458299s May 7 13:29:24.773: INFO: Pod "pod-configmaps-426fb210-3237-4b0f-9b8e-54dfad3d52df": Phase="Running", Reason="", readiness=true. Elapsed: 4.0110877s May 7 13:29:26.778: INFO: Pod "pod-configmaps-426fb210-3237-4b0f-9b8e-54dfad3d52df": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01574011s STEP: Saw pod success May 7 13:29:26.778: INFO: Pod "pod-configmaps-426fb210-3237-4b0f-9b8e-54dfad3d52df" satisfied condition "success or failure" May 7 13:29:26.781: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-426fb210-3237-4b0f-9b8e-54dfad3d52df container configmap-volume-test: STEP: delete the pod May 7 13:29:26.847: INFO: Waiting for pod pod-configmaps-426fb210-3237-4b0f-9b8e-54dfad3d52df to disappear May 7 13:29:26.860: INFO: Pod pod-configmaps-426fb210-3237-4b0f-9b8e-54dfad3d52df no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:29:26.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2602" for this suite. May 7 13:29:32.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:29:32.957: INFO: namespace configmap-2602 deletion completed in 6.09421827s • [SLOW TEST:12.291 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:29:32.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 7 13:29:37.074: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-9af959bb-89d2-4ba5-b43c-065e79e6dbd0,GenerateName:,Namespace:events-6884,SelfLink:/api/v1/namespaces/events-6884/pods/send-events-9af959bb-89d2-4ba5-b43c-065e79e6dbd0,UID:0d30a327-c4e1-45df-88d5-a9a8b57ff564,ResourceVersion:9532824,Generation:0,CreationTimestamp:2020-05-07 13:29:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 5903867,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cc22w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cc22w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-cc22w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002994b50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002994b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:29:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:29:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:29:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:29:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.102,StartTime:2020-05-07 13:29:33 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-07 13:29:35 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://9337f558c73605d388504581b62698bc77e2ba786c5f2e3160a4c8b75f36044d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 7 13:29:39.079: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 7 13:29:41.084: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:29:41.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6884" for this suite. 
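Both "Saw ... event for our pod" checks boil down to listing events with a field selector on the involved object and the reporting source. A sketch using current client-go (the log itself comes from a 1.15-era framework, whose List call took no context argument); the pod name and namespace are the ones from the log:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/fields"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    // Select events by the involved pod's name and the reporting component;
    // swap "kubelet" for "default-scheduler" for the scheduler-side check.
    sel := fields.Set{
        "involvedObject.name": "send-events-9af959bb-89d2-4ba5-b43c-065e79e6dbd0",
        "source":              "kubelet",
    }.AsSelector().String()
    evs, err := cs.CoreV1().Events("events-6884").List(context.TODO(),
        metav1.ListOptions{FieldSelector: sel})
    if err != nil {
        panic(err)
    }
    fmt.Printf("saw %d kubelet events for the pod\n", len(evs.Items))
}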
May 7 13:30:19.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:30:19.216: INFO: namespace events-6884 deletion completed in 38.11582425s • [SLOW TEST:46.258 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:30:19.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5594 STEP: creating a selector STEP: Creating the service pods in kubernetes May 7 13:30:19.285: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 7 13:30:39.406: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.103:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5594 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 13:30:39.406: INFO: >>> kubeConfig: /root/.kube/config I0507 13:30:39.436993 6 log.go:172] (0xc0025e8dc0) (0xc002da7e00) Create stream I0507 13:30:39.437027 6 log.go:172] (0xc0025e8dc0) (0xc002da7e00) Stream added, broadcasting: 1 I0507 13:30:39.439281 6 log.go:172] (0xc0025e8dc0) Reply frame received for 1 I0507 13:30:39.439332 6 log.go:172] (0xc0025e8dc0) (0xc000e205a0) Create stream I0507 13:30:39.439349 6 log.go:172] (0xc0025e8dc0) (0xc000e205a0) Stream added, broadcasting: 3 I0507 13:30:39.440426 6 log.go:172] (0xc0025e8dc0) Reply frame received for 3 I0507 13:30:39.440470 6 log.go:172] (0xc0025e8dc0) (0xc002aa41e0) Create stream I0507 13:30:39.440482 6 log.go:172] (0xc0025e8dc0) (0xc002aa41e0) Stream added, broadcasting: 5 I0507 13:30:39.441541 6 log.go:172] (0xc0025e8dc0) Reply frame received for 5 I0507 13:30:39.540281 6 log.go:172] (0xc0025e8dc0) Data frame received for 5 I0507 13:30:39.540325 6 log.go:172] (0xc002aa41e0) (5) Data frame handling I0507 13:30:39.540349 6 log.go:172] (0xc0025e8dc0) Data frame received for 3 I0507 13:30:39.540372 6 log.go:172] (0xc000e205a0) (3) Data frame handling I0507 13:30:39.540413 6 log.go:172] (0xc000e205a0) (3) Data frame sent I0507 13:30:39.540436 6 log.go:172] (0xc0025e8dc0) Data frame received for 3 I0507 13:30:39.540447 6 log.go:172] (0xc000e205a0) (3) Data frame handling I0507 13:30:39.542582 6 log.go:172] (0xc0025e8dc0) Data frame received for 1 I0507 13:30:39.542608 6 log.go:172] (0xc002da7e00) (1) Data frame handling I0507 13:30:39.542797 6 log.go:172] 
(0xc002da7e00) (1) Data frame sent I0507 13:30:39.542812 6 log.go:172] (0xc0025e8dc0) (0xc002da7e00) Stream removed, broadcasting: 1 I0507 13:30:39.542842 6 log.go:172] (0xc0025e8dc0) Go away received I0507 13:30:39.543046 6 log.go:172] (0xc0025e8dc0) (0xc002da7e00) Stream removed, broadcasting: 1 I0507 13:30:39.543073 6 log.go:172] (0xc0025e8dc0) (0xc000e205a0) Stream removed, broadcasting: 3 I0507 13:30:39.543084 6 log.go:172] (0xc0025e8dc0) (0xc002aa41e0) Stream removed, broadcasting: 5 May 7 13:30:39.543: INFO: Found all expected endpoints: [netserver-0] May 7 13:30:39.546: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.166:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5594 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 13:30:39.547: INFO: >>> kubeConfig: /root/.kube/config I0507 13:30:39.582642 6 log.go:172] (0xc0025e9810) (0xc0017c40a0) Create stream I0507 13:30:39.582690 6 log.go:172] (0xc0025e9810) (0xc0017c40a0) Stream added, broadcasting: 1 I0507 13:30:39.585417 6 log.go:172] (0xc0025e9810) Reply frame received for 1 I0507 13:30:39.585462 6 log.go:172] (0xc0025e9810) (0xc000e20be0) Create stream I0507 13:30:39.585478 6 log.go:172] (0xc0025e9810) (0xc000e20be0) Stream added, broadcasting: 3 I0507 13:30:39.586461 6 log.go:172] (0xc0025e9810) Reply frame received for 3 I0507 13:30:39.586497 6 log.go:172] (0xc0025e9810) (0xc002aa4280) Create stream I0507 13:30:39.586508 6 log.go:172] (0xc0025e9810) (0xc002aa4280) Stream added, broadcasting: 5 I0507 13:30:39.587341 6 log.go:172] (0xc0025e9810) Reply frame received for 5 I0507 13:30:39.652017 6 log.go:172] (0xc0025e9810) Data frame received for 5 I0507 13:30:39.652067 6 log.go:172] (0xc002aa4280) (5) Data frame handling I0507 13:30:39.652095 6 log.go:172] (0xc0025e9810) Data frame received for 3 I0507 13:30:39.652109 6 log.go:172] (0xc000e20be0) (3) Data frame handling I0507 13:30:39.652125 6 log.go:172] (0xc000e20be0) (3) Data frame sent I0507 13:30:39.652154 6 log.go:172] (0xc0025e9810) Data frame received for 3 I0507 13:30:39.652166 6 log.go:172] (0xc000e20be0) (3) Data frame handling I0507 13:30:39.654289 6 log.go:172] (0xc0025e9810) Data frame received for 1 I0507 13:30:39.654324 6 log.go:172] (0xc0017c40a0) (1) Data frame handling I0507 13:30:39.654339 6 log.go:172] (0xc0017c40a0) (1) Data frame sent I0507 13:30:39.654361 6 log.go:172] (0xc0025e9810) (0xc0017c40a0) Stream removed, broadcasting: 1 I0507 13:30:39.654387 6 log.go:172] (0xc0025e9810) Go away received I0507 13:30:39.654504 6 log.go:172] (0xc0025e9810) (0xc0017c40a0) Stream removed, broadcasting: 1 I0507 13:30:39.654528 6 log.go:172] (0xc0025e9810) (0xc000e20be0) Stream removed, broadcasting: 3 I0507 13:30:39.654541 6 log.go:172] (0xc0025e9810) (0xc002aa4280) Stream removed, broadcasting: 5 May 7 13:30:39.654: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:30:39.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5594" for this suite. 
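
Each "Found all expected endpoints" line above is the result of curl-ing a netserver pod's /hostName endpoint from the host test pod. A minimal sketch of the same probe in Go, assuming the pod IP from the log (10.244.1.103) is reachable from wherever the sketch runs:

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{Timeout: 15 * time.Second} // mirrors curl --max-time 15
    resp, err := client.Get("http://10.244.1.103:8080/hostName")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    // The netserver replies with its own hostname, which the test matches
    // against the expected endpoint list (e.g. netserver-0).
    fmt.Printf("%s -> %s\n", resp.Status, string(body))
}
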
May 7 13:31:03.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:31:03.794: INFO: namespace pod-network-test-5594 deletion completed in 24.136082422s • [SLOW TEST:44.578 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:31:03.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-1c4e9c33-2b86-4a30-aabf-40151327e066 STEP: Creating a pod to test consume configMaps May 7 13:31:03.996: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-68c305b0-13ee-4f24-ab57-780ea69a0042" in namespace "projected-2202" to be "success or failure" May 7 13:31:04.015: INFO: Pod "pod-projected-configmaps-68c305b0-13ee-4f24-ab57-780ea69a0042": Phase="Pending", Reason="", readiness=false. Elapsed: 18.204614ms May 7 13:31:06.021: INFO: Pod "pod-projected-configmaps-68c305b0-13ee-4f24-ab57-780ea69a0042": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024323314s May 7 13:31:08.025: INFO: Pod "pod-projected-configmaps-68c305b0-13ee-4f24-ab57-780ea69a0042": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028858991s STEP: Saw pod success May 7 13:31:08.025: INFO: Pod "pod-projected-configmaps-68c305b0-13ee-4f24-ab57-780ea69a0042" satisfied condition "success or failure" May 7 13:31:08.028: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-68c305b0-13ee-4f24-ab57-780ea69a0042 container projected-configmap-volume-test: STEP: delete the pod May 7 13:31:08.065: INFO: Waiting for pod pod-projected-configmaps-68c305b0-13ee-4f24-ab57-780ea69a0042 to disappear May 7 13:31:08.079: INFO: Pod pod-projected-configmaps-68c305b0-13ee-4f24-ab57-780ea69a0042 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:31:08.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2202" for this suite. 
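
The "with mappings" variant above projects a ConfigMap key to a custom file path inside the container. A sketch of the pod shape involved, with hypothetical names, key, and mount path (the framework generates randomized ones, as seen in the log):

package demo

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildProjectedPod (hypothetical helper) returns a pod whose projected
// ConfigMap volume remaps key "data-1" to the file "path/to/data-2".
func buildProjectedPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
                                // The "mapping": key data-1 appears as path/to/data-2.
                                Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-configmap-volume",
                    MountPath: "/etc/projected-configmap-volume",
                }},
            }},
        },
    }
}
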
May 7 13:31:14.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:31:14.174: INFO: namespace projected-2202 deletion completed in 6.08818678s • [SLOW TEST:10.379 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:31:14.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 13:31:18.349: INFO: Waiting up to 5m0s for pod "client-envvars-efe2700c-8f59-4b14-bb5a-55fb410a7d6c" in namespace "pods-8792" to be "success or failure" May 7 13:31:18.369: INFO: Pod "client-envvars-efe2700c-8f59-4b14-bb5a-55fb410a7d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.129516ms May 7 13:31:21.045: INFO: Pod "client-envvars-efe2700c-8f59-4b14-bb5a-55fb410a7d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.696012268s May 7 13:31:23.049: INFO: Pod "client-envvars-efe2700c-8f59-4b14-bb5a-55fb410a7d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.700383842s May 7 13:31:25.054: INFO: Pod "client-envvars-efe2700c-8f59-4b14-bb5a-55fb410a7d6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.704699432s STEP: Saw pod success May 7 13:31:25.054: INFO: Pod "client-envvars-efe2700c-8f59-4b14-bb5a-55fb410a7d6c" satisfied condition "success or failure" May 7 13:31:25.056: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-efe2700c-8f59-4b14-bb5a-55fb410a7d6c container env3cont: STEP: delete the pod May 7 13:31:25.166: INFO: Waiting for pod client-envvars-efe2700c-8f59-4b14-bb5a-55fb410a7d6c to disappear May 7 13:31:25.181: INFO: Pod client-envvars-efe2700c-8f59-4b14-bb5a-55fb410a7d6c no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:31:25.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8792" for this suite. 
May 7 13:32:15.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:32:15.316: INFO: namespace pods-8792 deletion completed in 50.131391578s • [SLOW TEST:61.142 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:32:15.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-98fe7080-f847-40b1-947f-d314c69d16b7 STEP: Creating a pod to test consume configMaps May 7 13:32:15.447: INFO: Waiting up to 5m0s for pod "pod-configmaps-4b1b2c25-db32-4fd7-b6cb-1fd0fc4a512b" in namespace "configmap-9734" to be "success or failure" May 7 13:32:15.451: INFO: Pod "pod-configmaps-4b1b2c25-db32-4fd7-b6cb-1fd0fc4a512b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.941364ms May 7 13:32:17.455: INFO: Pod "pod-configmaps-4b1b2c25-db32-4fd7-b6cb-1fd0fc4a512b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0078196s May 7 13:32:19.460: INFO: Pod "pod-configmaps-4b1b2c25-db32-4fd7-b6cb-1fd0fc4a512b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012489421s STEP: Saw pod success May 7 13:32:19.460: INFO: Pod "pod-configmaps-4b1b2c25-db32-4fd7-b6cb-1fd0fc4a512b" satisfied condition "success or failure" May 7 13:32:19.463: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-4b1b2c25-db32-4fd7-b6cb-1fd0fc4a512b container configmap-volume-test: STEP: delete the pod May 7 13:32:19.532: INFO: Waiting for pod pod-configmaps-4b1b2c25-db32-4fd7-b6cb-1fd0fc4a512b to disappear May 7 13:32:19.540: INFO: Pod pod-configmaps-4b1b2c25-db32-4fd7-b6cb-1fd0fc4a512b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:32:19.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9734" for this suite. 
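
The "as non-root" variant above is the same ConfigMap-volume pod plus a pod-level SecurityContext asking the kubelet to run the container under an unprivileged UID. A sketch with illustrative names and UID (not the generated values in the log):

package demo

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nonRootConfigMapPod() *corev1.Pod {
    uid := int64(1000) // any non-zero UID satisfies "non-root"
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
        Spec: corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            RestartPolicy:   corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "configmap-volume-test",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"cat", "/etc/configmap-volume/data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "configmap-volume",
                    MountPath: "/etc/configmap-volume",
                }},
            }},
        },
    }
}
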
May 7 13:32:25.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:32:25.645: INFO: namespace configmap-9734 deletion completed in 6.100501245s • [SLOW TEST:10.328 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:32:25.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 7 13:32:25.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-8645' May 7 13:32:25.835: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 7 13:32:25.835: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 May 7 13:32:29.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8645' May 7 13:32:30.014: INFO: stderr: "" May 7 13:32:30.014: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:32:30.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8645" for this suite. 
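
The deprecated generator invoked above (kubectl run --generator=deployment/apps.v1) produces an apps/v1 Deployment keyed on a run=<name> label. A rough sketch of that shape, under the assumption that the generator's selector/label convention is as described; field values mirror the log but are illustrative:

package demo

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nginxDeployment() *appsv1.Deployment {
    labels := map[string]string{"run": "e2e-test-nginx-deployment"}
    replicas := int32(1)
    return &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment", Labels: labels},
        Spec: appsv1.DeploymentSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "e2e-test-nginx-deployment",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
}
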
May 7 13:32:36.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:32:36.143: INFO: namespace kubectl-8645 deletion completed in 6.124665851s • [SLOW TEST:10.498 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:32:36.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod May 7 13:32:40.289: INFO: Pod pod-hostip-114442b0-950f-444a-a480-ec3c56c5cb6b has hostIP: 172.17.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:32:40.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2087" for this suite. 
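
The host-IP check above reads the pod back until status.hostIP is populated, which happens once the pod is bound to a node. A minimal polling sketch, with placeholder names and the context-free client-go signatures matching this v1.15 cluster (the framework itself waits up to 5m):

package main

import (
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(config)

    for i := 0; i < 30; i++ {
        pod, err := client.CoreV1().Pods("default").Get("pod-hostip-demo", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if pod.Status.HostIP != "" {
            fmt.Println("hostIP:", pod.Status.HostIP) // e.g. 172.17.0.5 in the log
            return
        }
        time.Sleep(2 * time.Second)
    }
    panic("pod never reported a hostIP")
}
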
May 7 13:33:02.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:33:02.384: INFO: namespace pods-2087 deletion completed in 22.092030454s • [SLOW TEST:26.241 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:33:02.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-czvk STEP: Creating a pod to test atomic-volume-subpath May 7 13:33:02.501: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-czvk" in namespace "subpath-4171" to be "success or failure" May 7 13:33:02.518: INFO: Pod "pod-subpath-test-configmap-czvk": Phase="Pending", Reason="", readiness=false. Elapsed: 16.643494ms May 7 13:33:04.522: INFO: Pod "pod-subpath-test-configmap-czvk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020413412s May 7 13:33:06.526: INFO: Pod "pod-subpath-test-configmap-czvk": Phase="Running", Reason="", readiness=true. Elapsed: 4.025201744s May 7 13:33:08.531: INFO: Pod "pod-subpath-test-configmap-czvk": Phase="Running", Reason="", readiness=true. Elapsed: 6.029632376s May 7 13:33:10.535: INFO: Pod "pod-subpath-test-configmap-czvk": Phase="Running", Reason="", readiness=true. Elapsed: 8.033960987s May 7 13:33:12.539: INFO: Pod "pod-subpath-test-configmap-czvk": Phase="Running", Reason="", readiness=true. Elapsed: 10.03809375s May 7 13:33:14.544: INFO: Pod "pod-subpath-test-configmap-czvk": Phase="Running", Reason="", readiness=true. Elapsed: 12.04237633s May 7 13:33:16.548: INFO: Pod "pod-subpath-test-configmap-czvk": Phase="Running", Reason="", readiness=true. Elapsed: 14.046764742s May 7 13:33:18.553: INFO: Pod "pod-subpath-test-configmap-czvk": Phase="Running", Reason="", readiness=true. Elapsed: 16.051746786s May 7 13:33:20.558: INFO: Pod "pod-subpath-test-configmap-czvk": Phase="Running", Reason="", readiness=true. Elapsed: 18.056704442s May 7 13:33:22.562: INFO: Pod "pod-subpath-test-configmap-czvk": Phase="Running", Reason="", readiness=true. Elapsed: 20.060533076s May 7 13:33:24.566: INFO: Pod "pod-subpath-test-configmap-czvk": Phase="Running", Reason="", readiness=true. Elapsed: 22.064950918s May 7 13:33:26.571: INFO: Pod "pod-subpath-test-configmap-czvk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.069905441s STEP: Saw pod success May 7 13:33:26.571: INFO: Pod "pod-subpath-test-configmap-czvk" satisfied condition "success or failure" May 7 13:33:26.574: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-czvk container test-container-subpath-configmap-czvk: STEP: delete the pod May 7 13:33:26.718: INFO: Waiting for pod pod-subpath-test-configmap-czvk to disappear May 7 13:33:26.752: INFO: Pod pod-subpath-test-configmap-czvk no longer exists STEP: Deleting pod pod-subpath-test-configmap-czvk May 7 13:33:26.752: INFO: Deleting pod "pod-subpath-test-configmap-czvk" in namespace "subpath-4171" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:33:26.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4171" for this suite. May 7 13:33:32.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:33:32.852: INFO: namespace subpath-4171 deletion completed in 6.094946039s • [SLOW TEST:30.467 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:33:32.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 13:33:32.921: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:33:37.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5856" for this suite. 
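
The websocket test above drives the pod's exec subresource directly. As a rough equivalent, the sketch below runs a command in a pod with client-go's SPDY executor rather than a raw websocket dial (a deliberate substitution for brevity); pod, container, and command are placeholders:

package main

import (
    "fmt"
    "os"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/tools/remotecommand"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(config)

    // Build the request against the pod's "exec" subresource.
    req := client.CoreV1().RESTClient().Post().
        Resource("pods").Namespace("default").Name("pod-exec-demo").
        SubResource("exec").
        VersionedParams(&corev1.PodExecOptions{
            Container: "main",
            Command:   []string{"cat", "/etc/resolv.conf"},
            Stdout:    true,
            Stderr:    true,
        }, scheme.ParameterCodec)

    exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
    if err != nil {
        panic(err)
    }
    if err := exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
        panic(err)
    }
    fmt.Println()
}
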
May 7 13:34:27.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:34:27.177: INFO: namespace pods-5856 deletion completed in 50.0975106s • [SLOW TEST:54.324 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:34:27.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-lkqk STEP: Creating a pod to test atomic-volume-subpath May 7 13:34:27.258: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lkqk" in namespace "subpath-5535" to be "success or failure" May 7 13:34:27.272: INFO: Pod "pod-subpath-test-projected-lkqk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.208019ms May 7 13:34:29.281: INFO: Pod "pod-subpath-test-projected-lkqk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023351466s May 7 13:34:31.285: INFO: Pod "pod-subpath-test-projected-lkqk": Phase="Running", Reason="", readiness=true. Elapsed: 4.027522931s May 7 13:34:33.289: INFO: Pod "pod-subpath-test-projected-lkqk": Phase="Running", Reason="", readiness=true. Elapsed: 6.031477942s May 7 13:34:35.294: INFO: Pod "pod-subpath-test-projected-lkqk": Phase="Running", Reason="", readiness=true. Elapsed: 8.03595451s May 7 13:34:37.299: INFO: Pod "pod-subpath-test-projected-lkqk": Phase="Running", Reason="", readiness=true. Elapsed: 10.04092398s May 7 13:34:39.303: INFO: Pod "pod-subpath-test-projected-lkqk": Phase="Running", Reason="", readiness=true. Elapsed: 12.045255754s May 7 13:34:41.307: INFO: Pod "pod-subpath-test-projected-lkqk": Phase="Running", Reason="", readiness=true. Elapsed: 14.04946706s May 7 13:34:43.314: INFO: Pod "pod-subpath-test-projected-lkqk": Phase="Running", Reason="", readiness=true. Elapsed: 16.056059094s May 7 13:34:45.317: INFO: Pod "pod-subpath-test-projected-lkqk": Phase="Running", Reason="", readiness=true. Elapsed: 18.059831427s May 7 13:34:47.322: INFO: Pod "pod-subpath-test-projected-lkqk": Phase="Running", Reason="", readiness=true. Elapsed: 20.064415588s May 7 13:34:49.327: INFO: Pod "pod-subpath-test-projected-lkqk": Phase="Running", Reason="", readiness=true. Elapsed: 22.069058528s May 7 13:34:51.331: INFO: Pod "pod-subpath-test-projected-lkqk": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.073205551s May 7 13:34:53.335: INFO: Pod "pod-subpath-test-projected-lkqk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.077166832s STEP: Saw pod success May 7 13:34:53.335: INFO: Pod "pod-subpath-test-projected-lkqk" satisfied condition "success or failure" May 7 13:34:53.337: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-lkqk container test-container-subpath-projected-lkqk: STEP: delete the pod May 7 13:34:53.360: INFO: Waiting for pod pod-subpath-test-projected-lkqk to disappear May 7 13:34:53.364: INFO: Pod pod-subpath-test-projected-lkqk no longer exists STEP: Deleting pod pod-subpath-test-projected-lkqk May 7 13:34:53.364: INFO: Deleting pod "pod-subpath-test-projected-lkqk" in namespace "subpath-5535" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:34:53.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5535" for this suite. May 7 13:34:59.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:34:59.472: INFO: namespace subpath-5535 deletion completed in 6.102419327s • [SLOW TEST:32.295 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:34:59.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 13:34:59.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 7 13:34:59.720: INFO: stderr: "" May 7 13:34:59.720: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:34:59.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1372" for 
this suite. May 7 13:35:05.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:35:05.848: INFO: namespace kubectl-1372 deletion completed in 6.122951476s • [SLOW TEST:6.375 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:35:05.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0507 13:35:15.956694 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 7 13:35:15.956: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:35:15.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2522" for this suite. 
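
The garbage-collector test above deletes the ReplicationController with a non-orphaning propagation policy, so the pods it owns (via ownerReferences) are collected too. A minimal sketch of that delete, with an illustrative RC name and the context-free Delete signature matching this v1.15 cluster:

package main

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(config)

    // Background (or Foreground) propagation means "do not orphan": the GC
    // deletes the RC's dependents instead of stripping their ownerReferences,
    // which Orphan propagation would do.
    policy := metav1.DeletePropagationBackground
    if err := client.CoreV1().ReplicationControllers("default").
        Delete("simpletest-rc", &metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
        panic(err)
    }
}
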
May 7 13:35:21.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:35:22.075: INFO: namespace gc-2522 deletion completed in 6.115355291s • [SLOW TEST:16.227 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:35:22.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-d27e3bed-9a33-4404-9112-16e769d644ef [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:35:22.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-273" for this suite. May 7 13:35:28.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:35:28.342: INFO: namespace configmap-273 deletion completed in 6.145170437s • [SLOW TEST:6.267 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:35:28.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-mj62r in namespace proxy-2680 I0507 13:35:28.509376 6 runners.go:180] Created replication controller with name: proxy-service-mj62r, namespace: proxy-2680, replica count: 1 I0507 13:35:29.559874 6 runners.go:180] proxy-service-mj62r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 13:35:30.560117 6 runners.go:180] proxy-service-mj62r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 
0 runningButNotReady I0507 13:35:31.560327 6 runners.go:180] proxy-service-mj62r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 13:35:32.560556 6 runners.go:180] proxy-service-mj62r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 13:35:33.560755 6 runners.go:180] proxy-service-mj62r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 13:35:34.561002 6 runners.go:180] proxy-service-mj62r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 13:35:35.561398 6 runners.go:180] proxy-service-mj62r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 13:35:36.561603 6 runners.go:180] proxy-service-mj62r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 13:35:37.561790 6 runners.go:180] proxy-service-mj62r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 13:35:38.561981 6 runners.go:180] proxy-service-mj62r Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 7 13:35:38.565: INFO: setup took 10.118767918s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 7 13:35:38.568: INFO: (0) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 3.712588ms) May 7 13:35:38.570: INFO: (0) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:1080/proxy/: test<... (200; 5.612065ms) May 7 13:35:38.571: INFO: (0) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:1080/proxy/: ... (200; 6.133172ms) May 7 13:35:38.572: INFO: (0) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 6.788146ms) May 7 13:35:38.572: INFO: (0) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname2/proxy/: bar (200; 6.911772ms) May 7 13:35:38.572: INFO: (0) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname1/proxy/: foo (200; 7.0534ms) May 7 13:35:38.572: INFO: (0) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 7.120165ms) May 7 13:35:38.572: INFO: (0) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 7.184751ms) May 7 13:35:38.572: INFO: (0) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 7.239014ms) May 7 13:35:38.573: INFO: (0) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 8.54177ms) May 7 13:35:38.575: INFO: (0) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 10.08846ms) May 7 13:35:38.578: INFO: (0) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: ... (200; 3.511323ms) May 7 13:35:38.585: INFO: (1) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 3.802026ms) May 7 13:35:38.586: INFO: (1) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:1080/proxy/: test<... 
(200; 3.979953ms) May 7 13:35:38.586: INFO: (1) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: test (200; 5.308097ms) May 7 13:35:38.587: INFO: (1) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 5.391518ms) May 7 13:35:38.587: INFO: (1) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 5.422315ms) May 7 13:35:38.587: INFO: (1) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 5.625317ms) May 7 13:35:38.587: INFO: (1) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname1/proxy/: foo (200; 5.713769ms) May 7 13:35:38.587: INFO: (1) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 5.610123ms) May 7 13:35:38.587: INFO: (1) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 5.695334ms) May 7 13:35:38.587: INFO: (1) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname1/proxy/: tls baz (200; 5.797162ms) May 7 13:35:38.587: INFO: (1) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname2/proxy/: bar (200; 5.695998ms) May 7 13:35:38.591: INFO: (2) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 3.093785ms) May 7 13:35:38.591: INFO: (2) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 3.493161ms) May 7 13:35:38.592: INFO: (2) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 4.258478ms) May 7 13:35:38.592: INFO: (2) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:1080/proxy/: test<... (200; 4.330714ms) May 7 13:35:38.592: INFO: (2) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:1080/proxy/: ... (200; 4.403985ms) May 7 13:35:38.592: INFO: (2) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname1/proxy/: foo (200; 4.476836ms) May 7 13:35:38.592: INFO: (2) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:462/proxy/: tls qux (200; 4.474949ms) May 7 13:35:38.592: INFO: (2) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 4.455067ms) May 7 13:35:38.592: INFO: (2) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 4.650042ms) May 7 13:35:38.592: INFO: (2) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 4.718991ms) May 7 13:35:38.592: INFO: (2) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname2/proxy/: bar (200; 4.899198ms) May 7 13:35:38.592: INFO: (2) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: test<... (200; 4.269472ms) May 7 13:35:38.597: INFO: (3) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 4.488155ms) May 7 13:35:38.597: INFO: (3) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 4.520395ms) May 7 13:35:38.598: INFO: (3) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 5.443242ms) May 7 13:35:38.598: INFO: (3) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 5.469589ms) May 7 13:35:38.598: INFO: (3) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 5.42022ms) May 7 13:35:38.598: INFO: (3) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:1080/proxy/: ... 
(200; 5.468427ms) May 7 13:35:38.598: INFO: (3) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 5.475763ms) May 7 13:35:38.598: INFO: (3) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 5.530298ms) May 7 13:35:38.598: INFO: (3) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname1/proxy/: foo (200; 5.580179ms) May 7 13:35:38.598: INFO: (3) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 5.578045ms) May 7 13:35:38.598: INFO: (3) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: ... (200; 5.385169ms) May 7 13:35:38.606: INFO: (4) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:462/proxy/: tls qux (200; 5.688284ms) May 7 13:35:38.606: INFO: (4) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 5.7948ms) May 7 13:35:38.606: INFO: (4) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:1080/proxy/: test<... (200; 5.776939ms) May 7 13:35:38.606: INFO: (4) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 5.837669ms) May 7 13:35:38.606: INFO: (4) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 5.842835ms) May 7 13:35:38.606: INFO: (4) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 5.910305ms) May 7 13:35:38.606: INFO: (4) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 5.951092ms) May 7 13:35:38.606: INFO: (4) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: ... (200; 4.296344ms) May 7 13:35:38.611: INFO: (5) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:1080/proxy/: test<... (200; 4.342277ms) May 7 13:35:38.611: INFO: (5) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 4.718354ms) May 7 13:35:38.611: INFO: (5) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname1/proxy/: tls baz (200; 4.768867ms) May 7 13:35:38.611: INFO: (5) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 4.831143ms) May 7 13:35:38.611: INFO: (5) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:462/proxy/: tls qux (200; 4.838836ms) May 7 13:35:38.611: INFO: (5) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 4.879429ms) May 7 13:35:38.611: INFO: (5) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 4.767835ms) May 7 13:35:38.612: INFO: (5) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 5.216119ms) May 7 13:35:38.612: INFO: (5) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname2/proxy/: bar (200; 5.3148ms) May 7 13:35:38.612: INFO: (5) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 5.290501ms) May 7 13:35:38.612: INFO: (5) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 5.316098ms) May 7 13:35:38.615: INFO: (6) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:1080/proxy/: ... 
(200; 3.168677ms) May 7 13:35:38.615: INFO: (6) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 3.227694ms) May 7 13:35:38.615: INFO: (6) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 3.453106ms) May 7 13:35:38.615: INFO: (6) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: test<... (200; 3.518931ms) May 7 13:35:38.615: INFO: (6) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 3.509653ms) May 7 13:35:38.616: INFO: (6) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 3.483545ms) May 7 13:35:38.616: INFO: (6) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 3.493533ms) May 7 13:35:38.616: INFO: (6) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:462/proxy/: tls qux (200; 3.597461ms) May 7 13:35:38.616: INFO: (6) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 3.694428ms) May 7 13:35:38.616: INFO: (6) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname1/proxy/: foo (200; 4.281576ms) May 7 13:35:38.616: INFO: (6) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 4.42504ms) May 7 13:35:38.616: INFO: (6) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 4.483883ms) May 7 13:35:38.616: INFO: (6) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname2/proxy/: bar (200; 4.481791ms) May 7 13:35:38.617: INFO: (6) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname1/proxy/: tls baz (200; 4.575248ms) May 7 13:35:38.617: INFO: (6) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 4.496331ms) May 7 13:35:38.620: INFO: (7) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:462/proxy/: tls qux (200; 2.706691ms) May 7 13:35:38.620: INFO: (7) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:1080/proxy/: test<... (200; 3.016494ms) May 7 13:35:38.620: INFO: (7) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: test (200; 3.078669ms) May 7 13:35:38.620: INFO: (7) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 3.478054ms) May 7 13:35:38.620: INFO: (7) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 3.426366ms) May 7 13:35:38.620: INFO: (7) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 3.281358ms) May 7 13:35:38.620: INFO: (7) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:1080/proxy/: ... 
(200; 3.277799ms) May 7 13:35:38.620: INFO: (7) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 3.512736ms) May 7 13:35:38.622: INFO: (7) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 5.06162ms) May 7 13:35:38.622: INFO: (7) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 5.049519ms) May 7 13:35:38.622: INFO: (7) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname2/proxy/: bar (200; 5.319336ms) May 7 13:35:38.622: INFO: (7) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 5.417645ms) May 7 13:35:38.622: INFO: (7) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname1/proxy/: foo (200; 5.395009ms) May 7 13:35:38.622: INFO: (7) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 5.489293ms) May 7 13:35:38.622: INFO: (7) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname1/proxy/: tls baz (200; 5.538424ms) May 7 13:35:38.626: INFO: (8) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 3.451557ms) May 7 13:35:38.626: INFO: (8) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname1/proxy/: tls baz (200; 3.793223ms) May 7 13:35:38.626: INFO: (8) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 3.723054ms) May 7 13:35:38.627: INFO: (8) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 4.149658ms) May 7 13:35:38.627: INFO: (8) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:462/proxy/: tls qux (200; 4.231147ms) May 7 13:35:38.627: INFO: (8) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:1080/proxy/: ... (200; 4.396037ms) May 7 13:35:38.627: INFO: (8) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: test<... (200; 4.687178ms) May 7 13:35:38.627: INFO: (8) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 4.760317ms) May 7 13:35:38.627: INFO: (8) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 4.713893ms) May 7 13:35:38.627: INFO: (8) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 4.695537ms) May 7 13:35:38.627: INFO: (8) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname1/proxy/: foo (200; 4.98649ms) May 7 13:35:38.627: INFO: (8) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 4.979116ms) May 7 13:35:38.628: INFO: (8) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 5.140064ms) May 7 13:35:38.628: INFO: (8) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname2/proxy/: bar (200; 5.157479ms) May 7 13:35:38.628: INFO: (8) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 5.152101ms) May 7 13:35:38.630: INFO: (9) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 1.949399ms) May 7 13:35:38.631: INFO: (9) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:1080/proxy/: test<... 
(200; 3.261047ms) May 7 13:35:38.631: INFO: (9) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 3.452547ms) May 7 13:35:38.633: INFO: (9) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: ... (200; 5.34565ms) May 7 13:35:38.633: INFO: (9) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:462/proxy/: tls qux (200; 5.166335ms) May 7 13:35:38.633: INFO: (9) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 4.92064ms) May 7 13:35:38.633: INFO: (9) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 5.515668ms) May 7 13:35:38.634: INFO: (9) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname1/proxy/: foo (200; 5.902865ms) May 7 13:35:38.634: INFO: (9) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 5.761808ms) May 7 13:35:38.634: INFO: (9) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname2/proxy/: bar (200; 6.174229ms) May 7 13:35:38.634: INFO: (9) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 5.892803ms) May 7 13:35:38.634: INFO: (9) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname1/proxy/: tls baz (200; 5.983895ms) May 7 13:35:38.637: INFO: (10) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:462/proxy/: tls qux (200; 3.200351ms) May 7 13:35:38.638: INFO: (10) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: test<... (200; 4.60298ms) May 7 13:35:38.639: INFO: (10) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 4.720754ms) May 7 13:35:38.639: INFO: (10) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 4.775629ms) May 7 13:35:38.639: INFO: (10) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 4.539535ms) May 7 13:35:38.639: INFO: (10) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 4.748352ms) May 7 13:35:38.639: INFO: (10) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 4.598747ms) May 7 13:35:38.639: INFO: (10) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:1080/proxy/: ... 
(200; 4.656199ms) May 7 13:35:38.639: INFO: (10) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname2/proxy/: bar (200; 5.200033ms) May 7 13:35:38.640: INFO: (10) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 5.397196ms) May 7 13:35:38.640: INFO: (10) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 5.32598ms) May 7 13:35:38.640: INFO: (10) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname1/proxy/: tls baz (200; 5.412897ms) May 7 13:35:38.640: INFO: (10) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 5.472065ms) May 7 13:35:38.640: INFO: (10) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname1/proxy/: foo (200; 5.497815ms) May 7 13:35:38.644: INFO: (11) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 3.678085ms) May 7 13:35:38.644: INFO: (11) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 3.817902ms) May 7 13:35:38.644: INFO: (11) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:1080/proxy/: test<... (200; 3.805681ms) May 7 13:35:38.644: INFO: (11) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:1080/proxy/: ... (200; 3.785367ms) May 7 13:35:38.644: INFO: (11) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:462/proxy/: tls qux (200; 3.806444ms) May 7 13:35:38.645: INFO: (11) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 4.857062ms) May 7 13:35:38.645: INFO: (11) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname1/proxy/: foo (200; 4.802285ms) May 7 13:35:38.645: INFO: (11) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: test (200; 5.370969ms) May 7 13:35:38.646: INFO: (11) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname1/proxy/: tls baz (200; 5.471471ms) May 7 13:35:38.646: INFO: (11) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 5.438528ms) May 7 13:35:38.648: INFO: (12) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 2.833538ms) May 7 13:35:38.649: INFO: (12) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:1080/proxy/: test<... (200; 3.289192ms) May 7 13:35:38.650: INFO: (12) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:1080/proxy/: ... 
(200; 3.991032ms) May 7 13:35:38.650: INFO: (12) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 4.71235ms) May 7 13:35:38.650: INFO: (12) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 4.730653ms) May 7 13:35:38.650: INFO: (12) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 4.664864ms) May 7 13:35:38.650: INFO: (12) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 4.824241ms) May 7 13:35:38.650: INFO: (12) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname2/proxy/: bar (200; 4.848273ms) May 7 13:35:38.650: INFO: (12) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:462/proxy/: tls qux (200; 4.705331ms) May 7 13:35:38.650: INFO: (12) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 4.74456ms) May 7 13:35:38.650: INFO: (12) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: test (200; 4.879293ms) May 7 13:35:38.655: INFO: (13) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 4.109981ms) May 7 13:35:38.655: INFO: (13) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 4.163956ms) May 7 13:35:38.655: INFO: (13) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:1080/proxy/: test<... (200; 4.208196ms) May 7 13:35:38.655: INFO: (13) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 4.205771ms) May 7 13:35:38.655: INFO: (13) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 4.223203ms) May 7 13:35:38.655: INFO: (13) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: ... (200; 4.223626ms) May 7 13:35:38.656: INFO: (13) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 4.937478ms) May 7 13:35:38.656: INFO: (13) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname2/proxy/: bar (200; 4.988377ms) May 7 13:35:38.656: INFO: (13) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 4.972547ms) May 7 13:35:38.656: INFO: (13) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 5.054924ms) May 7 13:35:38.656: INFO: (13) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 5.044172ms) May 7 13:35:38.656: INFO: (13) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 5.176814ms) May 7 13:35:38.656: INFO: (13) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname1/proxy/: foo (200; 5.168607ms) May 7 13:35:38.656: INFO: (13) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname1/proxy/: tls baz (200; 5.292051ms) May 7 13:35:38.659: INFO: (14) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 3.422646ms) May 7 13:35:38.659: INFO: (14) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:1080/proxy/: ... 
(200; 3.43809ms) May 7 13:35:38.659: INFO: (14) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 3.483864ms) May 7 13:35:38.660: INFO: (14) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 3.820137ms) May 7 13:35:38.660: INFO: (14) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 4.094926ms) May 7 13:35:38.660: INFO: (14) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 4.196658ms) May 7 13:35:38.660: INFO: (14) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 4.154082ms) May 7 13:35:38.660: INFO: (14) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname2/proxy/: bar (200; 4.302362ms) May 7 13:35:38.660: INFO: (14) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 4.201729ms) May 7 13:35:38.660: INFO: (14) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 4.27624ms) May 7 13:35:38.660: INFO: (14) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: test<... (200; 4.51487ms) May 7 13:35:38.663: INFO: (15) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: test<... (200; 2.676495ms) May 7 13:35:38.664: INFO: (15) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 2.742714ms) May 7 13:35:38.665: INFO: (15) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 3.73932ms) May 7 13:35:38.665: INFO: (15) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 3.914203ms) May 7 13:35:38.667: INFO: (15) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:1080/proxy/: ... 
(200; 5.941731ms) May 7 13:35:38.667: INFO: (15) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 6.313701ms) May 7 13:35:38.667: INFO: (15) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 6.374371ms) May 7 13:35:38.667: INFO: (15) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 6.341126ms) May 7 13:35:38.667: INFO: (15) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname2/proxy/: bar (200; 6.4031ms) May 7 13:35:38.667: INFO: (15) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname1/proxy/: tls baz (200; 6.457292ms) May 7 13:35:38.667: INFO: (15) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 6.403312ms) May 7 13:35:38.667: INFO: (15) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:462/proxy/: tls qux (200; 6.655337ms) May 7 13:35:38.667: INFO: (15) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname1/proxy/: foo (200; 6.481425ms) May 7 13:35:38.667: INFO: (15) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 6.417908ms) May 7 13:35:38.667: INFO: (15) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 6.472705ms) May 7 13:35:38.670: INFO: (16) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 2.344557ms) May 7 13:35:38.670: INFO: (16) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 2.765898ms) May 7 13:35:38.670: INFO: (16) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 2.795737ms) May 7 13:35:38.671: INFO: (16) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:1080/proxy/: ... (200; 3.206127ms) May 7 13:35:38.671: INFO: (16) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 3.483587ms) May 7 13:35:38.671: INFO: (16) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:462/proxy/: tls qux (200; 3.732446ms) May 7 13:35:38.671: INFO: (16) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: test<... (200; 6.080779ms) May 7 13:35:38.674: INFO: (16) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 6.172753ms) May 7 13:35:38.674: INFO: (16) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 6.166542ms) May 7 13:35:38.674: INFO: (16) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname1/proxy/: foo (200; 6.213348ms) May 7 13:35:38.674: INFO: (16) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 6.152906ms) May 7 13:35:38.674: INFO: (16) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 6.37415ms) May 7 13:35:38.674: INFO: (16) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 6.659171ms) May 7 13:35:38.677: INFO: (17) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 3.311421ms) May 7 13:35:38.678: INFO: (17) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 3.386324ms) May 7 13:35:38.678: INFO: (17) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: ... 
(200; 4.00069ms) May 7 13:35:38.678: INFO: (17) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 4.018906ms) May 7 13:35:38.679: INFO: (17) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname2/proxy/: bar (200; 4.459683ms) May 7 13:35:38.679: INFO: (17) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:1080/proxy/: test<... (200; 4.467556ms) May 7 13:35:38.679: INFO: (17) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname1/proxy/: foo (200; 4.601604ms) May 7 13:35:38.679: INFO: (17) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 4.602734ms) May 7 13:35:38.679: INFO: (17) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 4.67868ms) May 7 13:35:38.679: INFO: (17) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 4.701819ms) May 7 13:35:38.679: INFO: (17) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname1/proxy/: tls baz (200; 5.026519ms) May 7 13:35:38.680: INFO: (17) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 5.497551ms) May 7 13:35:38.680: INFO: (17) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 5.658498ms) May 7 13:35:38.683: INFO: (18) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 3.175316ms) May 7 13:35:38.683: INFO: (18) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 3.30134ms) May 7 13:35:38.683: INFO: (18) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: ... (200; 3.751508ms) May 7 13:35:38.684: INFO: (18) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 3.780026ms) May 7 13:35:38.684: INFO: (18) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:1080/proxy/: test<... (200; 3.95401ms) May 7 13:35:38.684: INFO: (18) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 4.007192ms) May 7 13:35:38.685: INFO: (18) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname1/proxy/: foo (200; 5.000526ms) May 7 13:35:38.685: INFO: (18) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname2/proxy/: bar (200; 5.073225ms) May 7 13:35:38.685: INFO: (18) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 5.373584ms) May 7 13:35:38.685: INFO: (18) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 5.418671ms) May 7 13:35:38.685: INFO: (18) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 5.446237ms) May 7 13:35:38.685: INFO: (18) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname1/proxy/: tls baz (200; 5.538871ms) May 7 13:35:38.688: INFO: (19) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:1080/proxy/: ... 
(200; 2.912807ms) May 7 13:35:38.689: INFO: (19) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:462/proxy/: tls qux (200; 3.520627ms) May 7 13:35:38.689: INFO: (19) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 3.654249ms) May 7 13:35:38.690: INFO: (19) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:460/proxy/: tls baz (200; 3.950509ms) May 7 13:35:38.690: INFO: (19) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname2/proxy/: tls qux (200; 4.705438ms) May 7 13:35:38.690: INFO: (19) /api/v1/namespaces/proxy-2680/services/https:proxy-service-mj62r:tlsportname1/proxy/: tls baz (200; 4.72185ms) May 7 13:35:38.691: INFO: (19) /api/v1/namespaces/proxy-2680/services/http:proxy-service-mj62r:portname1/proxy/: foo (200; 5.275711ms) May 7 13:35:38.691: INFO: (19) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx/proxy/: test (200; 5.153693ms) May 7 13:35:38.691: INFO: (19) /api/v1/namespaces/proxy-2680/services/proxy-service-mj62r:portname2/proxy/: bar (200; 5.131595ms) May 7 13:35:38.691: INFO: (19) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:1080/proxy/: test<... (200; 5.220845ms) May 7 13:35:38.691: INFO: (19) /api/v1/namespaces/proxy-2680/pods/proxy-service-mj62r-hg9hx:160/proxy/: foo (200; 5.205905ms) May 7 13:35:38.691: INFO: (19) /api/v1/namespaces/proxy-2680/pods/http:proxy-service-mj62r-hg9hx:162/proxy/: bar (200; 5.224829ms) May 7 13:35:38.691: INFO: (19) /api/v1/namespaces/proxy-2680/pods/https:proxy-service-mj62r-hg9hx:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all May 7 13:35:58.105: INFO: Waiting up to 5m0s for pod "client-containers-6f4ef8e6-130c-41d0-b269-dda64ab39c37" in namespace "containers-8893" to be "success or failure" May 7 13:35:58.151: INFO: Pod "client-containers-6f4ef8e6-130c-41d0-b269-dda64ab39c37": Phase="Pending", Reason="", readiness=false. Elapsed: 45.642876ms May 7 13:36:00.154: INFO: Pod "client-containers-6f4ef8e6-130c-41d0-b269-dda64ab39c37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048815177s May 7 13:36:02.158: INFO: Pod "client-containers-6f4ef8e6-130c-41d0-b269-dda64ab39c37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052816688s STEP: Saw pod success May 7 13:36:02.158: INFO: Pod "client-containers-6f4ef8e6-130c-41d0-b269-dda64ab39c37" satisfied condition "success or failure" May 7 13:36:02.160: INFO: Trying to get logs from node iruya-worker pod client-containers-6f4ef8e6-130c-41d0-b269-dda64ab39c37 container test-container: STEP: delete the pod May 7 13:36:02.197: INFO: Waiting for pod client-containers-6f4ef8e6-130c-41d0-b269-dda64ab39c37 to disappear May 7 13:36:02.209: INFO: Pod client-containers-6f4ef8e6-130c-41d0-b269-dda64ab39c37 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:36:02.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8893" for this suite. 
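For readers unfamiliar with the Docker Containers spec above: overriding an image's default command and arguments maps to the container-level command field (replaces the image's ENTRYPOINT) and args field (replaces the image's CMD). A minimal sketch of such a pod follows; the pod name and the echoed strings are illustrative assumptions, not the suite's actual fixture:

apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo          # hypothetical; the suite generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]             # replaces the image's ENTRYPOINT
    args: ["override", "all"]          # replaces the image's CMD

The pod runs to completion and its log contains the overridden output, which is what the "success or failure" wait above checks.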
May 7 13:36:08.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:36:08.307: INFO: namespace containers-8893 deletion completed in 6.094352748s • [SLOW TEST:10.246 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:36:08.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-f9769ec8-5c0b-46da-9df3-cd1a5a37255d STEP: Creating a pod to test consume configMaps May 7 13:36:08.447: INFO: Waiting up to 5m0s for pod "pod-configmaps-df4d1c6d-a44c-4918-a3ed-a258d0464372" in namespace "configmap-5758" to be "success or failure" May 7 13:36:08.455: INFO: Pod "pod-configmaps-df4d1c6d-a44c-4918-a3ed-a258d0464372": Phase="Pending", Reason="", readiness=false. Elapsed: 8.224521ms May 7 13:36:10.460: INFO: Pod "pod-configmaps-df4d1c6d-a44c-4918-a3ed-a258d0464372": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012656146s May 7 13:36:12.464: INFO: Pod "pod-configmaps-df4d1c6d-a44c-4918-a3ed-a258d0464372": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017110261s STEP: Saw pod success May 7 13:36:12.464: INFO: Pod "pod-configmaps-df4d1c6d-a44c-4918-a3ed-a258d0464372" satisfied condition "success or failure" May 7 13:36:12.467: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-df4d1c6d-a44c-4918-a3ed-a258d0464372 container configmap-volume-test: STEP: delete the pod May 7 13:36:12.508: INFO: Waiting for pod pod-configmaps-df4d1c6d-a44c-4918-a3ed-a258d0464372 to disappear May 7 13:36:12.511: INFO: Pod pod-configmaps-df4d1c6d-a44c-4918-a3ed-a258d0464372 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:36:12.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5758" for this suite. 
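The defaultMode behaviour the ConfigMap spec above verifies can be reproduced with an ordinary ConfigMap volume. A sketch under the assumption of a single data key; the object names are hypothetical (the suite's carry UUID suffixes), though the container name matches the one the log pulls logs from:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "ls -l /etc/configmap-volume"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-demo
      defaultMode: 0400                # octal; every projected file becomes owner-read-only (-r--------)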
May 7 13:36:18.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:36:18.603: INFO: namespace configmap-5758 deletion completed in 6.087909877s • [SLOW TEST:10.296 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:36:18.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 13:36:18.654: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:36:19.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1198" for this suite. 
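The CRD create/delete round trip above needs nothing more than a CustomResourceDefinition object; on the v1.15 server under test that means the apiextensions.k8s.io/v1beta1 schema. The group and kind below are illustrative assumptions:

apiVersion: apiextensions.k8s.io/v1beta1   # v1beta1 is the CRD API available on this v1.15 server
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com          # must be <plural>.<group>
spec:
  group: mygroup.example.com
  version: v1
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
    listKind: NoxuList

Creating this registers a new REST endpoint under /apis/mygroup.example.com/v1/, and deleting it removes the endpoint again; that round trip is all the spec asserts.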
May 7 13:36:25.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:36:25.849: INFO: namespace custom-resource-definition-1198 deletion completed in 6.107608193s • [SLOW TEST:7.245 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:36:25.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 13:36:25.954: INFO: Creating ReplicaSet my-hostname-basic-72eb2c6c-3beb-400c-9e89-696224dcd118 May 7 13:36:25.978: INFO: Pod name my-hostname-basic-72eb2c6c-3beb-400c-9e89-696224dcd118: Found 0 pods out of 1 May 7 13:36:30.983: INFO: Pod name my-hostname-basic-72eb2c6c-3beb-400c-9e89-696224dcd118: Found 1 pods out of 1 May 7 13:36:30.983: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-72eb2c6c-3beb-400c-9e89-696224dcd118" is running May 7 13:36:30.986: INFO: Pod "my-hostname-basic-72eb2c6c-3beb-400c-9e89-696224dcd118-jqb2k" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 13:36:26 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 13:36:29 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 13:36:29 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 13:36:25 +0000 UTC Reason: Message:}]) May 7 13:36:30.986: INFO: Trying to dial the pod May 7 13:36:35.998: INFO: Controller my-hostname-basic-72eb2c6c-3beb-400c-9e89-696224dcd118: Got expected result from replica 1 [my-hostname-basic-72eb2c6c-3beb-400c-9e89-696224dcd118-jqb2k]: "my-hostname-basic-72eb2c6c-3beb-400c-9e89-696224dcd118-jqb2k", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:36:35.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7060" for this suite. 
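The ReplicaSet spec above boils down to one replica of a pod whose container answers HTTP requests with its own hostname, so each replica's response can be matched against the pod name the controller reports. A sketch; the image tag and port are assumptions based on the serve-hostname test image this suite generally uses:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-demo        # the suite appends a UUID to this prefix
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic-demo
  template:
    metadata:
      labels:
        name: my-hostname-basic-demo
    spec:
      containers:
      - name: my-hostname-basic-demo
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed tag
        ports:
        - containerPort: 9376         # the port serve-hostname conventionally listens on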
May 7 13:36:42.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:36:42.091: INFO: namespace replicaset-7060 deletion completed in 6.090603974s • [SLOW TEST:16.242 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:36:42.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium May 7 13:36:42.176: INFO: Waiting up to 5m0s for pod "pod-962f8d57-203f-4bc9-807c-361a5f64a68a" in namespace "emptydir-2141" to be "success or failure" May 7 13:36:42.217: INFO: Pod "pod-962f8d57-203f-4bc9-807c-361a5f64a68a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.825506ms May 7 13:36:44.396: INFO: Pod "pod-962f8d57-203f-4bc9-807c-361a5f64a68a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220412002s May 7 13:36:46.401: INFO: Pod "pod-962f8d57-203f-4bc9-807c-361a5f64a68a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.225051328s STEP: Saw pod success May 7 13:36:46.401: INFO: Pod "pod-962f8d57-203f-4bc9-807c-361a5f64a68a" satisfied condition "success or failure" May 7 13:36:46.405: INFO: Trying to get logs from node iruya-worker pod pod-962f8d57-203f-4bc9-807c-361a5f64a68a container test-container: STEP: delete the pod May 7 13:36:46.429: INFO: Waiting for pod pod-962f8d57-203f-4bc9-807c-361a5f64a68a to disappear May 7 13:36:46.432: INFO: Pod pod-962f8d57-203f-4bc9-807c-361a5f64a68a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:36:46.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2141" for this suite. 
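The default-medium emptyDir behaviour checked above: omitting medium gives a scratch directory backed by node storage, created world-writable. A minimal sketch with busybox standing in for the suite's mount-test image (the pod name is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-default-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "stat -c '%a %F' /test-volume"]   # prints e.g. "777 directory"
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # no medium set: node-disk backed, mode 0777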
May 7 13:36:52.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:36:52.570: INFO: namespace emptydir-2141 deletion completed in 6.135359766s • [SLOW TEST:10.479 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:36:52.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 7 13:36:52.675: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4596,SelfLink:/api/v1/namespaces/watch-4596/configmaps/e2e-watch-test-watch-closed,UID:f12b5409-bfc2-4037-ac35-fce81a882557,ResourceVersion:9534227,Generation:0,CreationTimestamp:2020-05-07 13:36:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 7 13:36:52.675: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4596,SelfLink:/api/v1/namespaces/watch-4596/configmaps/e2e-watch-test-watch-closed,UID:f12b5409-bfc2-4037-ac35-fce81a882557,ResourceVersion:9534228,Generation:0,CreationTimestamp:2020-05-07 13:36:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 7 13:36:52.691: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4596,SelfLink:/api/v1/namespaces/watch-4596/configmaps/e2e-watch-test-watch-closed,UID:f12b5409-bfc2-4037-ac35-fce81a882557,ResourceVersion:9534229,Generation:0,CreationTimestamp:2020-05-07 13:36:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 7 13:36:52.691: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4596,SelfLink:/api/v1/namespaces/watch-4596/configmaps/e2e-watch-test-watch-closed,UID:f12b5409-bfc2-4037-ac35-fce81a882557,ResourceVersion:9534230,Generation:0,CreationTimestamp:2020-05-07 13:36:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:36:52.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4596" for this suite. May 7 13:36:58.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:36:58.778: INFO: namespace watch-4596 deletion completed in 6.08199466s • [SLOW TEST:6.207 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:36:58.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
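Stepping back to the Watchers spec above: the object it drives is a plain ConfigMap, and the log dumps pin down its state exactly. Reconstructed from those dumps (resourceVersion is server-assigned and noted only in the comment):

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-watch-closed
  namespace: watch-4596
  labels:
    watch-this-configmap: watch-closed-and-restarted
data:
  mutation: "2"                       # final state; the value walked 1 -> 2 across MODIFIED events

A client that reopens a watch passing the last resourceVersion it observed (9534228 in the dumps) has the interim MODIFIED and DELETED events replayed, which is exactly the sequence logged above.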
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:37:24.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3082" for this suite. May 7 13:37:31.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:37:31.074: INFO: namespace namespaces-3082 deletion completed in 6.079519717s STEP: Destroying namespace "nsdeletetest-3462" for this suite. May 7 13:37:31.076: INFO: Namespace nsdeletetest-3462 was already deleted STEP: Destroying namespace "nsdeletetest-5138" for this suite. May 7 13:37:37.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:37:37.181: INFO: namespace nsdeletetest-5138 deletion completed in 6.105128418s • [SLOW TEST:38.403 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:37:37.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:37:41.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2890" for this suite. 
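The Kubelet spec just finished exercises a pod whose only container exits non-zero immediately. A minimal sketch (the pod name is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]           # exits 1 at once, so the kubelet records a terminated state

Once it has run, something like kubectl get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}' should print the populated reason (normally Error), which is the field the spec asserts on.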
May 7 13:37:47.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:37:47.378: INFO: namespace kubelet-test-2890 deletion completed in 6.086410554s • [SLOW TEST:10.197 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:37:47.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs May 7 13:37:47.435: INFO: Waiting up to 5m0s for pod "pod-fe429dc6-6133-46ad-81ee-60082c876a5c" in namespace "emptydir-6935" to be "success or failure" May 7 13:37:47.451: INFO: Pod "pod-fe429dc6-6133-46ad-81ee-60082c876a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.449346ms May 7 13:37:49.455: INFO: Pod "pod-fe429dc6-6133-46ad-81ee-60082c876a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020206215s May 7 13:37:51.459: INFO: Pod "pod-fe429dc6-6133-46ad-81ee-60082c876a5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024606647s STEP: Saw pod success May 7 13:37:51.459: INFO: Pod "pod-fe429dc6-6133-46ad-81ee-60082c876a5c" satisfied condition "success or failure" May 7 13:37:51.463: INFO: Trying to get logs from node iruya-worker pod pod-fe429dc6-6133-46ad-81ee-60082c876a5c container test-container: STEP: delete the pod May 7 13:37:51.500: INFO: Waiting for pod pod-fe429dc6-6133-46ad-81ee-60082c876a5c to disappear May 7 13:37:51.505: INFO: Pod pod-fe429dc6-6133-46ad-81ee-60082c876a5c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:37:51.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6935" for this suite. 
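The tmpfs variant just shown differs from the default-medium case only in the volume stanza. A sketch (hypothetical pod name, busybox standing in for the suite's mount-test image):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "stat -c '%a %F' /test-volume; grep ' /test-volume ' /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                  # tmpfs-backed: counts against pod memory, lost on node reboot

The grep line would show the mount as type tmpfs, which is the distinction this spec exists to verify.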
May 7 13:37:57.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:37:57.584: INFO: namespace emptydir-6935 deletion completed in 6.077325017s • [SLOW TEST:10.206 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:37:57.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components May 7 13:37:57.659: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 7 13:37:57.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1608' May 7 13:38:00.681: INFO: stderr: "" May 7 13:38:00.681: INFO: stdout: "service/redis-slave created\n" May 7 13:38:00.681: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 7 13:38:00.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1608' May 7 13:38:01.024: INFO: stderr: "" May 7 13:38:01.024: INFO: stdout: "service/redis-master created\n" May 7 13:38:01.024: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 7 13:38:01.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1608' May 7 13:38:01.320: INFO: stderr: "" May 7 13:38:01.320: INFO: stdout: "service/frontend created\n" May 7 13:38:01.320: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 7 13:38:01.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1608' May 7 13:38:01.582: INFO: stderr: "" May 7 13:38:01.582: INFO: stdout: "deployment.apps/frontend created\n" May 7 13:38:01.582: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 7 13:38:01.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1608' May 7 13:38:01.929: INFO: stderr: "" May 7 13:38:01.929: INFO: stdout: "deployment.apps/redis-master created\n" May 7 13:38:01.929: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 7 13:38:01.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1608' May 7 13:38:02.277: INFO: stderr: "" May 7 13:38:02.277: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app May 7 13:38:02.277: INFO: Waiting for all frontend pods to be Running. May 7 13:38:12.328: INFO: Waiting for frontend to serve content. May 7 13:38:12.347: INFO: Trying to add a new entry to the guestbook. May 7 13:38:12.363: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 7 13:38:12.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1608' May 7 13:38:12.580: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 7 13:38:12.580: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 7 13:38:12.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1608' May 7 13:38:12.763: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 7 13:38:12.763: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 7 13:38:12.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1608' May 7 13:38:12.889: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 7 13:38:12.889: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 7 13:38:12.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1608' May 7 13:38:12.984: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 7 13:38:12.984: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 7 13:38:12.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1608' May 7 13:38:13.093: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 7 13:38:13.093: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 7 13:38:13.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1608' May 7 13:38:13.217: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 7 13:38:13.217: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:38:13.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1608" for this suite. 
May 7 13:38:53.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:38:53.391: INFO: namespace kubectl-1608 deletion completed in 40.157649625s • [SLOW TEST:55.806 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:38:53.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0507 13:39:33.469950 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 7 13:39:33.470: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:39:33.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9369" for this suite. 
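The orphaning behaviour verified above hinges entirely on the delete options, not on the controller itself. A sketch with an illustrative ReplicationController; the name and image are stand-ins, not the suite's fixture:

apiVersion: v1
kind: ReplicationController
metadata:
  name: orphan-demo-rc
spec:
  replicas: 2
  selector:
    name: orphan-demo
  template:
    metadata:
      labels:
        name: orphan-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # assumed image

Deleting it with orphan semantics, e.g. kubectl delete rc orphan-demo-rc --cascade=false (propagationPolicy=Orphan on the API), removes only the controller; the garbage collector must leave the two pods running, which is what the 30-second wait above confirms.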
May 7 13:39:43.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:39:43.570: INFO: namespace gc-9369 deletion completed in 10.097876656s • [SLOW TEST:50.179 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:39:43.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-5a31771c-4302-43b1-b528-4789fba95cae STEP: Creating a pod to test consume secrets May 7 13:39:43.712: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da3150ca-3af5-46da-b732-81c8595093a9" in namespace "projected-2336" to be "success or failure" May 7 13:39:43.728: INFO: Pod "pod-projected-secrets-da3150ca-3af5-46da-b732-81c8595093a9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.06842ms May 7 13:39:45.758: INFO: Pod "pod-projected-secrets-da3150ca-3af5-46da-b732-81c8595093a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046301328s May 7 13:39:47.781: INFO: Pod "pod-projected-secrets-da3150ca-3af5-46da-b732-81c8595093a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069342701s STEP: Saw pod success May 7 13:39:47.782: INFO: Pod "pod-projected-secrets-da3150ca-3af5-46da-b732-81c8595093a9" satisfied condition "success or failure" May 7 13:39:47.784: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-da3150ca-3af5-46da-b732-81c8595093a9 container projected-secret-volume-test: STEP: delete the pod May 7 13:39:47.808: INFO: Waiting for pod pod-projected-secrets-da3150ca-3af5-46da-b732-81c8595093a9 to disappear May 7 13:39:47.812: INFO: Pod pod-projected-secrets-da3150ca-3af5-46da-b732-81c8595093a9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:39:47.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2336" for this suite. 
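The projected-secret spec just completed combines two knobs: a key-to-path mapping and a per-item file mode. A sketch under the assumption of a single key; object names are illustrative (the suite's carry UUID suffixes), though the container name matches the one the log pulls logs from:

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-map-demo
data:
  data-1: dmFsdWUtMQ==                # base64("value-1")
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-demo
          items:
          - key: data-1
            path: new-path-data-1     # the mapping: the key surfaces under this relative path
            mode: 0400                # the per-item mode: -r-------- instead of the volume default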
May 7 13:39:53.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:39:53.989: INFO: namespace projected-2336 deletion completed in 6.173991875s • [SLOW TEST:10.418 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:39:53.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4983 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-4983 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4983 May 7 13:39:54.133: INFO: Found 0 stateful pods, waiting for 1 May 7 13:40:04.138: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 7 13:40:04.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4983 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 7 13:40:04.423: INFO: stderr: "I0507 13:40:04.275908 1202 log.go:172] (0xc000118e70) (0xc0007646e0) Create stream\nI0507 13:40:04.275983 1202 log.go:172] (0xc000118e70) (0xc0007646e0) Stream added, broadcasting: 1\nI0507 13:40:04.279297 1202 log.go:172] (0xc000118e70) Reply frame received for 1\nI0507 13:40:04.279360 1202 log.go:172] (0xc000118e70) (0xc0006d61e0) Create stream\nI0507 13:40:04.279386 1202 log.go:172] (0xc000118e70) (0xc0006d61e0) Stream added, broadcasting: 3\nI0507 13:40:04.280540 1202 log.go:172] (0xc000118e70) Reply frame received for 3\nI0507 13:40:04.280585 1202 log.go:172] (0xc000118e70) (0xc000930000) Create stream\nI0507 13:40:04.280630 1202 log.go:172] (0xc000118e70) (0xc000930000) Stream added, broadcasting: 5\nI0507 13:40:04.281947 1202 log.go:172] (0xc000118e70) Reply frame received for 5\nI0507 13:40:04.359662 1202 log.go:172] (0xc000118e70) Data frame received for 5\nI0507 13:40:04.359689 1202 log.go:172] (0xc000930000) (5) Data frame handling\nI0507 13:40:04.359704 1202 log.go:172] (0xc000930000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html 
/tmp/\nI0507 13:40:04.413733 1202 log.go:172] (0xc000118e70) Data frame received for 3\nI0507 13:40:04.413760 1202 log.go:172] (0xc0006d61e0) (3) Data frame handling\nI0507 13:40:04.413781 1202 log.go:172] (0xc0006d61e0) (3) Data frame sent\nI0507 13:40:04.413972 1202 log.go:172] (0xc000118e70) Data frame received for 3\nI0507 13:40:04.414023 1202 log.go:172] (0xc0006d61e0) (3) Data frame handling\nI0507 13:40:04.414252 1202 log.go:172] (0xc000118e70) Data frame received for 5\nI0507 13:40:04.414287 1202 log.go:172] (0xc000930000) (5) Data frame handling\nI0507 13:40:04.416185 1202 log.go:172] (0xc000118e70) Data frame received for 1\nI0507 13:40:04.416226 1202 log.go:172] (0xc0007646e0) (1) Data frame handling\nI0507 13:40:04.416247 1202 log.go:172] (0xc0007646e0) (1) Data frame sent\nI0507 13:40:04.416269 1202 log.go:172] (0xc000118e70) (0xc0007646e0) Stream removed, broadcasting: 1\nI0507 13:40:04.416497 1202 log.go:172] (0xc000118e70) Go away received\nI0507 13:40:04.416826 1202 log.go:172] (0xc000118e70) (0xc0007646e0) Stream removed, broadcasting: 1\nI0507 13:40:04.416852 1202 log.go:172] (0xc000118e70) (0xc0006d61e0) Stream removed, broadcasting: 3\nI0507 13:40:04.416864 1202 log.go:172] (0xc000118e70) (0xc000930000) Stream removed, broadcasting: 5\n" May 7 13:40:04.423: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 7 13:40:04.423: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 7 13:40:04.426: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 7 13:40:14.430: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 7 13:40:14.430: INFO: Waiting for statefulset status.replicas updated to 0 May 7 13:40:14.471: INFO: POD NODE PHASE GRACE CONDITIONS May 7 13:40:14.471: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 +0000 UTC }] May 7 13:40:14.471: INFO: May 7 13:40:14.471: INFO: StatefulSet ss has not reached scale 3, at 1 May 7 13:40:15.475: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.967433755s May 7 13:40:16.480: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.963603477s May 7 13:40:17.485: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.958443814s May 7 13:40:18.491: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.953371522s May 7 13:40:19.496: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.947775799s May 7 13:40:20.501: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.94251574s May 7 13:40:21.506: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.937067253s May 7 13:40:22.526: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.932742741s May 7 13:40:23.531: INFO: Verifying statefulset ss doesn't scale past 3 for another 912.913424ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4983 May 7 13:40:24.536: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-4983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 7 13:40:24.782: INFO: stderr: "I0507 13:40:24.675355 1222 log.go:172] (0xc0009e62c0) (0xc0008f2640) Create stream\nI0507 13:40:24.675408 1222 log.go:172] (0xc0009e62c0) (0xc0008f2640) Stream added, broadcasting: 1\nI0507 13:40:24.677895 1222 log.go:172] (0xc0009e62c0) Reply frame received for 1\nI0507 13:40:24.677963 1222 log.go:172] (0xc0009e62c0) (0xc0005c41e0) Create stream\nI0507 13:40:24.677984 1222 log.go:172] (0xc0009e62c0) (0xc0005c41e0) Stream added, broadcasting: 3\nI0507 13:40:24.678988 1222 log.go:172] (0xc0009e62c0) Reply frame received for 3\nI0507 13:40:24.679020 1222 log.go:172] (0xc0009e62c0) (0xc0008f26e0) Create stream\nI0507 13:40:24.679031 1222 log.go:172] (0xc0009e62c0) (0xc0008f26e0) Stream added, broadcasting: 5\nI0507 13:40:24.680049 1222 log.go:172] (0xc0009e62c0) Reply frame received for 5\nI0507 13:40:24.775869 1222 log.go:172] (0xc0009e62c0) Data frame received for 3\nI0507 13:40:24.775912 1222 log.go:172] (0xc0005c41e0) (3) Data frame handling\nI0507 13:40:24.775923 1222 log.go:172] (0xc0005c41e0) (3) Data frame sent\nI0507 13:40:24.775932 1222 log.go:172] (0xc0009e62c0) Data frame received for 3\nI0507 13:40:24.775939 1222 log.go:172] (0xc0005c41e0) (3) Data frame handling\nI0507 13:40:24.775965 1222 log.go:172] (0xc0009e62c0) Data frame received for 5\nI0507 13:40:24.775974 1222 log.go:172] (0xc0008f26e0) (5) Data frame handling\nI0507 13:40:24.775986 1222 log.go:172] (0xc0008f26e0) (5) Data frame sent\nI0507 13:40:24.776006 1222 log.go:172] (0xc0009e62c0) Data frame received for 5\nI0507 13:40:24.776027 1222 log.go:172] (0xc0008f26e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0507 13:40:24.778165 1222 log.go:172] (0xc0009e62c0) Data frame received for 1\nI0507 13:40:24.778203 1222 log.go:172] (0xc0008f2640) (1) Data frame handling\nI0507 13:40:24.778223 1222 log.go:172] (0xc0008f2640) (1) Data frame sent\nI0507 13:40:24.778244 1222 log.go:172] (0xc0009e62c0) (0xc0008f2640) Stream removed, broadcasting: 1\nI0507 13:40:24.778314 1222 log.go:172] (0xc0009e62c0) Go away received\nI0507 13:40:24.778706 1222 log.go:172] (0xc0009e62c0) (0xc0008f2640) Stream removed, broadcasting: 1\nI0507 13:40:24.778726 1222 log.go:172] (0xc0009e62c0) (0xc0005c41e0) Stream removed, broadcasting: 3\nI0507 13:40:24.778737 1222 log.go:172] (0xc0009e62c0) (0xc0008f26e0) Stream removed, broadcasting: 5\n" May 7 13:40:24.782: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 7 13:40:24.783: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 7 13:40:24.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4983 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 7 13:40:24.987: INFO: stderr: "I0507 13:40:24.910899 1244 log.go:172] (0xc000134dc0) (0xc0006a2820) Create stream\nI0507 13:40:24.910942 1244 log.go:172] (0xc000134dc0) (0xc0006a2820) Stream added, broadcasting: 1\nI0507 13:40:24.912874 1244 log.go:172] (0xc000134dc0) Reply frame received for 1\nI0507 13:40:24.912917 1244 log.go:172] (0xc000134dc0) (0xc0008e4000) Create stream\nI0507 13:40:24.912932 1244 log.go:172] (0xc000134dc0) (0xc0008e4000) Stream added, broadcasting: 3\nI0507 13:40:24.913881 1244 log.go:172] (0xc000134dc0) Reply frame received for 3\nI0507 
13:40:24.913902 1244 log.go:172] (0xc000134dc0) (0xc0006a28c0) Create stream\nI0507 13:40:24.913909 1244 log.go:172] (0xc000134dc0) (0xc0006a28c0) Stream added, broadcasting: 5\nI0507 13:40:24.914517 1244 log.go:172] (0xc000134dc0) Reply frame received for 5\nI0507 13:40:24.982396 1244 log.go:172] (0xc000134dc0) Data frame received for 5\nI0507 13:40:24.982432 1244 log.go:172] (0xc0006a28c0) (5) Data frame handling\nI0507 13:40:24.982443 1244 log.go:172] (0xc0006a28c0) (5) Data frame sent\nI0507 13:40:24.982452 1244 log.go:172] (0xc000134dc0) Data frame received for 5\nI0507 13:40:24.982460 1244 log.go:172] (0xc0006a28c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0507 13:40:24.982481 1244 log.go:172] (0xc000134dc0) Data frame received for 3\nI0507 13:40:24.982503 1244 log.go:172] (0xc0008e4000) (3) Data frame handling\nI0507 13:40:24.982523 1244 log.go:172] (0xc0008e4000) (3) Data frame sent\nI0507 13:40:24.982534 1244 log.go:172] (0xc000134dc0) Data frame received for 3\nI0507 13:40:24.982542 1244 log.go:172] (0xc0008e4000) (3) Data frame handling\nI0507 13:40:24.983987 1244 log.go:172] (0xc000134dc0) Data frame received for 1\nI0507 13:40:24.984004 1244 log.go:172] (0xc0006a2820) (1) Data frame handling\nI0507 13:40:24.984014 1244 log.go:172] (0xc0006a2820) (1) Data frame sent\nI0507 13:40:24.984025 1244 log.go:172] (0xc000134dc0) (0xc0006a2820) Stream removed, broadcasting: 1\nI0507 13:40:24.984106 1244 log.go:172] (0xc000134dc0) Go away received\nI0507 13:40:24.984246 1244 log.go:172] (0xc000134dc0) (0xc0006a2820) Stream removed, broadcasting: 1\nI0507 13:40:24.984255 1244 log.go:172] (0xc000134dc0) (0xc0008e4000) Stream removed, broadcasting: 3\nI0507 13:40:24.984260 1244 log.go:172] (0xc000134dc0) (0xc0006a28c0) Stream removed, broadcasting: 5\n" May 7 13:40:24.987: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 7 13:40:24.987: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 7 13:40:24.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4983 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 7 13:40:25.191: INFO: stderr: "I0507 13:40:25.110939 1265 log.go:172] (0xc0006c6c60) (0xc000784aa0) Create stream\nI0507 13:40:25.110980 1265 log.go:172] (0xc0006c6c60) (0xc000784aa0) Stream added, broadcasting: 1\nI0507 13:40:25.113299 1265 log.go:172] (0xc0006c6c60) Reply frame received for 1\nI0507 13:40:25.113334 1265 log.go:172] (0xc0006c6c60) (0xc000888000) Create stream\nI0507 13:40:25.113346 1265 log.go:172] (0xc0006c6c60) (0xc000888000) Stream added, broadcasting: 3\nI0507 13:40:25.114359 1265 log.go:172] (0xc0006c6c60) Reply frame received for 3\nI0507 13:40:25.114404 1265 log.go:172] (0xc0006c6c60) (0xc0008880a0) Create stream\nI0507 13:40:25.114422 1265 log.go:172] (0xc0006c6c60) (0xc0008880a0) Stream added, broadcasting: 5\nI0507 13:40:25.115240 1265 log.go:172] (0xc0006c6c60) Reply frame received for 5\nI0507 13:40:25.185851 1265 log.go:172] (0xc0006c6c60) Data frame received for 3\nI0507 13:40:25.185879 1265 log.go:172] (0xc000888000) (3) Data frame handling\nI0507 13:40:25.185887 1265 log.go:172] (0xc000888000) (3) Data frame sent\nI0507 13:40:25.185892 1265 log.go:172] (0xc0006c6c60) Data frame received for 3\nI0507 13:40:25.185896 1265 log.go:172] (0xc000888000) (3) Data frame 
handling\nI0507 13:40:25.185958 1265 log.go:172] (0xc0006c6c60) Data frame received for 5\nI0507 13:40:25.185995 1265 log.go:172] (0xc0008880a0) (5) Data frame handling\nI0507 13:40:25.186020 1265 log.go:172] (0xc0008880a0) (5) Data frame sent\nI0507 13:40:25.186042 1265 log.go:172] (0xc0006c6c60) Data frame received for 5\nI0507 13:40:25.186057 1265 log.go:172] (0xc0008880a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0507 13:40:25.187175 1265 log.go:172] (0xc0006c6c60) Data frame received for 1\nI0507 13:40:25.187196 1265 log.go:172] (0xc000784aa0) (1) Data frame handling\nI0507 13:40:25.187203 1265 log.go:172] (0xc000784aa0) (1) Data frame sent\nI0507 13:40:25.187289 1265 log.go:172] (0xc0006c6c60) (0xc000784aa0) Stream removed, broadcasting: 1\nI0507 13:40:25.187533 1265 log.go:172] (0xc0006c6c60) (0xc000784aa0) Stream removed, broadcasting: 1\nI0507 13:40:25.187547 1265 log.go:172] (0xc0006c6c60) (0xc000888000) Stream removed, broadcasting: 3\nI0507 13:40:25.187555 1265 log.go:172] (0xc0006c6c60) (0xc0008880a0) Stream removed, broadcasting: 5\n" May 7 13:40:25.191: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 7 13:40:25.191: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 7 13:40:25.195: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 7 13:40:35.201: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 7 13:40:35.201: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 7 13:40:35.201: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 7 13:40:35.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4983 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 7 13:40:35.405: INFO: stderr: "I0507 13:40:35.324614 1287 log.go:172] (0xc000116f20) (0xc0006aabe0) Create stream\nI0507 13:40:35.324687 1287 log.go:172] (0xc000116f20) (0xc0006aabe0) Stream added, broadcasting: 1\nI0507 13:40:35.327501 1287 log.go:172] (0xc000116f20) Reply frame received for 1\nI0507 13:40:35.327561 1287 log.go:172] (0xc000116f20) (0xc000714000) Create stream\nI0507 13:40:35.327604 1287 log.go:172] (0xc000116f20) (0xc000714000) Stream added, broadcasting: 3\nI0507 13:40:35.328885 1287 log.go:172] (0xc000116f20) Reply frame received for 3\nI0507 13:40:35.328946 1287 log.go:172] (0xc000116f20) (0xc000952000) Create stream\nI0507 13:40:35.328963 1287 log.go:172] (0xc000116f20) (0xc000952000) Stream added, broadcasting: 5\nI0507 13:40:35.330441 1287 log.go:172] (0xc000116f20) Reply frame received for 5\nI0507 13:40:35.397467 1287 log.go:172] (0xc000116f20) Data frame received for 3\nI0507 13:40:35.397511 1287 log.go:172] (0xc000714000) (3) Data frame handling\nI0507 13:40:35.397536 1287 log.go:172] (0xc000714000) (3) Data frame sent\nI0507 13:40:35.397577 1287 log.go:172] (0xc000116f20) Data frame received for 5\nI0507 13:40:35.397592 1287 log.go:172] (0xc000952000) (5) Data frame handling\nI0507 13:40:35.397613 1287 log.go:172] (0xc000952000) (5) Data frame sent\nI0507 13:40:35.397631 1287 log.go:172] (0xc000116f20) Data frame received for 5\nI0507 13:40:35.397663 1287 log.go:172] (0xc000952000) (5) 
Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0507 13:40:35.397733 1287 log.go:172] (0xc000116f20) Data frame received for 3\nI0507 13:40:35.397759 1287 log.go:172] (0xc000714000) (3) Data frame handling\nI0507 13:40:35.399368 1287 log.go:172] (0xc000116f20) Data frame received for 1\nI0507 13:40:35.399386 1287 log.go:172] (0xc0006aabe0) (1) Data frame handling\nI0507 13:40:35.399395 1287 log.go:172] (0xc0006aabe0) (1) Data frame sent\nI0507 13:40:35.399408 1287 log.go:172] (0xc000116f20) (0xc0006aabe0) Stream removed, broadcasting: 1\nI0507 13:40:35.399573 1287 log.go:172] (0xc000116f20) Go away received\nI0507 13:40:35.399758 1287 log.go:172] (0xc000116f20) (0xc0006aabe0) Stream removed, broadcasting: 1\nI0507 13:40:35.399779 1287 log.go:172] (0xc000116f20) (0xc000714000) Stream removed, broadcasting: 3\nI0507 13:40:35.399791 1287 log.go:172] (0xc000116f20) (0xc000952000) Stream removed, broadcasting: 5\n" May 7 13:40:35.405: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 7 13:40:35.405: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 7 13:40:35.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4983 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 7 13:40:35.673: INFO: stderr: "I0507 13:40:35.554378 1307 log.go:172] (0xc000116dc0) (0xc0008246e0) Create stream\nI0507 13:40:35.554463 1307 log.go:172] (0xc000116dc0) (0xc0008246e0) Stream added, broadcasting: 1\nI0507 13:40:35.558181 1307 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0507 13:40:35.558230 1307 log.go:172] (0xc000116dc0) (0xc0005e0320) Create stream\nI0507 13:40:35.558245 1307 log.go:172] (0xc000116dc0) (0xc0005e0320) Stream added, broadcasting: 3\nI0507 13:40:35.559331 1307 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0507 13:40:35.559362 1307 log.go:172] (0xc000116dc0) (0xc0005e03c0) Create stream\nI0507 13:40:35.559370 1307 log.go:172] (0xc000116dc0) (0xc0005e03c0) Stream added, broadcasting: 5\nI0507 13:40:35.560354 1307 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0507 13:40:35.630423 1307 log.go:172] (0xc000116dc0) Data frame received for 5\nI0507 13:40:35.630455 1307 log.go:172] (0xc0005e03c0) (5) Data frame handling\nI0507 13:40:35.630477 1307 log.go:172] (0xc0005e03c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0507 13:40:35.663470 1307 log.go:172] (0xc000116dc0) Data frame received for 3\nI0507 13:40:35.663501 1307 log.go:172] (0xc0005e0320) (3) Data frame handling\nI0507 13:40:35.663551 1307 log.go:172] (0xc0005e0320) (3) Data frame sent\nI0507 13:40:35.663660 1307 log.go:172] (0xc000116dc0) Data frame received for 3\nI0507 13:40:35.663689 1307 log.go:172] (0xc0005e0320) (3) Data frame handling\nI0507 13:40:35.663721 1307 log.go:172] (0xc000116dc0) Data frame received for 5\nI0507 13:40:35.663751 1307 log.go:172] (0xc0005e03c0) (5) Data frame handling\nI0507 13:40:35.666261 1307 log.go:172] (0xc000116dc0) Data frame received for 1\nI0507 13:40:35.666290 1307 log.go:172] (0xc0008246e0) (1) Data frame handling\nI0507 13:40:35.666317 1307 log.go:172] (0xc0008246e0) (1) Data frame sent\nI0507 13:40:35.666337 1307 log.go:172] (0xc000116dc0) (0xc0008246e0) Stream removed, broadcasting: 1\nI0507 13:40:35.666537 1307 log.go:172] (0xc000116dc0) Go away received\nI0507 13:40:35.666746 1307 log.go:172] (0xc000116dc0) (0xc0008246e0) Stream 
removed, broadcasting: 1\nI0507 13:40:35.666773 1307 log.go:172] (0xc000116dc0) (0xc0005e0320) Stream removed, broadcasting: 3\nI0507 13:40:35.666784 1307 log.go:172] (0xc000116dc0) (0xc0005e03c0) Stream removed, broadcasting: 5\n" May 7 13:40:35.673: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 7 13:40:35.673: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 7 13:40:35.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4983 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 7 13:40:35.942: INFO: stderr: "I0507 13:40:35.813675 1326 log.go:172] (0xc0001166e0) (0xc0009be6e0) Create stream\nI0507 13:40:35.813723 1326 log.go:172] (0xc0001166e0) (0xc0009be6e0) Stream added, broadcasting: 1\nI0507 13:40:35.815815 1326 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0507 13:40:35.815858 1326 log.go:172] (0xc0001166e0) (0xc000650320) Create stream\nI0507 13:40:35.815874 1326 log.go:172] (0xc0001166e0) (0xc000650320) Stream added, broadcasting: 3\nI0507 13:40:35.816924 1326 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0507 13:40:35.816982 1326 log.go:172] (0xc0001166e0) (0xc000918000) Create stream\nI0507 13:40:35.817009 1326 log.go:172] (0xc0001166e0) (0xc000918000) Stream added, broadcasting: 5\nI0507 13:40:35.818079 1326 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0507 13:40:35.889578 1326 log.go:172] (0xc0001166e0) Data frame received for 5\nI0507 13:40:35.889601 1326 log.go:172] (0xc000918000) (5) Data frame handling\nI0507 13:40:35.889614 1326 log.go:172] (0xc000918000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0507 13:40:35.933458 1326 log.go:172] (0xc0001166e0) Data frame received for 3\nI0507 13:40:35.933479 1326 log.go:172] (0xc000650320) (3) Data frame handling\nI0507 13:40:35.933492 1326 log.go:172] (0xc000650320) (3) Data frame sent\nI0507 13:40:35.933499 1326 log.go:172] (0xc0001166e0) Data frame received for 3\nI0507 13:40:35.933508 1326 log.go:172] (0xc000650320) (3) Data frame handling\nI0507 13:40:35.933730 1326 log.go:172] (0xc0001166e0) Data frame received for 5\nI0507 13:40:35.933746 1326 log.go:172] (0xc000918000) (5) Data frame handling\nI0507 13:40:35.936140 1326 log.go:172] (0xc0001166e0) Data frame received for 1\nI0507 13:40:35.936203 1326 log.go:172] (0xc0009be6e0) (1) Data frame handling\nI0507 13:40:35.936222 1326 log.go:172] (0xc0009be6e0) (1) Data frame sent\nI0507 13:40:35.936233 1326 log.go:172] (0xc0001166e0) (0xc0009be6e0) Stream removed, broadcasting: 1\nI0507 13:40:35.936256 1326 log.go:172] (0xc0001166e0) Go away received\nI0507 13:40:35.936686 1326 log.go:172] (0xc0001166e0) (0xc0009be6e0) Stream removed, broadcasting: 1\nI0507 13:40:35.936726 1326 log.go:172] (0xc0001166e0) (0xc000650320) Stream removed, broadcasting: 3\nI0507 13:40:35.936736 1326 log.go:172] (0xc0001166e0) (0xc000918000) Stream removed, broadcasting: 5\n" May 7 13:40:35.942: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 7 13:40:35.942: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 7 13:40:35.942: INFO: Waiting for statefulset status.replicas updated to 0 May 7 13:40:35.948: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 7 13:40:45.956: INFO: Waiting for pod ss-0 to enter 
Running - Ready=false, currently Running - Ready=false May 7 13:40:45.956: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 7 13:40:45.956: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 7 13:40:45.972: INFO: POD NODE PHASE GRACE CONDITIONS May 7 13:40:45.972: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 +0000 UTC }] May 7 13:40:45.972: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC }] May 7 13:40:45.972: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC }] May 7 13:40:45.973: INFO: May 7 13:40:45.973: INFO: StatefulSet ss has not reached scale 0, at 3 May 7 13:40:46.977: INFO: POD NODE PHASE GRACE CONDITIONS May 7 13:40:46.977: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 +0000 UTC }] May 7 13:40:46.977: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC }] May 7 13:40:46.977: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC }] May 7 13:40:46.977: INFO: May 7 13:40:46.977: INFO: StatefulSet ss has not reached scale 0, at 3 May 7 
13:40:47.984: INFO: POD NODE PHASE GRACE CONDITIONS May 7 13:40:47.984: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 +0000 UTC }] May 7 13:40:47.984: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC }] May 7 13:40:47.984: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC }] May 7 13:40:47.984: INFO: May 7 13:40:47.984: INFO: StatefulSet ss has not reached scale 0, at 3 May 7 13:40:48.989: INFO: POD NODE PHASE GRACE CONDITIONS May 7 13:40:48.989: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 +0000 UTC }] May 7 13:40:48.989: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC }] May 7 13:40:48.989: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC }] May 7 13:40:48.989: INFO: May 7 13:40:48.989: INFO: StatefulSet ss has not reached scale 0, at 3 May 7 13:40:49.995: INFO: POD NODE PHASE GRACE CONDITIONS May 7 13:40:49.995: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:35 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 +0000 UTC }] May 7 13:40:49.995: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC }] May 7 13:40:49.995: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC }] May 7 13:40:49.995: INFO: May 7 13:40:49.995: INFO: StatefulSet ss has not reached scale 0, at 3 May 7 13:40:51.001: INFO: POD NODE PHASE GRACE CONDITIONS May 7 13:40:51.001: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 +0000 UTC }] May 7 13:40:51.001: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC }] May 7 13:40:51.001: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC }] May 7 13:40:51.001: INFO: May 7 13:40:51.001: INFO: StatefulSet ss has not reached scale 0, at 3 May 7 13:40:52.006: INFO: POD NODE PHASE GRACE CONDITIONS May 7 13:40:52.006: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:39:54 
+0000 UTC }] May 7 13:40:52.006: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 13:40:14 +0000 UTC }] May 7 13:40:52.006: INFO: May 7 13:40:52.006: INFO: StatefulSet ss has not reached scale 0, at 2 May 7 13:40:53.011: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.956258809s May 7 13:40:54.015: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.95201535s May 7 13:40:55.019: INFO: Verifying statefulset ss doesn't scale past 0 for another 947.718321ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-4983 May 7 13:40:56.023: INFO: Scaling statefulset ss to 0 May 7 13:40:56.033: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 7 13:40:56.035: INFO: Deleting all statefulsets in ns statefulset-4983 May 7 13:40:56.038: INFO: Scaling statefulset ss to 0 May 7 13:40:56.047: INFO: Waiting for statefulset status.replicas updated to 0 May 7 13:40:56.049: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:40:56.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4983" for this suite.
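For reference, the burst-scaling sequence exercised above can be reproduced by hand with plain kubectl. A minimal sketch, assuming (as this test does) a StatefulSet named ss in namespace statefulset-4983 whose nginx readiness probe serves /usr/share/nginx/html/index.html and whose pod management policy permits parallel (burst) scaling:

# Make ss-0 unready by moving the file its readiness probe depends on
kubectl exec --namespace=statefulset-4983 ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'

# Burst scaling: the scale-up is not blocked by the unready pod
kubectl scale statefulset ss --namespace=statefulset-4983 --replicas=3

# Restore readiness on all replicas
for p in ss-0 ss-1 ss-2; do
  kubectl exec --namespace=statefulset-4983 "$p" -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
done

# Scale-down likewise proceeds while pods are unready
kubectl scale statefulset ss --namespace=statefulset-4983 --replicas=0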
May 7 13:41:02.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:41:02.215: INFO: namespace statefulset-4983 deletion completed in 6.104546792s • [SLOW TEST:68.225 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:41:02.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info May 7 13:41:02.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 7 13:41:02.419: INFO: stderr: "" May 7 13:41:02.419: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:41:02.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6332" for this suite. 
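The cluster-info check above only asserts that the master and KubeDNS entries appear; the \x1b escape sequences in the captured stdout are kubectl's own ANSI color codes. A minimal manual equivalent (the sed strips the color codes, assuming GNU sed; the grep targets come from the output captured above):

kubectl --kubeconfig=/root/.kube/config cluster-info | sed 's/\x1b\[[0-9;]*m//g' | grep -E 'Kubernetes master|KubeDNS'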
May 7 13:41:08.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:41:08.507: INFO: namespace kubectl-6332 deletion completed in 6.083733089s • [SLOW TEST:6.291 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:41:08.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container May 7 13:41:13.124: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9151 pod-service-account-ac8b80af-feeb-4005-8458-33ee65561395 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 7 13:41:13.342: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9151 pod-service-account-ac8b80af-feeb-4005-8458-33ee65561395 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 7 13:41:13.542: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9151 pod-service-account-ac8b80af-feeb-4005-8458-33ee65561395 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:41:13.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9151" for this suite. 
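The three exec calls above read the standard projected service-account files; every container with a mounted service account exposes the same three paths. A compact spot-check of the same mount, using the pod name generated in this run:

for f in token ca.crt namespace; do
  kubectl exec --namespace=svcaccounts-9151 pod-service-account-ac8b80af-feeb-4005-8458-33ee65561395 -c test -- \
    cat /var/run/secrets/kubernetes.io/serviceaccount/$f
done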
May 7 13:41:19.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:41:19.846: INFO: namespace svcaccounts-9151 deletion completed in 6.080676288s • [SLOW TEST:11.339 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:41:19.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 7 13:41:19.920: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 7 13:41:19.935: INFO: Waiting for terminating namespaces to be deleted... May 7 13:41:19.937: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 7 13:41:19.943: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 7 13:41:19.943: INFO: Container kube-proxy ready: true, restart count 0 May 7 13:41:19.943: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 7 13:41:19.943: INFO: Container kindnet-cni ready: true, restart count 0 May 7 13:41:19.943: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 7 13:41:19.950: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 7 13:41:19.950: INFO: Container kube-proxy ready: true, restart count 0 May 7 13:41:19.950: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 7 13:41:19.950: INFO: Container kindnet-cni ready: true, restart count 0 May 7 13:41:19.950: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 7 13:41:19.950: INFO: Container coredns ready: true, restart count 0 May 7 13:41:19.950: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 7 13:41:19.950: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160cc2bbb3f0cbd6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
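The FailedScheduling event above is produced by a pod whose nodeSelector matches no node label. A minimal sketch of such a pod; the label key/value pair here is hypothetical, since any selector that matches no node triggers the same event:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    example/no-such-label: "true"   # hypothetical; matches no node
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

# Expect a FailedScheduling event like the one logged above
kubectl describe pod restricted-pod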
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:41:20.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1844" for this suite. May 7 13:41:27.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:41:27.091: INFO: namespace sched-pred-1844 deletion completed in 6.093851377s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.245 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:41:27.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 7 13:41:31.738: INFO: Successfully updated pod "annotationupdatecf1d93a7-5169-4794-8d5c-7a53f2478cb7" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:41:35.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3543" for this suite. 
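The downward API test above mounts pod annotations as a volume file and expects the kubelet to rewrite the file after the annotations change. A minimal sketch of the same mechanism; the pod name and annotation key are hypothetical, while the busybox image matches the one used elsewhere in this run:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alice
spec:
  containers:
  - name: client
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF

# Change an annotation; the mounted file is refreshed shortly afterwards
kubectl annotate pod annotationupdate-demo builder=bob --overwrite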
May 7 13:41:57.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:41:57.884: INFO: namespace downward-api-3543 deletion completed in 22.088412188s • [SLOW TEST:30.793 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:41:57.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 7 13:41:57.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6883' May 7 13:41:58.221: INFO: stderr: "" May 7 13:41:58.221: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 7 13:41:58.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6883' May 7 13:41:58.364: INFO: stderr: "" May 7 13:41:58.364: INFO: stdout: "update-demo-nautilus-kbhml update-demo-nautilus-l7qmt " May 7 13:41:58.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbhml -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6883' May 7 13:41:58.477: INFO: stderr: "" May 7 13:41:58.477: INFO: stdout: "" May 7 13:41:58.477: INFO: update-demo-nautilus-kbhml is created but not running May 7 13:42:03.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6883' May 7 13:42:03.573: INFO: stderr: "" May 7 13:42:03.573: INFO: stdout: "update-demo-nautilus-kbhml update-demo-nautilus-l7qmt " May 7 13:42:03.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbhml -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6883' May 7 13:42:03.678: INFO: stderr: "" May 7 13:42:03.678: INFO: stdout: "true" May 7 13:42:03.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbhml -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6883' May 7 13:42:03.768: INFO: stderr: "" May 7 13:42:03.768: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 13:42:03.768: INFO: validating pod update-demo-nautilus-kbhml May 7 13:42:03.771: INFO: got data: { "image": "nautilus.jpg" } May 7 13:42:03.771: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 7 13:42:03.771: INFO: update-demo-nautilus-kbhml is verified up and running May 7 13:42:03.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l7qmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6883' May 7 13:42:03.872: INFO: stderr: "" May 7 13:42:03.872: INFO: stdout: "true" May 7 13:42:03.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l7qmt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6883' May 7 13:42:03.968: INFO: stderr: "" May 7 13:42:03.968: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 13:42:03.968: INFO: validating pod update-demo-nautilus-l7qmt May 7 13:42:03.972: INFO: got data: { "image": "nautilus.jpg" } May 7 13:42:03.973: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 7 13:42:03.973: INFO: update-demo-nautilus-l7qmt is verified up and running STEP: using delete to clean up resources May 7 13:42:03.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6883' May 7 13:42:04.085: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 7 13:42:04.085: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 7 13:42:04.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6883' May 7 13:42:04.193: INFO: stderr: "No resources found.\n" May 7 13:42:04.193: INFO: stdout: "" May 7 13:42:04.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6883 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 7 13:42:04.338: INFO: stderr: "" May 7 13:42:04.338: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:42:04.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6883" for this suite. 
May 7 13:42:26.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:42:26.439: INFO: namespace kubectl-6883 deletion completed in 22.09728994s • [SLOW TEST:28.554 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:42:26.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 7 13:42:26.596: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7964,SelfLink:/api/v1/namespaces/watch-7964/configmaps/e2e-watch-test-resource-version,UID:cbd2a140-858d-474f-8eaa-89578a5220cf,ResourceVersion:9535680,Generation:0,CreationTimestamp:2020-05-07 13:42:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 7 13:42:26.596: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7964,SelfLink:/api/v1/namespaces/watch-7964/configmaps/e2e-watch-test-resource-version,UID:cbd2a140-858d-474f-8eaa-89578a5220cf,ResourceVersion:9535681,Generation:0,CreationTimestamp:2020-05-07 13:42:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:42:26.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7964" for this suite. 
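Starting the watch at the resourceVersion returned by the first update is what limits the notifications to the later MODIFIED and DELETED events shown above. A rough equivalent against the raw API; the resourceVersion value is illustrative, chosen just before the two events captured in this run:

# Watch configmaps in the namespace from a specific resourceVersion onwards
kubectl get --raw "/api/v1/namespaces/watch-7964/configmaps?watch=true&resourceVersion=9535679"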
May 7 13:42:32.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:42:32.726: INFO: namespace watch-7964 deletion completed in 6.118728001s • [SLOW TEST:6.287 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:42:32.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2343 STEP: creating a selector STEP: Creating the service pods in kubernetes May 7 13:42:32.810: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 7 13:42:54.933: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.127:8080/dial?request=hostName&protocol=udp&host=10.244.1.126&port=8081&tries=1'] Namespace:pod-network-test-2343 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 13:42:54.933: INFO: >>> kubeConfig: /root/.kube/config I0507 13:42:54.968523 6 log.go:172] (0xc0018dc580) (0xc002e0d9a0) Create stream I0507 13:42:54.968559 6 log.go:172] (0xc0018dc580) (0xc002e0d9a0) Stream added, broadcasting: 1 I0507 13:42:54.970435 6 log.go:172] (0xc0018dc580) Reply frame received for 1 I0507 13:42:54.970479 6 log.go:172] (0xc0018dc580) (0xc002da7ea0) Create stream I0507 13:42:54.970492 6 log.go:172] (0xc0018dc580) (0xc002da7ea0) Stream added, broadcasting: 3 I0507 13:42:54.971339 6 log.go:172] (0xc0018dc580) Reply frame received for 3 I0507 13:42:54.971365 6 log.go:172] (0xc0018dc580) (0xc002e0da40) Create stream I0507 13:42:54.971374 6 log.go:172] (0xc0018dc580) (0xc002e0da40) Stream added, broadcasting: 5 I0507 13:42:54.972210 6 log.go:172] (0xc0018dc580) Reply frame received for 5 I0507 13:42:55.032933 6 log.go:172] (0xc0018dc580) Data frame received for 3 I0507 13:42:55.033049 6 log.go:172] (0xc002da7ea0) (3) Data frame handling I0507 13:42:55.033108 6 log.go:172] (0xc002da7ea0) (3) Data frame sent I0507 13:42:55.033348 6 log.go:172] (0xc0018dc580) Data frame received for 3 I0507 13:42:55.033366 6 log.go:172] (0xc002da7ea0) (3) Data frame handling I0507 13:42:55.033577 6 log.go:172] (0xc0018dc580) Data frame received for 5 I0507 13:42:55.033593 6 log.go:172] (0xc002e0da40) (5) Data frame handling I0507 13:42:55.034787 6 log.go:172] (0xc0018dc580) Data frame received for 1 I0507 13:42:55.034804 6 log.go:172] (0xc002e0d9a0) (1) Data frame handling I0507 13:42:55.034821 6 
log.go:172] (0xc002e0d9a0) (1) Data frame sent I0507 13:42:55.034836 6 log.go:172] (0xc0018dc580) (0xc002e0d9a0) Stream removed, broadcasting: 1 I0507 13:42:55.034916 6 log.go:172] (0xc0018dc580) Go away received I0507 13:42:55.034964 6 log.go:172] (0xc0018dc580) (0xc002e0d9a0) Stream removed, broadcasting: 1 I0507 13:42:55.034983 6 log.go:172] (0xc0018dc580) (0xc002da7ea0) Stream removed, broadcasting: 3 I0507 13:42:55.034996 6 log.go:172] (0xc0018dc580) (0xc002e0da40) Stream removed, broadcasting: 5 May 7 13:42:55.035: INFO: Waiting for endpoints: map[] May 7 13:42:55.038: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.127:8080/dial?request=hostName&protocol=udp&host=10.244.2.189&port=8081&tries=1'] Namespace:pod-network-test-2343 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 13:42:55.038: INFO: >>> kubeConfig: /root/.kube/config I0507 13:42:55.075031 6 log.go:172] (0xc001170a50) (0xc001240780) Create stream I0507 13:42:55.075071 6 log.go:172] (0xc001170a50) (0xc001240780) Stream added, broadcasting: 1 I0507 13:42:55.077901 6 log.go:172] (0xc001170a50) Reply frame received for 1 I0507 13:42:55.077955 6 log.go:172] (0xc001170a50) (0xc001240820) Create stream I0507 13:42:55.077969 6 log.go:172] (0xc001170a50) (0xc001240820) Stream added, broadcasting: 3 I0507 13:42:55.079038 6 log.go:172] (0xc001170a50) Reply frame received for 3 I0507 13:42:55.079083 6 log.go:172] (0xc001170a50) (0xc001240aa0) Create stream I0507 13:42:55.079098 6 log.go:172] (0xc001170a50) (0xc001240aa0) Stream added, broadcasting: 5 I0507 13:42:55.080183 6 log.go:172] (0xc001170a50) Reply frame received for 5 I0507 13:42:55.164324 6 log.go:172] (0xc001170a50) Data frame received for 3 I0507 13:42:55.164356 6 log.go:172] (0xc001240820) (3) Data frame handling I0507 13:42:55.164377 6 log.go:172] (0xc001240820) (3) Data frame sent I0507 13:42:55.165053 6 log.go:172] (0xc001170a50) Data frame received for 5 I0507 13:42:55.165090 6 log.go:172] (0xc001240aa0) (5) Data frame handling I0507 13:42:55.165258 6 log.go:172] (0xc001170a50) Data frame received for 3 I0507 13:42:55.165277 6 log.go:172] (0xc001240820) (3) Data frame handling I0507 13:42:55.166984 6 log.go:172] (0xc001170a50) Data frame received for 1 I0507 13:42:55.167001 6 log.go:172] (0xc001240780) (1) Data frame handling I0507 13:42:55.167014 6 log.go:172] (0xc001240780) (1) Data frame sent I0507 13:42:55.167066 6 log.go:172] (0xc001170a50) (0xc001240780) Stream removed, broadcasting: 1 I0507 13:42:55.167107 6 log.go:172] (0xc001170a50) Go away received I0507 13:42:55.167179 6 log.go:172] (0xc001170a50) (0xc001240780) Stream removed, broadcasting: 1 I0507 13:42:55.167190 6 log.go:172] (0xc001170a50) (0xc001240820) Stream removed, broadcasting: 3 I0507 13:42:55.167195 6 log.go:172] (0xc001170a50) (0xc001240aa0) Stream removed, broadcasting: 5 May 7 13:42:55.167: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:42:55.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2343" for this suite. 
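Each ExecWithOptions above asks the webserver in the host-test container to dial a target pod over UDP and report the hostname that answers; the trailing "Waiting for endpoints: map[]" means no targets remain unanswered. The same probe by hand, using the pod IPs recorded above:

kubectl exec --namespace=pod-network-test-2343 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.1.127:8080/dial?request=hostName&protocol=udp&host=10.244.2.189&port=8081&tries=1'"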
May 7 13:43:19.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:43:19.262: INFO: namespace pod-network-test-2343 deletion completed in 24.092204572s • [SLOW TEST:46.536 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:43:19.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 7 13:43:19.445: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:43:19.452: INFO: Number of nodes with available pods: 0 May 7 13:43:19.452: INFO: Node iruya-worker is running more than one daemon pod May 7 13:43:20.458: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:43:20.461: INFO: Number of nodes with available pods: 0 May 7 13:43:20.461: INFO: Node iruya-worker is running more than one daemon pod May 7 13:43:21.646: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:43:21.650: INFO: Number of nodes with available pods: 0 May 7 13:43:21.650: INFO: Node iruya-worker is running more than one daemon pod May 7 13:43:22.457: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:43:22.460: INFO: Number of nodes with available pods: 0 May 7 13:43:22.460: INFO: Node iruya-worker is running more than one daemon pod May 7 13:43:23.457: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:43:23.460: INFO: Number of nodes with available pods: 0 May 7 13:43:23.460: INFO: Node iruya-worker is running more than one daemon pod May 7 13:43:24.458: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 7 13:43:24.461: INFO: Number of nodes with available pods: 2 May 7 13:43:24.461: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 7 13:43:24.555: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:43:24.560: INFO: Number of nodes with available pods: 1 May 7 13:43:24.560: INFO: Node iruya-worker is running more than one daemon pod May 7 13:43:25.565: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:43:25.569: INFO: Number of nodes with available pods: 1 May 7 13:43:25.569: INFO: Node iruya-worker is running more than one daemon pod May 7 13:43:26.570: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:43:26.573: INFO: Number of nodes with available pods: 1 May 7 13:43:26.573: INFO: Node iruya-worker is running more than one daemon pod May 7 13:43:27.564: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:43:27.566: INFO: Number of nodes with available pods: 1 May 7 13:43:27.566: INFO: Node iruya-worker is running more than one daemon pod May 7 13:43:28.727: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:43:28.731: INFO: Number of nodes with available pods: 1 May 7 13:43:28.731: INFO: Node iruya-worker is running more than one daemon pod May 7 13:43:29.565: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 13:43:29.569: INFO: Number of nodes with available pods: 2 May 7 13:43:29.569: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
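The DaemonSet under test is deliberately minimal: one container per schedulable node, with the controller expected to replace any pod whose phase is forced to Failed. A rough client-go equivalent of creating such a DaemonSet, assuming the pre-context Create signature that matches the client-go vintage of this v1.15 run; name, namespace, labels, and image are illustrative.

package main

import (
    appsv1 "k8s.io/api/apps/v1"
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    labels := map[string]string{"daemonset-name": "daemon-set"}
    ds := &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            // The selector must match the pod template's labels.
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: v1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: v1.PodSpec{Containers: []v1.Container{{
                    Name:  "app",
                    Image: "docker.io/library/nginx:1.14-alpine",
                }}},
            },
        },
    }
    // Pre-1.17 client-go: Create takes the object directly, no context.
    if _, err := cs.AppsV1().DaemonSets("default").Create(ds); err != nil {
        panic(err)
    }
}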
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6374, will wait for the garbage collector to delete the pods May 7 13:43:29.634: INFO: Deleting DaemonSet.extensions daemon-set took: 7.251373ms May 7 13:43:29.934: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.282353ms May 7 13:43:42.237: INFO: Number of nodes with available pods: 0 May 7 13:43:42.237: INFO: Number of running nodes: 0, number of available pods: 0 May 7 13:43:42.240: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6374/daemonsets","resourceVersion":"9535959"},"items":null} May 7 13:43:42.242: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6374/pods","resourceVersion":"9535959"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:43:42.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6374" for this suite. May 7 13:43:48.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:43:48.356: INFO: namespace daemonsets-6374 deletion completed in 6.101957258s • [SLOW TEST:29.093 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:43:48.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 7 13:43:48.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-951' May 7 13:43:48.505: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 7 13:43:48.505: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 May 7 13:43:48.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-951' May 7 13:43:48.661: INFO: stderr: "" May 7 13:43:48.661: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:43:48.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-951" for this suite. May 7 13:44:10.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:44:10.806: INFO: namespace kubectl-951 deletion completed in 22.13182421s • [SLOW TEST:22.450 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:44:10.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 7 13:44:10.843: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:44:17.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8132" for this suite. 
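What this test builds is a pod whose only init container always exits non-zero under RestartPolicy: Never, so the init failure fails the whole pod and the app container never starts. A minimal client-go sketch of the same pod shape, assuming pre-context client-go signatures; names and namespace are illustrative.

package main

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
        Spec: v1.PodSpec{
            // With RestartNever, a failed init container fails the whole
            // pod instead of being retried, and app containers never start.
            RestartPolicy: v1.RestartPolicyNever,
            InitContainers: []v1.Container{{
                Name:    "init1",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"/bin/false"}, // always fails
            }},
            Containers: []v1.Container{{
                Name:    "run1",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"/bin/true"},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}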
May 7 13:44:23.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:44:23.897: INFO: namespace init-container-8132 deletion completed in 6.110948021s • [SLOW TEST:13.089 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:44:23.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 7 13:44:24.020: INFO: Waiting up to 5m0s for pod "pod-de1eaba9-45d9-4045-9d18-1379b7a5e58a" in namespace "emptydir-5245" to be "success or failure" May 7 13:44:24.039: INFO: Pod "pod-de1eaba9-45d9-4045-9d18-1379b7a5e58a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.603355ms May 7 13:44:26.110: INFO: Pod "pod-de1eaba9-45d9-4045-9d18-1379b7a5e58a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089348326s May 7 13:44:28.114: INFO: Pod "pod-de1eaba9-45d9-4045-9d18-1379b7a5e58a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093845464s STEP: Saw pod success May 7 13:44:28.114: INFO: Pod "pod-de1eaba9-45d9-4045-9d18-1379b7a5e58a" satisfied condition "success or failure" May 7 13:44:28.117: INFO: Trying to get logs from node iruya-worker2 pod pod-de1eaba9-45d9-4045-9d18-1379b7a5e58a container test-container: STEP: delete the pod May 7 13:44:28.171: INFO: Waiting for pod pod-de1eaba9-45d9-4045-9d18-1379b7a5e58a to disappear May 7 13:44:28.192: INFO: Pod pod-de1eaba9-45d9-4045-9d18-1379b7a5e58a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:44:28.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5245" for this suite. 
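The pod behind this check mounts a memory-backed (tmpfs) emptyDir, writes a file as root with mode 0644, and exits 0 if everything checks out, which is the "success or failure" condition polled above. A sketch of an equivalent pod with client-go's pre-context signatures; names and the verification command are illustrative.

package main

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-tmpfs-demo"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Volumes: []v1.Volume{{
                Name: "test-volume",
                VolumeSource: v1.VolumeSource{
                    // Medium "Memory" backs the volume with tmpfs.
                    EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
                },
            }},
            Containers: []v1.Container{{
                Name:  "test-container",
                Image: "docker.io/library/busybox:1.29",
                // Write as root with mode 0644, then verify; the exit
                // status is the pod's success-or-failure signal.
                Command: []string{"sh", "-c",
                    "echo ok > /mnt/volume/f && chmod 0644 /mnt/volume/f && ls -l /mnt/volume/f"},
                VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/volume"}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}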
May 7 13:44:34.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:44:34.297: INFO: namespace emptydir-5245 deletion completed in 6.100552507s • [SLOW TEST:10.400 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:44:34.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token May 7 13:44:34.948: INFO: created pod pod-service-account-defaultsa May 7 13:44:34.948: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 7 13:44:34.957: INFO: created pod pod-service-account-mountsa May 7 13:44:34.957: INFO: pod pod-service-account-mountsa service account token volume mount: true May 7 13:44:34.963: INFO: created pod pod-service-account-nomountsa May 7 13:44:34.963: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 7 13:44:35.005: INFO: created pod pod-service-account-defaultsa-mountspec May 7 13:44:35.005: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 7 13:44:35.056: INFO: created pod pod-service-account-mountsa-mountspec May 7 13:44:35.056: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 7 13:44:35.071: INFO: created pod pod-service-account-nomountsa-mountspec May 7 13:44:35.071: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 7 13:44:35.152: INFO: created pod pod-service-account-defaultsa-nomountspec May 7 13:44:35.152: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 7 13:44:35.189: INFO: created pod pod-service-account-mountsa-nomountspec May 7 13:44:35.189: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 7 13:44:35.226: INFO: created pod pod-service-account-nomountsa-nomountspec May 7 13:44:35.226: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:44:35.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8450" for this suite. 
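The nine pods above enumerate the combinations that decide token automounting: the ServiceAccount's automountServiceAccountToken, the pod's own spec.automountServiceAccountToken, and the default when both are unset; the pod-level field wins when both are set. A minimal client-go sketch of the pod-level opt-out, using pre-context signatures; names and namespace are illustrative.

package main

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    optOut := false
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomount-demo"},
        Spec: v1.PodSpec{
            // Pod-level setting; overrides the ServiceAccount's
            // automountServiceAccountToken when both are set.
            AutomountServiceAccountToken: &optOut,
            Containers: []v1.Container{{
                Name:    "token-test",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sleep", "3600"},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}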
May 7 13:44:49.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:44:49.467: INFO: namespace svcaccounts-8450 deletion completed in 14.17353375s • [SLOW TEST:15.170 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:44:49.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1777.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1777.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 13:44:55.596: INFO: DNS probes using dns-1777/dns-test-47280f8f-7c6c-4bac-bb01-bc39123d536c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:44:55.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1777" for this suite. 
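The wheezy and jessie probe loops above reduce to one operation: resolve kubernetes.default.svc.cluster.local over UDP and TCP and record success, with the PodARecord variants doing the same for the pod's own generated A record. From any pod in the cluster, the core lookup is a single call; this sketch only succeeds where the cluster DNS server is in resolv.conf, i.e. inside a pod.

package main

import (
    "fmt"
    "net"
)

func main() {
    // Inside a cluster pod this resolves via the cluster DNS service;
    // outside the cluster the name does not exist.
    addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
    if err != nil {
        panic(err)
    }
    fmt.Println(addrs) // typically the kubernetes Service's ClusterIP
}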
May 7 13:45:01.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:45:01.785: INFO: namespace dns-1777 deletion completed in 6.150604289s • [SLOW TEST:12.318 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:45:01.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 7 13:45:05.911: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:45:05.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6939" for this suite. 
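The container in this test writes nothing to /dev/termination-log and exits non-zero after printing DONE, so with TerminationMessagePolicy FallbackToLogsOnError the kubelet lifts the termination message from the log tail instead, which is what the "Expected: &{DONE}" line confirms. A client-go sketch of such a container, with pre-context signatures; names and the command are illustrative.

package main

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Containers: []v1.Container{{
                Name:  "termination-message-container",
                Image: "docker.io/library/busybox:1.29",
                // Nothing is written to /dev/termination-log, so on the
                // non-zero exit the kubelet falls back to the log tail
                // ("DONE") as the termination message.
                Command:                  []string{"sh", "-c", "echo -n DONE; exit 1"},
                TerminationMessagePolicy: v1.TerminationMessageFallbackToLogsOnError,
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}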
May 7 13:45:12.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:45:12.088: INFO: namespace container-runtime-6939 deletion completed in 6.139794701s • [SLOW TEST:10.302 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:45:12.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 7 13:45:20.243: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 7 13:45:20.253: INFO: Pod pod-with-prestop-http-hook still exists May 7 13:45:22.253: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 7 13:45:22.257: INFO: Pod pod-with-prestop-http-hook still exists May 7 13:45:24.253: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 7 13:45:24.258: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:45:24.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5448" for this suite. 
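The pod deleted here carries a preStop httpGet hook, so the kubelet must issue the HTTP request (received by the handler pod created in BeforeEach) before tearing the container down; the gap between the delete and the pod disappearing above is the hook plus the termination grace period. A sketch of the hook wiring: v1.Handler is the 1.15-era type name (later renamed LifecycleHandler), and the path, port, and names are illustrative.

package main

import (
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook-demo"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:  "pod-with-prestop-http-hook",
                Image: "docker.io/library/nginx:1.14-alpine",
                Lifecycle: &v1.Lifecycle{
                    // On deletion the kubelet GETs this endpoint before
                    // sending SIGTERM. With Host unset it targets the
                    // pod's own IP; the e2e test points it at the
                    // separate handler pod instead.
                    PreStop: &v1.Handler{
                        HTTPGet: &v1.HTTPGetAction{
                            Path: "/echo?msg=prestop",
                            Port: intstr.FromInt(8080),
                        },
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}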
May 7 13:45:46.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:45:46.391: INFO: namespace container-lifecycle-hook-5448 deletion completed in 22.122297918s • [SLOW TEST:34.302 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:45:46.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-4909 I0507 13:45:46.463771 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4909, replica count: 1 I0507 13:45:47.514295 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 13:45:48.514617 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 13:45:49.517919 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 13:45:50.518163 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 7 13:45:50.657: INFO: Created: latency-svc-qkkhn May 7 13:45:50.672: INFO: Got endpoints: latency-svc-qkkhn [54.453458ms] May 7 13:45:50.735: INFO: Created: latency-svc-nsgfp May 7 13:45:50.751: INFO: Got endpoints: latency-svc-nsgfp [78.143476ms] May 7 13:45:50.771: INFO: Created: latency-svc-p62cn May 7 13:45:50.781: INFO: Got endpoints: latency-svc-p62cn [108.351594ms] May 7 13:45:50.807: INFO: Created: latency-svc-s2fqw May 7 13:45:50.853: INFO: Got endpoints: latency-svc-s2fqw [180.055206ms] May 7 13:45:50.867: INFO: Created: latency-svc-5dwgr May 7 13:45:50.884: INFO: Got endpoints: latency-svc-5dwgr [211.664002ms] May 7 13:45:50.910: INFO: Created: latency-svc-l7bcj May 7 13:45:50.926: INFO: Got endpoints: latency-svc-l7bcj [253.907461ms] May 7 13:45:50.951: INFO: Created: latency-svc-gdbxb May 7 13:45:51.003: INFO: Got endpoints: latency-svc-gdbxb [330.53307ms] May 7 13:45:51.041: INFO: Created: latency-svc-kjg7l May 7 13:45:51.059: INFO: Got endpoints: latency-svc-kjg7l [386.208457ms] May 7 13:45:51.083: INFO: Created: latency-svc-bsbtq May 7 13:45:51.095: INFO: Got endpoints: latency-svc-bsbtq [422.413941ms] May 7 
13:45:51.143: INFO: Created: latency-svc-q6rss May 7 13:45:51.162: INFO: Got endpoints: latency-svc-q6rss [489.452656ms] May 7 13:45:51.197: INFO: Created: latency-svc-dmbdr May 7 13:45:51.216: INFO: Got endpoints: latency-svc-dmbdr [543.271332ms] May 7 13:45:51.284: INFO: Created: latency-svc-qmgtd May 7 13:45:51.354: INFO: Got endpoints: latency-svc-qmgtd [681.212244ms] May 7 13:45:51.434: INFO: Created: latency-svc-ss9fj May 7 13:45:51.448: INFO: Got endpoints: latency-svc-ss9fj [775.16465ms] May 7 13:45:51.502: INFO: Created: latency-svc-b6dqz May 7 13:45:51.589: INFO: Got endpoints: latency-svc-b6dqz [916.651391ms] May 7 13:45:51.605: INFO: Created: latency-svc-5p48l May 7 13:45:51.616: INFO: Got endpoints: latency-svc-5p48l [942.656324ms] May 7 13:45:51.641: INFO: Created: latency-svc-spwth May 7 13:45:51.658: INFO: Got endpoints: latency-svc-spwth [985.313161ms] May 7 13:45:51.734: INFO: Created: latency-svc-tpg2w May 7 13:45:51.766: INFO: Got endpoints: latency-svc-tpg2w [1.015584737s] May 7 13:45:51.767: INFO: Created: latency-svc-2dvrm May 7 13:45:51.802: INFO: Got endpoints: latency-svc-2dvrm [1.021158714s] May 7 13:45:51.865: INFO: Created: latency-svc-kn8qj May 7 13:45:51.875: INFO: Got endpoints: latency-svc-kn8qj [1.022599545s] May 7 13:45:51.905: INFO: Created: latency-svc-xdhn7 May 7 13:45:51.930: INFO: Got endpoints: latency-svc-xdhn7 [1.046268695s] May 7 13:45:52.027: INFO: Created: latency-svc-2vvcz May 7 13:45:52.030: INFO: Got endpoints: latency-svc-2vvcz [1.103791077s] May 7 13:45:52.066: INFO: Created: latency-svc-4n9s7 May 7 13:45:52.081: INFO: Got endpoints: latency-svc-4n9s7 [1.077440875s] May 7 13:45:52.103: INFO: Created: latency-svc-wdplx May 7 13:45:52.117: INFO: Got endpoints: latency-svc-wdplx [1.058264282s] May 7 13:45:52.164: INFO: Created: latency-svc-zvk6f May 7 13:45:52.171: INFO: Got endpoints: latency-svc-zvk6f [1.076128751s] May 7 13:45:52.206: INFO: Created: latency-svc-qdq4p May 7 13:45:52.232: INFO: Got endpoints: latency-svc-qdq4p [1.069720803s] May 7 13:45:52.314: INFO: Created: latency-svc-q7c5t May 7 13:45:52.318: INFO: Got endpoints: latency-svc-q7c5t [1.101954687s] May 7 13:45:52.361: INFO: Created: latency-svc-zct9l May 7 13:45:52.377: INFO: Got endpoints: latency-svc-zct9l [1.022888054s] May 7 13:45:52.452: INFO: Created: latency-svc-dx5f5 May 7 13:45:52.455: INFO: Got endpoints: latency-svc-dx5f5 [1.00741597s] May 7 13:45:52.516: INFO: Created: latency-svc-qhnb5 May 7 13:45:52.534: INFO: Got endpoints: latency-svc-qhnb5 [944.489054ms] May 7 13:45:52.602: INFO: Created: latency-svc-9kntr May 7 13:45:52.618: INFO: Got endpoints: latency-svc-9kntr [1.002118666s] May 7 13:45:52.654: INFO: Created: latency-svc-n249h May 7 13:45:52.684: INFO: Got endpoints: latency-svc-n249h [1.026s] May 7 13:45:52.745: INFO: Created: latency-svc-lp4kj May 7 13:45:52.780: INFO: Got endpoints: latency-svc-lp4kj [1.013375398s] May 7 13:45:52.816: INFO: Created: latency-svc-25dbh May 7 13:45:52.835: INFO: Got endpoints: latency-svc-25dbh [1.032643281s] May 7 13:45:52.894: INFO: Created: latency-svc-n8nlb May 7 13:45:52.908: INFO: Got endpoints: latency-svc-n8nlb [1.032208372s] May 7 13:45:52.936: INFO: Created: latency-svc-rnjw7 May 7 13:45:52.950: INFO: Got endpoints: latency-svc-rnjw7 [1.019378234s] May 7 13:45:53.021: INFO: Created: latency-svc-r9564 May 7 13:45:53.035: INFO: Got endpoints: latency-svc-r9564 [1.00419792s] May 7 13:45:53.068: INFO: Created: latency-svc-cvg24 May 7 13:45:53.083: INFO: Got endpoints: latency-svc-cvg24 [1.001950222s] May 7 
13:45:53.110: INFO: Created: latency-svc-6l242 May 7 13:45:53.164: INFO: Got endpoints: latency-svc-6l242 [1.046983165s] May 7 13:45:53.182: INFO: Created: latency-svc-4s9xm May 7 13:45:53.209: INFO: Got endpoints: latency-svc-4s9xm [1.038166119s] May 7 13:45:53.236: INFO: Created: latency-svc-7sjjk May 7 13:45:53.246: INFO: Got endpoints: latency-svc-7sjjk [1.01456845s] May 7 13:45:53.290: INFO: Created: latency-svc-m26zb May 7 13:45:53.300: INFO: Got endpoints: latency-svc-m26zb [981.846407ms] May 7 13:45:53.338: INFO: Created: latency-svc-8zfrk May 7 13:45:53.361: INFO: Got endpoints: latency-svc-8zfrk [983.707832ms] May 7 13:45:53.415: INFO: Created: latency-svc-f8vg8 May 7 13:45:53.421: INFO: Got endpoints: latency-svc-f8vg8 [965.598293ms] May 7 13:45:53.446: INFO: Created: latency-svc-pg759 May 7 13:45:53.463: INFO: Got endpoints: latency-svc-pg759 [929.369666ms] May 7 13:45:53.512: INFO: Created: latency-svc-n9k8g May 7 13:45:53.577: INFO: Got endpoints: latency-svc-n9k8g [959.595559ms] May 7 13:45:53.607: INFO: Created: latency-svc-92slc May 7 13:45:53.627: INFO: Got endpoints: latency-svc-92slc [942.3261ms] May 7 13:45:53.650: INFO: Created: latency-svc-8j67z May 7 13:45:53.669: INFO: Got endpoints: latency-svc-8j67z [888.885122ms] May 7 13:45:53.722: INFO: Created: latency-svc-25qmh May 7 13:45:53.735: INFO: Got endpoints: latency-svc-25qmh [899.82969ms] May 7 13:45:53.771: INFO: Created: latency-svc-9dql7 May 7 13:45:53.783: INFO: Got endpoints: latency-svc-9dql7 [875.522875ms] May 7 13:45:53.813: INFO: Created: latency-svc-tgknc May 7 13:45:53.859: INFO: Got endpoints: latency-svc-tgknc [908.636919ms] May 7 13:45:53.949: INFO: Created: latency-svc-7gxq5 May 7 13:45:54.027: INFO: Got endpoints: latency-svc-7gxq5 [992.940771ms] May 7 13:45:54.040: INFO: Created: latency-svc-fm9dh May 7 13:45:54.054: INFO: Got endpoints: latency-svc-fm9dh [971.409402ms] May 7 13:45:54.081: INFO: Created: latency-svc-mp2jn May 7 13:45:54.097: INFO: Got endpoints: latency-svc-mp2jn [932.471492ms] May 7 13:45:54.219: INFO: Created: latency-svc-9mwt7 May 7 13:45:54.222: INFO: Got endpoints: latency-svc-9mwt7 [1.012315704s] May 7 13:45:54.268: INFO: Created: latency-svc-wgsjr May 7 13:45:54.278: INFO: Got endpoints: latency-svc-wgsjr [1.031530612s] May 7 13:45:54.304: INFO: Created: latency-svc-v8l4n May 7 13:45:54.317: INFO: Got endpoints: latency-svc-v8l4n [1.016954364s] May 7 13:45:54.386: INFO: Created: latency-svc-sgdfk May 7 13:45:54.389: INFO: Got endpoints: latency-svc-sgdfk [1.028158263s] May 7 13:45:54.441: INFO: Created: latency-svc-fwdbc May 7 13:45:54.459: INFO: Got endpoints: latency-svc-fwdbc [1.038073179s] May 7 13:45:54.484: INFO: Created: latency-svc-lmvvn May 7 13:45:54.529: INFO: Got endpoints: latency-svc-lmvvn [1.065829412s] May 7 13:45:54.543: INFO: Created: latency-svc-bgk4t May 7 13:45:54.556: INFO: Got endpoints: latency-svc-bgk4t [978.654283ms] May 7 13:45:54.580: INFO: Created: latency-svc-6f4w8 May 7 13:45:54.592: INFO: Got endpoints: latency-svc-6f4w8 [965.009541ms] May 7 13:45:54.615: INFO: Created: latency-svc-kgf88 May 7 13:45:54.629: INFO: Got endpoints: latency-svc-kgf88 [960.087284ms] May 7 13:45:54.673: INFO: Created: latency-svc-dntk9 May 7 13:45:54.677: INFO: Got endpoints: latency-svc-dntk9 [941.987572ms] May 7 13:45:54.736: INFO: Created: latency-svc-hzghd May 7 13:45:54.755: INFO: Got endpoints: latency-svc-hzghd [971.675986ms] May 7 13:45:54.823: INFO: Created: latency-svc-pb2jk May 7 13:45:54.827: INFO: Got endpoints: latency-svc-pb2jk [968.768584ms] May 7 
13:45:54.855: INFO: Created: latency-svc-cpmhw May 7 13:45:54.885: INFO: Got endpoints: latency-svc-cpmhw [856.986298ms] May 7 13:45:54.922: INFO: Created: latency-svc-s7vqz May 7 13:45:54.960: INFO: Got endpoints: latency-svc-s7vqz [905.882289ms] May 7 13:45:54.975: INFO: Created: latency-svc-kqjmk May 7 13:45:54.991: INFO: Got endpoints: latency-svc-kqjmk [894.570867ms] May 7 13:45:55.035: INFO: Created: latency-svc-6v92z May 7 13:45:55.052: INFO: Got endpoints: latency-svc-6v92z [829.675601ms] May 7 13:45:55.104: INFO: Created: latency-svc-89ggn May 7 13:45:55.107: INFO: Got endpoints: latency-svc-89ggn [828.710016ms] May 7 13:45:55.155: INFO: Created: latency-svc-96qkx May 7 13:45:55.172: INFO: Got endpoints: latency-svc-96qkx [855.304848ms] May 7 13:45:55.203: INFO: Created: latency-svc-xqldw May 7 13:45:55.272: INFO: Got endpoints: latency-svc-xqldw [883.251506ms] May 7 13:45:55.293: INFO: Created: latency-svc-2zj28 May 7 13:45:55.311: INFO: Got endpoints: latency-svc-2zj28 [851.787638ms] May 7 13:45:55.335: INFO: Created: latency-svc-wvkkn May 7 13:45:55.347: INFO: Got endpoints: latency-svc-wvkkn [817.747095ms] May 7 13:45:55.410: INFO: Created: latency-svc-4dw2p May 7 13:45:55.412: INFO: Got endpoints: latency-svc-4dw2p [856.257867ms] May 7 13:45:55.460: INFO: Created: latency-svc-k65dk May 7 13:45:55.474: INFO: Got endpoints: latency-svc-k65dk [882.460267ms] May 7 13:45:55.542: INFO: Created: latency-svc-lzmfm May 7 13:45:55.552: INFO: Got endpoints: latency-svc-lzmfm [923.513234ms] May 7 13:45:55.617: INFO: Created: latency-svc-gf98s May 7 13:45:55.673: INFO: Got endpoints: latency-svc-gf98s [996.488351ms] May 7 13:45:55.712: INFO: Created: latency-svc-bgwb4 May 7 13:45:55.727: INFO: Got endpoints: latency-svc-bgwb4 [972.375779ms] May 7 13:45:55.761: INFO: Created: latency-svc-5fk9d May 7 13:45:55.810: INFO: Got endpoints: latency-svc-5fk9d [982.98703ms] May 7 13:45:55.833: INFO: Created: latency-svc-v6jc6 May 7 13:45:55.848: INFO: Got endpoints: latency-svc-v6jc6 [963.42084ms] May 7 13:45:55.881: INFO: Created: latency-svc-gpjfm May 7 13:45:55.943: INFO: Got endpoints: latency-svc-gpjfm [982.257868ms] May 7 13:45:55.964: INFO: Created: latency-svc-2jdws May 7 13:45:55.987: INFO: Got endpoints: latency-svc-2jdws [995.767681ms] May 7 13:45:56.031: INFO: Created: latency-svc-xhhwg May 7 13:45:56.087: INFO: Got endpoints: latency-svc-xhhwg [1.034947439s] May 7 13:45:56.109: INFO: Created: latency-svc-4kmzb May 7 13:45:56.174: INFO: Got endpoints: latency-svc-4kmzb [1.067478696s] May 7 13:45:56.242: INFO: Created: latency-svc-g6hjv May 7 13:45:56.272: INFO: Got endpoints: latency-svc-g6hjv [1.099934758s] May 7 13:45:56.337: INFO: Created: latency-svc-chmgs May 7 13:45:56.391: INFO: Got endpoints: latency-svc-chmgs [1.119060321s] May 7 13:45:56.420: INFO: Created: latency-svc-9f4sg May 7 13:45:56.438: INFO: Got endpoints: latency-svc-9f4sg [1.12751736s] May 7 13:45:56.462: INFO: Created: latency-svc-xv59g May 7 13:45:56.481: INFO: Got endpoints: latency-svc-xv59g [1.133971964s] May 7 13:45:56.529: INFO: Created: latency-svc-vf96t May 7 13:45:56.558: INFO: Got endpoints: latency-svc-vf96t [1.14608564s] May 7 13:45:56.560: INFO: Created: latency-svc-h6f9h May 7 13:45:56.572: INFO: Got endpoints: latency-svc-h6f9h [1.097629762s] May 7 13:45:56.601: INFO: Created: latency-svc-rpsdt May 7 13:45:56.615: INFO: Got endpoints: latency-svc-rpsdt [1.062097373s] May 7 13:45:56.680: INFO: Created: latency-svc-2nqhk May 7 13:45:56.682: INFO: Got endpoints: latency-svc-2nqhk [1.008764007s] May 7 
13:45:56.708: INFO: Created: latency-svc-h5tpc May 7 13:45:56.729: INFO: Got endpoints: latency-svc-h5tpc [1.001636436s] May 7 13:45:56.756: INFO: Created: latency-svc-b2nm5 May 7 13:45:56.771: INFO: Got endpoints: latency-svc-b2nm5 [960.905905ms] May 7 13:45:56.829: INFO: Created: latency-svc-s9wrw May 7 13:45:56.832: INFO: Got endpoints: latency-svc-s9wrw [983.56759ms] May 7 13:45:56.900: INFO: Created: latency-svc-9schm May 7 13:45:56.911: INFO: Got endpoints: latency-svc-9schm [968.289466ms] May 7 13:45:56.984: INFO: Created: latency-svc-9w48s May 7 13:45:57.007: INFO: Got endpoints: latency-svc-9w48s [1.019618116s] May 7 13:45:57.044: INFO: Created: latency-svc-7tgnh May 7 13:45:57.067: INFO: Got endpoints: latency-svc-7tgnh [980.612048ms] May 7 13:45:57.116: INFO: Created: latency-svc-kmlwf May 7 13:45:57.120: INFO: Got endpoints: latency-svc-kmlwf [945.258605ms] May 7 13:45:57.176: INFO: Created: latency-svc-t54gm May 7 13:45:57.188: INFO: Got endpoints: latency-svc-t54gm [915.525772ms] May 7 13:45:57.212: INFO: Created: latency-svc-f6kpp May 7 13:45:57.260: INFO: Got endpoints: latency-svc-f6kpp [868.288655ms] May 7 13:45:57.290: INFO: Created: latency-svc-lbfsn May 7 13:45:57.303: INFO: Got endpoints: latency-svc-lbfsn [864.345931ms] May 7 13:45:57.326: INFO: Created: latency-svc-7fl7m May 7 13:45:57.339: INFO: Got endpoints: latency-svc-7fl7m [858.220878ms] May 7 13:45:57.404: INFO: Created: latency-svc-sp95z May 7 13:45:57.413: INFO: Got endpoints: latency-svc-sp95z [854.478841ms] May 7 13:45:57.440: INFO: Created: latency-svc-d5mt6 May 7 13:45:57.455: INFO: Got endpoints: latency-svc-d5mt6 [882.874571ms] May 7 13:45:57.476: INFO: Created: latency-svc-7gtkk May 7 13:45:57.490: INFO: Got endpoints: latency-svc-7gtkk [875.758296ms] May 7 13:45:57.535: INFO: Created: latency-svc-xp7lg May 7 13:45:57.545: INFO: Got endpoints: latency-svc-xp7lg [862.70334ms] May 7 13:45:57.566: INFO: Created: latency-svc-vvkzv May 7 13:45:57.582: INFO: Got endpoints: latency-svc-vvkzv [852.649276ms] May 7 13:45:57.601: INFO: Created: latency-svc-9xdpb May 7 13:45:57.618: INFO: Got endpoints: latency-svc-9xdpb [846.603739ms] May 7 13:45:57.674: INFO: Created: latency-svc-dlh96 May 7 13:45:57.708: INFO: Got endpoints: latency-svc-dlh96 [876.447803ms] May 7 13:45:57.740: INFO: Created: latency-svc-v4z88 May 7 13:45:57.847: INFO: Got endpoints: latency-svc-v4z88 [936.270183ms] May 7 13:45:57.878: INFO: Created: latency-svc-rvljp May 7 13:45:57.907: INFO: Got endpoints: latency-svc-rvljp [900.080728ms] May 7 13:45:57.932: INFO: Created: latency-svc-ftz9s May 7 13:45:57.979: INFO: Got endpoints: latency-svc-ftz9s [911.384259ms] May 7 13:45:58.015: INFO: Created: latency-svc-72rcs May 7 13:45:58.034: INFO: Got endpoints: latency-svc-72rcs [914.475599ms] May 7 13:45:58.058: INFO: Created: latency-svc-lbqh4 May 7 13:45:58.077: INFO: Got endpoints: latency-svc-lbqh4 [888.907411ms] May 7 13:45:58.123: INFO: Created: latency-svc-f9jgs May 7 13:45:58.126: INFO: Got endpoints: latency-svc-f9jgs [865.654457ms] May 7 13:45:58.178: INFO: Created: latency-svc-x86tb May 7 13:45:58.215: INFO: Got endpoints: latency-svc-x86tb [912.547311ms] May 7 13:45:58.290: INFO: Created: latency-svc-w9xfl May 7 13:45:58.293: INFO: Got endpoints: latency-svc-w9xfl [953.301433ms] May 7 13:45:58.388: INFO: Created: latency-svc-g68fz May 7 13:45:58.434: INFO: Got endpoints: latency-svc-g68fz [1.020925089s] May 7 13:45:58.460: INFO: Created: latency-svc-ntrdl May 7 13:45:58.489: INFO: Got endpoints: latency-svc-ntrdl [1.034344802s] May 7 
13:45:58.525: INFO: Created: latency-svc-z45jm May 7 13:45:58.570: INFO: Got endpoints: latency-svc-z45jm [1.079907934s] May 7 13:45:58.604: INFO: Created: latency-svc-qlblw May 7 13:45:58.619: INFO: Got endpoints: latency-svc-qlblw [1.073858648s] May 7 13:45:58.640: INFO: Created: latency-svc-7r58n May 7 13:45:58.650: INFO: Got endpoints: latency-svc-7r58n [1.068624774s] May 7 13:45:58.697: INFO: Created: latency-svc-hl24x May 7 13:45:58.700: INFO: Got endpoints: latency-svc-hl24x [1.082392501s] May 7 13:45:58.742: INFO: Created: latency-svc-w5gl9 May 7 13:45:58.758: INFO: Got endpoints: latency-svc-w5gl9 [1.050158654s] May 7 13:45:58.789: INFO: Created: latency-svc-8ngpq May 7 13:45:58.835: INFO: Got endpoints: latency-svc-8ngpq [987.364892ms] May 7 13:45:58.850: INFO: Created: latency-svc-4zk8n May 7 13:45:58.867: INFO: Got endpoints: latency-svc-4zk8n [959.696992ms] May 7 13:45:58.891: INFO: Created: latency-svc-5ghd5 May 7 13:45:58.903: INFO: Got endpoints: latency-svc-5ghd5 [924.395009ms] May 7 13:45:58.928: INFO: Created: latency-svc-77zgj May 7 13:45:58.967: INFO: Got endpoints: latency-svc-77zgj [932.424083ms] May 7 13:45:58.981: INFO: Created: latency-svc-2lt8q May 7 13:45:59.000: INFO: Got endpoints: latency-svc-2lt8q [923.363092ms] May 7 13:45:59.023: INFO: Created: latency-svc-45qtq May 7 13:45:59.036: INFO: Got endpoints: latency-svc-45qtq [910.933746ms] May 7 13:45:59.059: INFO: Created: latency-svc-9n6mr May 7 13:45:59.105: INFO: Got endpoints: latency-svc-9n6mr [889.122793ms] May 7 13:45:59.121: INFO: Created: latency-svc-wd4hx May 7 13:45:59.156: INFO: Got endpoints: latency-svc-wd4hx [862.791483ms] May 7 13:45:59.198: INFO: Created: latency-svc-2h9kj May 7 13:45:59.236: INFO: Got endpoints: latency-svc-2h9kj [802.422431ms] May 7 13:45:59.251: INFO: Created: latency-svc-npzfj May 7 13:45:59.267: INFO: Got endpoints: latency-svc-npzfj [777.512379ms] May 7 13:45:59.311: INFO: Created: latency-svc-pcdrd May 7 13:45:59.386: INFO: Got endpoints: latency-svc-pcdrd [815.723334ms] May 7 13:45:59.402: INFO: Created: latency-svc-hcjgt May 7 13:45:59.417: INFO: Got endpoints: latency-svc-hcjgt [798.361784ms] May 7 13:45:59.450: INFO: Created: latency-svc-2v6pm May 7 13:45:59.484: INFO: Got endpoints: latency-svc-2v6pm [833.312128ms] May 7 13:45:59.542: INFO: Created: latency-svc-zswvv May 7 13:45:59.550: INFO: Got endpoints: latency-svc-zswvv [849.598021ms] May 7 13:45:59.588: INFO: Created: latency-svc-czbw9 May 7 13:45:59.604: INFO: Got endpoints: latency-svc-czbw9 [845.700498ms] May 7 13:45:59.680: INFO: Created: latency-svc-5ml8p May 7 13:45:59.682: INFO: Got endpoints: latency-svc-5ml8p [847.709781ms] May 7 13:45:59.743: INFO: Created: latency-svc-v5bz5 May 7 13:45:59.767: INFO: Got endpoints: latency-svc-v5bz5 [900.342933ms] May 7 13:45:59.818: INFO: Created: latency-svc-7lhwk May 7 13:45:59.845: INFO: Got endpoints: latency-svc-7lhwk [942.225113ms] May 7 13:45:59.895: INFO: Created: latency-svc-z4xqw May 7 13:45:59.954: INFO: Got endpoints: latency-svc-z4xqw [987.610408ms] May 7 13:45:59.970: INFO: Created: latency-svc-g7fzr May 7 13:45:59.996: INFO: Got endpoints: latency-svc-g7fzr [995.839321ms] May 7 13:46:00.025: INFO: Created: latency-svc-g6qmq May 7 13:46:00.039: INFO: Got endpoints: latency-svc-g6qmq [1.001993154s] May 7 13:46:00.093: INFO: Created: latency-svc-4562q May 7 13:46:00.097: INFO: Got endpoints: latency-svc-4562q [991.684223ms] May 7 13:46:00.127: INFO: Created: latency-svc-rz4kw May 7 13:46:00.142: INFO: Got endpoints: latency-svc-rz4kw [985.845777ms] May 
7 13:46:00.169: INFO: Created: latency-svc-gnclr May 7 13:46:00.224: INFO: Got endpoints: latency-svc-gnclr [987.678697ms] May 7 13:46:00.307: INFO: Created: latency-svc-7dvtd May 7 13:46:00.322: INFO: Got endpoints: latency-svc-7dvtd [1.055482382s] May 7 13:46:00.368: INFO: Created: latency-svc-56pz8 May 7 13:46:00.371: INFO: Got endpoints: latency-svc-56pz8 [984.639435ms] May 7 13:46:00.427: INFO: Created: latency-svc-hvqkm May 7 13:46:00.437: INFO: Got endpoints: latency-svc-hvqkm [1.01964301s] May 7 13:46:00.463: INFO: Created: latency-svc-99zqm May 7 13:46:00.499: INFO: Got endpoints: latency-svc-99zqm [1.015572648s] May 7 13:46:00.511: INFO: Created: latency-svc-wgfft May 7 13:46:00.540: INFO: Got endpoints: latency-svc-wgfft [990.027298ms] May 7 13:46:00.591: INFO: Created: latency-svc-8brzk May 7 13:46:00.637: INFO: Got endpoints: latency-svc-8brzk [1.033275041s] May 7 13:46:00.638: INFO: Created: latency-svc-pv9cm May 7 13:46:00.642: INFO: Got endpoints: latency-svc-pv9cm [959.847903ms] May 7 13:46:00.679: INFO: Created: latency-svc-wgltw May 7 13:46:00.697: INFO: Got endpoints: latency-svc-wgltw [930.097786ms] May 7 13:46:00.775: INFO: Created: latency-svc-z8mf6 May 7 13:46:00.778: INFO: Got endpoints: latency-svc-z8mf6 [932.680943ms] May 7 13:46:00.817: INFO: Created: latency-svc-7tzn4 May 7 13:46:00.842: INFO: Got endpoints: latency-svc-7tzn4 [887.511106ms] May 7 13:46:00.907: INFO: Created: latency-svc-xbp5v May 7 13:46:00.910: INFO: Got endpoints: latency-svc-xbp5v [913.253429ms] May 7 13:46:00.943: INFO: Created: latency-svc-hgcsr May 7 13:46:00.957: INFO: Got endpoints: latency-svc-hgcsr [918.19896ms] May 7 13:46:00.985: INFO: Created: latency-svc-bsr9n May 7 13:46:01.005: INFO: Got endpoints: latency-svc-bsr9n [908.210903ms] May 7 13:46:01.056: INFO: Created: latency-svc-js6cm May 7 13:46:01.092: INFO: Got endpoints: latency-svc-js6cm [950.722953ms] May 7 13:46:01.093: INFO: Created: latency-svc-csb96 May 7 13:46:01.108: INFO: Got endpoints: latency-svc-csb96 [883.402288ms] May 7 13:46:01.142: INFO: Created: latency-svc-lqhzx May 7 13:46:01.194: INFO: Got endpoints: latency-svc-lqhzx [871.650963ms] May 7 13:46:01.195: INFO: Created: latency-svc-g49x5 May 7 13:46:01.211: INFO: Got endpoints: latency-svc-g49x5 [839.569092ms] May 7 13:46:01.236: INFO: Created: latency-svc-9ppk8 May 7 13:46:01.253: INFO: Got endpoints: latency-svc-9ppk8 [816.377187ms] May 7 13:46:01.274: INFO: Created: latency-svc-7pvw9 May 7 13:46:01.338: INFO: Got endpoints: latency-svc-7pvw9 [838.253195ms] May 7 13:46:01.364: INFO: Created: latency-svc-h5nxv May 7 13:46:01.405: INFO: Got endpoints: latency-svc-h5nxv [864.272824ms] May 7 13:46:01.458: INFO: Created: latency-svc-z8g6j May 7 13:46:01.470: INFO: Got endpoints: latency-svc-z8g6j [832.633993ms] May 7 13:46:01.508: INFO: Created: latency-svc-zh2rr May 7 13:46:01.525: INFO: Got endpoints: latency-svc-zh2rr [882.867971ms] May 7 13:46:01.549: INFO: Created: latency-svc-jv2k7 May 7 13:46:01.580: INFO: Got endpoints: latency-svc-jv2k7 [882.450592ms] May 7 13:46:01.614: INFO: Created: latency-svc-hvjvc May 7 13:46:01.628: INFO: Got endpoints: latency-svc-hvjvc [849.815836ms] May 7 13:46:01.658: INFO: Created: latency-svc-tgqxt May 7 13:46:01.670: INFO: Got endpoints: latency-svc-tgqxt [827.95478ms] May 7 13:46:01.704: INFO: Created: latency-svc-wljkj May 7 13:46:01.718: INFO: Got endpoints: latency-svc-wljkj [808.639491ms] May 7 13:46:01.771: INFO: Created: latency-svc-pxw62 May 7 13:46:01.835: INFO: Got endpoints: latency-svc-pxw62 [877.7064ms] May 7 
13:46:01.861: INFO: Created: latency-svc-lk84v May 7 13:46:01.888: INFO: Got endpoints: latency-svc-lk84v [882.539676ms] May 7 13:46:01.914: INFO: Created: latency-svc-hnzv5 May 7 13:46:01.966: INFO: Got endpoints: latency-svc-hnzv5 [873.967805ms] May 7 13:46:01.999: INFO: Created: latency-svc-wtxfc May 7 13:46:02.015: INFO: Got endpoints: latency-svc-wtxfc [907.159926ms] May 7 13:46:02.042: INFO: Created: latency-svc-kfr6k May 7 13:46:02.050: INFO: Got endpoints: latency-svc-kfr6k [856.441238ms] May 7 13:46:02.123: INFO: Created: latency-svc-jmx9w May 7 13:46:02.126: INFO: Got endpoints: latency-svc-jmx9w [915.16761ms] May 7 13:46:02.154: INFO: Created: latency-svc-k49c6 May 7 13:46:02.171: INFO: Got endpoints: latency-svc-k49c6 [917.558439ms] May 7 13:46:02.266: INFO: Created: latency-svc-j8b7f May 7 13:46:02.269: INFO: Got endpoints: latency-svc-j8b7f [931.281658ms] May 7 13:46:02.328: INFO: Created: latency-svc-867rm May 7 13:46:02.346: INFO: Got endpoints: latency-svc-867rm [941.640652ms] May 7 13:46:02.399: INFO: Created: latency-svc-jgt4p May 7 13:46:02.402: INFO: Got endpoints: latency-svc-jgt4p [931.417655ms] May 7 13:46:02.430: INFO: Created: latency-svc-ggkwr May 7 13:46:02.455: INFO: Got endpoints: latency-svc-ggkwr [929.62758ms] May 7 13:46:02.484: INFO: Created: latency-svc-9lxpr May 7 13:46:02.529: INFO: Got endpoints: latency-svc-9lxpr [949.274105ms] May 7 13:46:02.544: INFO: Created: latency-svc-hdntf May 7 13:46:02.558: INFO: Got endpoints: latency-svc-hdntf [929.697386ms] May 7 13:46:02.623: INFO: Created: latency-svc-77dkq May 7 13:46:02.661: INFO: Got endpoints: latency-svc-77dkq [991.049911ms] May 7 13:46:02.679: INFO: Created: latency-svc-7b45m May 7 13:46:02.690: INFO: Got endpoints: latency-svc-7b45m [972.051086ms] May 7 13:46:02.718: INFO: Created: latency-svc-cmhpp May 7 13:46:02.733: INFO: Got endpoints: latency-svc-cmhpp [898.7823ms] May 7 13:46:02.800: INFO: Created: latency-svc-wtntf May 7 13:46:02.802: INFO: Got endpoints: latency-svc-wtntf [914.238144ms] May 7 13:46:02.857: INFO: Created: latency-svc-b5x4t May 7 13:46:02.872: INFO: Got endpoints: latency-svc-b5x4t [905.375017ms] May 7 13:46:02.931: INFO: Created: latency-svc-l46sl May 7 13:46:02.934: INFO: Got endpoints: latency-svc-l46sl [918.663544ms] May 7 13:46:02.964: INFO: Created: latency-svc-nb72x May 7 13:46:02.981: INFO: Got endpoints: latency-svc-nb72x [930.060647ms] May 7 13:46:03.001: INFO: Created: latency-svc-jxgsl May 7 13:46:03.018: INFO: Got endpoints: latency-svc-jxgsl [892.557806ms] May 7 13:46:03.076: INFO: Created: latency-svc-vmddq May 7 13:46:03.076: INFO: Got endpoints: latency-svc-vmddq [905.125569ms] May 7 13:46:03.151: INFO: Created: latency-svc-m7tr6 May 7 13:46:03.206: INFO: Got endpoints: latency-svc-m7tr6 [937.074428ms] May 7 13:46:03.232: INFO: Created: latency-svc-z75b5 May 7 13:46:03.270: INFO: Got endpoints: latency-svc-z75b5 [923.500958ms] May 7 13:46:03.374: INFO: Created: latency-svc-bl5zg May 7 13:46:03.376: INFO: Got endpoints: latency-svc-bl5zg [974.370928ms] May 7 13:46:03.376: INFO: Latencies: [78.143476ms 108.351594ms 180.055206ms 211.664002ms 253.907461ms 330.53307ms 386.208457ms 422.413941ms 489.452656ms 543.271332ms 681.212244ms 775.16465ms 777.512379ms 798.361784ms 802.422431ms 808.639491ms 815.723334ms 816.377187ms 817.747095ms 827.95478ms 828.710016ms 829.675601ms 832.633993ms 833.312128ms 838.253195ms 839.569092ms 845.700498ms 846.603739ms 847.709781ms 849.598021ms 849.815836ms 851.787638ms 852.649276ms 854.478841ms 855.304848ms 856.257867ms 856.441238ms 
856.986298ms 858.220878ms 862.70334ms 862.791483ms 864.272824ms 864.345931ms 865.654457ms 868.288655ms 871.650963ms 873.967805ms 875.522875ms 875.758296ms 876.447803ms 877.7064ms 882.450592ms 882.460267ms 882.539676ms 882.867971ms 882.874571ms 883.251506ms 883.402288ms 887.511106ms 888.885122ms 888.907411ms 889.122793ms 892.557806ms 894.570867ms 898.7823ms 899.82969ms 900.080728ms 900.342933ms 905.125569ms 905.375017ms 905.882289ms 907.159926ms 908.210903ms 908.636919ms 910.933746ms 911.384259ms 912.547311ms 913.253429ms 914.238144ms 914.475599ms 915.16761ms 915.525772ms 916.651391ms 917.558439ms 918.19896ms 918.663544ms 923.363092ms 923.500958ms 923.513234ms 924.395009ms 929.369666ms 929.62758ms 929.697386ms 930.060647ms 930.097786ms 931.281658ms 931.417655ms 932.424083ms 932.471492ms 932.680943ms 936.270183ms 937.074428ms 941.640652ms 941.987572ms 942.225113ms 942.3261ms 942.656324ms 944.489054ms 945.258605ms 949.274105ms 950.722953ms 953.301433ms 959.595559ms 959.696992ms 959.847903ms 960.087284ms 960.905905ms 963.42084ms 965.009541ms 965.598293ms 968.289466ms 968.768584ms 971.409402ms 971.675986ms 972.051086ms 972.375779ms 974.370928ms 978.654283ms 980.612048ms 981.846407ms 982.257868ms 982.98703ms 983.56759ms 983.707832ms 984.639435ms 985.313161ms 985.845777ms 987.364892ms 987.610408ms 987.678697ms 990.027298ms 991.049911ms 991.684223ms 992.940771ms 995.767681ms 995.839321ms 996.488351ms 1.001636436s 1.001950222s 1.001993154s 1.002118666s 1.00419792s 1.00741597s 1.008764007s 1.012315704s 1.013375398s 1.01456845s 1.015572648s 1.015584737s 1.016954364s 1.019378234s 1.019618116s 1.01964301s 1.020925089s 1.021158714s 1.022599545s 1.022888054s 1.026s 1.028158263s 1.031530612s 1.032208372s 1.032643281s 1.033275041s 1.034344802s 1.034947439s 1.038073179s 1.038166119s 1.046268695s 1.046983165s 1.050158654s 1.055482382s 1.058264282s 1.062097373s 1.065829412s 1.067478696s 1.068624774s 1.069720803s 1.073858648s 1.076128751s 1.077440875s 1.079907934s 1.082392501s 1.097629762s 1.099934758s 1.101954687s 1.103791077s 1.119060321s 1.12751736s 1.133971964s 1.14608564s] May 7 13:46:03.376: INFO: 50 %ile: 936.270183ms May 7 13:46:03.376: INFO: 90 %ile: 1.055482382s May 7 13:46:03.376: INFO: 99 %ile: 1.133971964s May 7 13:46:03.376: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:46:03.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4909" for this suite. 
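Each Created/Got endpoints pair above is one sample: create a Service selecting the replication controller's pod, then measure how long until the Service's Endpoints object gains addresses; the 200 samples yield the percentiles printed at the end. A rough client-go sketch of a single sample, with pre-context signatures; the names, namespace, and pod label are illustrative.

package main

import (
    "fmt"
    "time"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ns := "default"
    start := time.Now()
    svc := &v1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "latency-svc-demo"},
        Spec: v1.ServiceSpec{
            Selector: map[string]string{"name": "svc-latency-rc"}, // the RC pod's label
            Ports:    []v1.ServicePort{{Port: 80}},
        },
    }
    if _, err := cs.CoreV1().Services(ns).Create(svc); err != nil {
        panic(err)
    }
    // Watch the matching Endpoints object until it has addresses; the
    // elapsed time is one latency sample.
    w, err := cs.CoreV1().Endpoints(ns).Watch(metav1.ListOptions{
        FieldSelector: "metadata.name=" + svc.Name,
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        if ep, ok := ev.Object.(*v1.Endpoints); ok && len(ep.Subsets) > 0 {
            fmt.Printf("Got endpoints: %s [%v]\n", svc.Name, time.Since(start))
            return
        }
    }
}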
May 7 13:46:41.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:46:41.570: INFO: namespace svc-latency-4909 deletion completed in 38.181103169s • [SLOW TEST:55.178 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:46:41.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server May 7 13:46:41.724: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:46:41.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4163" for this suite. 
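The proxy test passes -p 0 so the proxy binds an ephemeral port; the test then scrapes the chosen address from the process output before curling /api/. A rough Go sketch of that pattern, assuming kubectl's current "Starting to serve on" banner (the banner text is not a stable interface):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("kubectl", "proxy", "-p", "0")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		line := scanner.Text()
		// e.g. "Starting to serve on 127.0.0.1:41739"
		if addr := strings.TrimPrefix(line, "Starting to serve on "); addr != line {
			fmt.Println("proxy listening at", addr)
			break
		}
	}
}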
May 7 13:46:47.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:46:47.935: INFO: namespace kubectl-4163 deletion completed in 6.115710712s • [SLOW TEST:6.364 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:46:47.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 7 13:46:52.744: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 7 13:47:02.856: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:47:02.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8951" for this suite. 
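The Delete Grace Period test deletes the pod with a grace period and then watches for the kubelet to observe the termination notice. A minimal sketch of the graceful-delete call using the 1.15-era client-go API this suite is built against (newer client-go also takes a context and by-value options); pod name and namespace are illustrative:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	grace := int64(30) // seconds the kubelet may wait before SIGKILL
	if err := clientset.CoreV1().Pods("default").Delete("my-pod",
		&metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
}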
May 7 13:47:08.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:47:08.950: INFO: namespace pods-8951 deletion completed in 6.086326605s • [SLOW TEST:21.015 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:47:08.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 7 13:47:09.029: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08d70eb7-b7ca-470f-bad7-00383886cf62" in namespace "projected-4846" to be "success or failure" May 7 13:47:09.044: INFO: Pod "downwardapi-volume-08d70eb7-b7ca-470f-bad7-00383886cf62": Phase="Pending", Reason="", readiness=false. Elapsed: 14.745056ms May 7 13:47:11.049: INFO: Pod "downwardapi-volume-08d70eb7-b7ca-470f-bad7-00383886cf62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019758725s May 7 13:47:13.053: INFO: Pod "downwardapi-volume-08d70eb7-b7ca-470f-bad7-00383886cf62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024069774s May 7 13:47:15.058: INFO: Pod "downwardapi-volume-08d70eb7-b7ca-470f-bad7-00383886cf62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028790347s STEP: Saw pod success May 7 13:47:15.058: INFO: Pod "downwardapi-volume-08d70eb7-b7ca-470f-bad7-00383886cf62" satisfied condition "success or failure" May 7 13:47:15.061: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-08d70eb7-b7ca-470f-bad7-00383886cf62 container client-container: STEP: delete the pod May 7 13:47:15.086: INFO: Waiting for pod downwardapi-volume-08d70eb7-b7ca-470f-bad7-00383886cf62 to disappear May 7 13:47:15.091: INFO: Pod downwardapi-volume-08d70eb7-b7ca-470f-bad7-00383886cf62 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:47:15.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4846" for this suite. 
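The projected downwardAPI test renders the container's CPU request into a file that the client-container reads back. A sketch of the volume involved, with illustrative path and container names; switching Resource to "requests.memory" gives the Downward API memory-request variant that appears later in this log:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// podInfoVolume builds a projected downward-API volume that exposes the
// container's CPU request as the file "cpu_request" in the mount.
func podInfoVolume() v1.Volume {
	return v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &v1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				}},
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", podInfoVolume())
}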
May 7 13:47:21.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:47:21.198: INFO: namespace projected-4846 deletion completed in 6.103499921s • [SLOW TEST:12.248 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:47:21.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod May 7 13:47:21.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4469' May 7 13:47:21.582: INFO: stderr: "" May 7 13:47:21.582: INFO: stdout: "pod/pause created\n" May 7 13:47:21.582: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 7 13:47:21.582: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4469" to be "running and ready" May 7 13:47:21.625: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 43.777862ms May 7 13:47:23.629: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047792041s May 7 13:47:25.633: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.051761625s May 7 13:47:25.633: INFO: Pod "pause" satisfied condition "running and ready" May 7 13:47:25.633: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod May 7 13:47:25.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4469' May 7 13:47:25.729: INFO: stderr: "" May 7 13:47:25.730: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 7 13:47:25.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4469' May 7 13:47:25.818: INFO: stderr: "" May 7 13:47:25.818: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 7 13:47:25.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4469' May 7 13:47:25.926: INFO: stderr: "" May 7 13:47:25.927: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 7 13:47:25.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4469' May 7 13:47:26.010: INFO: stderr: "" May 7 13:47:26.010: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources May 7 13:47:26.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4469' May 7 13:47:26.192: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 7 13:47:26.192: INFO: stdout: "pod \"pause\" force deleted\n" May 7 13:47:26.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4469' May 7 13:47:26.293: INFO: stderr: "No resources found.\n" May 7 13:47:26.293: INFO: stdout: "" May 7 13:47:26.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4469 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 7 13:47:26.387: INFO: stderr: "" May 7 13:47:26.387: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:47:26.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4469" for this suite. 
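Note the trailing dash in `kubectl label pods pause testing-label-`: that is kubectl's remove-label syntax, which is why the second `get pod -L testing-label` shows an empty column. The same add/verify/remove sequence, sketched with os/exec (namespace taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl and echoes the combined output.
func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("kubectl %v\n%s(err=%v)\n", args, out, err)
}

func main() {
	ns := "--namespace=kubectl-4469"
	run("label", "pods", "pause", "testing-label=testing-label-value", ns)
	run("get", "pod", "pause", "-L", "testing-label", ns)
	run("label", "pods", "pause", "testing-label-", ns) // trailing '-' removes the label
}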
May 7 13:47:32.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:47:32.678: INFO: namespace kubectl-4469 deletion completed in 6.288190327s • [SLOW TEST:11.480 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:47:32.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0507 13:47:33.848378 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 7 13:47:33.848: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:47:33.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9037" for this suite. 
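"Not orphaning" here corresponds to a non-orphan propagation policy on the Deployment delete, after which the garbage collector removes the owned ReplicaSet and pods (the log shows one intermediate poll still seeing 1 rs / 2 pods before collection completes). A 1.15-era client-go sketch with an illustrative deployment name; the Orphan test further down in this log flips the policy to metav1.DeletePropagationOrphan instead:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Background (non-orphan): the GC deletes the ReplicaSets and pods the
	// Deployment owns once the Deployment itself is gone.
	policy := metav1.DeletePropagationBackground
	if err := clientset.AppsV1().Deployments("default").Delete("my-deployment",
		&metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}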
May 7 13:47:39.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:47:39.932: INFO: namespace gc-9037 deletion completed in 6.081879729s • [SLOW TEST:7.254 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:47:39.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-42183d1c-5937-4e0f-9d2b-8fc04155241e STEP: Creating a pod to test consume secrets May 7 13:47:40.079: INFO: Waiting up to 5m0s for pod "pod-secrets-ce920538-4bfd-43c1-8fe5-9ecc4d2e60dc" in namespace "secrets-2748" to be "success or failure" May 7 13:47:40.092: INFO: Pod "pod-secrets-ce920538-4bfd-43c1-8fe5-9ecc4d2e60dc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.096189ms May 7 13:47:42.096: INFO: Pod "pod-secrets-ce920538-4bfd-43c1-8fe5-9ecc4d2e60dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01719299s May 7 13:47:44.100: INFO: Pod "pod-secrets-ce920538-4bfd-43c1-8fe5-9ecc4d2e60dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020716762s STEP: Saw pod success May 7 13:47:44.100: INFO: Pod "pod-secrets-ce920538-4bfd-43c1-8fe5-9ecc4d2e60dc" satisfied condition "success or failure" May 7 13:47:44.103: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-ce920538-4bfd-43c1-8fe5-9ecc4d2e60dc container secret-env-test: STEP: delete the pod May 7 13:47:44.137: INFO: Waiting for pod pod-secrets-ce920538-4bfd-43c1-8fe5-9ecc4d2e60dc to disappear May 7 13:47:44.146: INFO: Pod pod-secrets-ce920538-4bfd-43c1-8fe5-9ecc4d2e60dc no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:47:44.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2748" for this suite. 
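The secret-env test injects a secret value into the container's environment rather than mounting it as a volume. A sketch of the consuming container spec, with illustrative secret and key names in place of the generated ones above:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// secretEnvContainer maps one key of a secret into the SECRET_DATA
// environment variable of a busybox container.
func secretEnvContainer() v1.Container {
	return v1.Container{
		Name:    "secret-env-test",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"sh", "-c", "env"},
		Env: []v1.EnvVar{{
			Name: "SECRET_DATA",
			ValueFrom: &v1.EnvVarSource{
				SecretKeyRef: &v1.SecretKeySelector{
					LocalObjectReference: v1.LocalObjectReference{Name: "my-secret"},
					Key:                  "data-1",
				},
			},
		}},
	}
}

func main() {
	fmt.Printf("%+v\n", secretEnvContainer())
}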
May 7 13:47:50.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:47:50.254: INFO: namespace secrets-2748 deletion completed in 6.104221966s • [SLOW TEST:10.322 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:47:50.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 7 13:47:50.357: INFO: Waiting up to 5m0s for pod "downwardapi-volume-555c29fe-78a7-43ff-a7b2-9e1ca013457e" in namespace "downward-api-677" to be "success or failure" May 7 13:47:50.386: INFO: Pod "downwardapi-volume-555c29fe-78a7-43ff-a7b2-9e1ca013457e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.667728ms May 7 13:47:52.390: INFO: Pod "downwardapi-volume-555c29fe-78a7-43ff-a7b2-9e1ca013457e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033319133s May 7 13:47:54.395: INFO: Pod "downwardapi-volume-555c29fe-78a7-43ff-a7b2-9e1ca013457e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037695986s STEP: Saw pod success May 7 13:47:54.395: INFO: Pod "downwardapi-volume-555c29fe-78a7-43ff-a7b2-9e1ca013457e" satisfied condition "success or failure" May 7 13:47:54.399: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-555c29fe-78a7-43ff-a7b2-9e1ca013457e container client-container: STEP: delete the pod May 7 13:47:54.563: INFO: Waiting for pod downwardapi-volume-555c29fe-78a7-43ff-a7b2-9e1ca013457e to disappear May 7 13:47:54.589: INFO: Pod downwardapi-volume-555c29fe-78a7-43ff-a7b2-9e1ca013457e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:47:54.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-677" for this suite. 
May 7 13:48:00.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:48:00.692: INFO: namespace downward-api-677 deletion completed in 6.100632962s • [SLOW TEST:10.437 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:48:00.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 7 13:48:05.802: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:48:06.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3336" for this suite. 
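Adoption and release both hinge on the label selector: a bare pod whose labels match the ReplicaSet's selector acquires a controller ownerReference, and relabeling it so the selector no longer matches makes the controller drop that reference again. A sketch of the relabeling step with 1.15-era client-go signatures (names illustrative):

package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// releasePod changes a label so the owning ReplicaSet's selector no longer
// matches, prompting the controller to release the pod.
func releasePod(clientset *kubernetes.Clientset, ns, name string) error {
	pod, err := clientset.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Labels["name"] = "released" // any value outside the selector
	_, err = clientset.CoreV1().Pods(ns).Update(pod)
	return err
}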
May 7 13:48:28.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:48:29.065: INFO: namespace replicaset-3336 deletion completed in 22.239571697s • [SLOW TEST:28.371 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:48:29.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 7 13:48:33.717: INFO: Successfully updated pod "labelsupdate3a3495e9-0f77-4e06-b655-4db4409f6f52" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:48:37.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2408" for this suite. 
May 7 13:48:59.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:48:59.858: INFO: namespace projected-2408 deletion completed in 22.098072985s • [SLOW TEST:30.793 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:48:59.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 7 13:48:59.941: INFO: Waiting up to 5m0s for pod "pod-01ca2ae8-04e0-4aa9-a01b-13375a3bad1a" in namespace "emptydir-7771" to be "success or failure" May 7 13:48:59.944: INFO: Pod "pod-01ca2ae8-04e0-4aa9-a01b-13375a3bad1a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.040915ms May 7 13:49:01.948: INFO: Pod "pod-01ca2ae8-04e0-4aa9-a01b-13375a3bad1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007373313s May 7 13:49:03.952: INFO: Pod "pod-01ca2ae8-04e0-4aa9-a01b-13375a3bad1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011049529s STEP: Saw pod success May 7 13:49:03.952: INFO: Pod "pod-01ca2ae8-04e0-4aa9-a01b-13375a3bad1a" satisfied condition "success or failure" May 7 13:49:03.955: INFO: Trying to get logs from node iruya-worker pod pod-01ca2ae8-04e0-4aa9-a01b-13375a3bad1a container test-container: STEP: delete the pod May 7 13:49:03.992: INFO: Waiting for pod pod-01ca2ae8-04e0-4aa9-a01b-13375a3bad1a to disappear May 7 13:49:03.998: INFO: Pod pod-01ca2ae8-04e0-4aa9-a01b-13375a3bad1a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:49:03.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7771" for this suite. 
May 7 13:49:10.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:49:10.104: INFO: namespace emptydir-7771 deletion completed in 6.103155609s • [SLOW TEST:10.246 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:49:10.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 7 13:49:10.168: INFO: Waiting up to 5m0s for pod "pod-6d3aff14-d0c2-44f9-9a34-6c65dafb102f" in namespace "emptydir-5892" to be "success or failure" May 7 13:49:10.172: INFO: Pod "pod-6d3aff14-d0c2-44f9-9a34-6c65dafb102f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.583478ms May 7 13:49:12.176: INFO: Pod "pod-6d3aff14-d0c2-44f9-9a34-6c65dafb102f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007920573s May 7 13:49:14.180: INFO: Pod "pod-6d3aff14-d0c2-44f9-9a34-6c65dafb102f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011918858s STEP: Saw pod success May 7 13:49:14.180: INFO: Pod "pod-6d3aff14-d0c2-44f9-9a34-6c65dafb102f" satisfied condition "success or failure" May 7 13:49:14.183: INFO: Trying to get logs from node iruya-worker2 pod pod-6d3aff14-d0c2-44f9-9a34-6c65dafb102f container test-container: STEP: delete the pod May 7 13:49:14.203: INFO: Waiting for pod pod-6d3aff14-d0c2-44f9-9a34-6c65dafb102f to disappear May 7 13:49:14.207: INFO: Pod pod-6d3aff14-d0c2-44f9-9a34-6c65dafb102f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:49:14.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5892" for this suite. 
May 7 13:49:20.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:49:20.292: INFO: namespace emptydir-5892 deletion completed in 6.080903222s • [SLOW TEST:10.188 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:49:20.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller May 7 13:49:20.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2620' May 7 13:49:24.324: INFO: stderr: "" May 7 13:49:24.324: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 7 13:49:24.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2620' May 7 13:49:24.437: INFO: stderr: "" May 7 13:49:24.437: INFO: stdout: "update-demo-nautilus-bj88l update-demo-nautilus-nrmsb " May 7 13:49:24.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bj88l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2620' May 7 13:49:24.534: INFO: stderr: "" May 7 13:49:24.534: INFO: stdout: "" May 7 13:49:24.534: INFO: update-demo-nautilus-bj88l is created but not running May 7 13:49:29.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2620' May 7 13:49:29.641: INFO: stderr: "" May 7 13:49:29.641: INFO: stdout: "update-demo-nautilus-bj88l update-demo-nautilus-nrmsb " May 7 13:49:29.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bj88l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2620' May 7 13:49:29.732: INFO: stderr: "" May 7 13:49:29.732: INFO: stdout: "true" May 7 13:49:29.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bj88l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2620' May 7 13:49:29.828: INFO: stderr: "" May 7 13:49:29.828: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 13:49:29.828: INFO: validating pod update-demo-nautilus-bj88l May 7 13:49:29.832: INFO: got data: { "image": "nautilus.jpg" } May 7 13:49:29.832: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 7 13:49:29.832: INFO: update-demo-nautilus-bj88l is verified up and running May 7 13:49:29.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nrmsb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2620' May 7 13:49:29.924: INFO: stderr: "" May 7 13:49:29.924: INFO: stdout: "true" May 7 13:49:29.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nrmsb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2620' May 7 13:49:30.016: INFO: stderr: "" May 7 13:49:30.016: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 13:49:30.016: INFO: validating pod update-demo-nautilus-nrmsb May 7 13:49:30.020: INFO: got data: { "image": "nautilus.jpg" } May 7 13:49:30.020: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 7 13:49:30.020: INFO: update-demo-nautilus-nrmsb is verified up and running STEP: rolling-update to new replication controller May 7 13:49:30.022: INFO: scanned /root for discovery docs: May 7 13:49:30.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-2620' May 7 13:49:52.783: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 7 13:49:52.783: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 7 13:49:52.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2620' May 7 13:49:52.878: INFO: stderr: "" May 7 13:49:52.878: INFO: stdout: "update-demo-kitten-jnp5b update-demo-kitten-qg4p5 " May 7 13:49:52.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jnp5b -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2620' May 7 13:49:52.970: INFO: stderr: "" May 7 13:49:52.970: INFO: stdout: "true" May 7 13:49:52.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jnp5b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2620' May 7 13:49:53.069: INFO: stderr: "" May 7 13:49:53.069: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 7 13:49:53.069: INFO: validating pod update-demo-kitten-jnp5b May 7 13:49:53.073: INFO: got data: { "image": "kitten.jpg" } May 7 13:49:53.073: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 7 13:49:53.073: INFO: update-demo-kitten-jnp5b is verified up and running May 7 13:49:53.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qg4p5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2620' May 7 13:49:53.169: INFO: stderr: "" May 7 13:49:53.169: INFO: stdout: "true" May 7 13:49:53.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qg4p5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2620' May 7 13:49:53.268: INFO: stderr: "" May 7 13:49:53.268: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 7 13:49:53.268: INFO: validating pod update-demo-kitten-qg4p5 May 7 13:49:53.272: INFO: got data: { "image": "kitten.jpg" } May 7 13:49:53.272: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 7 13:49:53.272: INFO: update-demo-kitten-qg4p5 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:49:53.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2620" for this suite. 
May 7 13:50:17.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:50:17.377: INFO: namespace kubectl-2620 deletion completed in 24.101535199s • [SLOW TEST:57.084 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:50:17.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 7 13:50:17.439: INFO: Waiting up to 5m0s for pod "pod-6709fdee-e92f-4e35-b3fb-7ac6c170074b" in namespace "emptydir-6616" to be "success or failure" May 7 13:50:17.443: INFO: Pod "pod-6709fdee-e92f-4e35-b3fb-7ac6c170074b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146583ms May 7 13:50:19.455: INFO: Pod "pod-6709fdee-e92f-4e35-b3fb-7ac6c170074b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016436876s May 7 13:50:21.460: INFO: Pod "pod-6709fdee-e92f-4e35-b3fb-7ac6c170074b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020729592s STEP: Saw pod success May 7 13:50:21.460: INFO: Pod "pod-6709fdee-e92f-4e35-b3fb-7ac6c170074b" satisfied condition "success or failure" May 7 13:50:21.463: INFO: Trying to get logs from node iruya-worker pod pod-6709fdee-e92f-4e35-b3fb-7ac6c170074b container test-container: STEP: delete the pod May 7 13:50:21.498: INFO: Waiting for pod pod-6709fdee-e92f-4e35-b3fb-7ac6c170074b to disappear May 7 13:50:21.551: INFO: Pod pod-6709fdee-e92f-4e35-b3fb-7ac6c170074b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:50:21.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6616" for this suite. 
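The emptyDir matrix above varies three things: the user the test container runs as (root vs non-root), the mode of the file it writes (0644/0666/0777), and the volume medium. Only the medium is part of the volume spec itself; a minimal sketch of the two media:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// emptyDirVolume returns an emptyDir volume on the node's default medium,
// or on tmpfs when requested.
func emptyDirVolume(tmpfs bool) v1.Volume {
	medium := v1.StorageMediumDefault // "" = whatever backs the node's root dir
	if tmpfs {
		medium = v1.StorageMediumMemory // RAM-backed tmpfs
	}
	return v1.Volume{
		Name:         "test-volume",
		VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{Medium: medium}},
	}
}

func main() {
	fmt.Printf("%+v\n", emptyDirVolume(true))
}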
May 7 13:50:27.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:50:27.681: INFO: namespace emptydir-6616 deletion completed in 6.126302796s • [SLOW TEST:10.304 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:50:27.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-4403fa6f-90fb-4cb3-b8cf-f883b4c0ca9b STEP: Creating a pod to test consume configMaps May 7 13:50:27.787: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f2d031c2-6c18-4620-acdb-84e549cac8a6" in namespace "projected-2455" to be "success or failure" May 7 13:50:27.807: INFO: Pod "pod-projected-configmaps-f2d031c2-6c18-4620-acdb-84e549cac8a6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.846626ms May 7 13:50:29.811: INFO: Pod "pod-projected-configmaps-f2d031c2-6c18-4620-acdb-84e549cac8a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024389058s May 7 13:50:31.824: INFO: Pod "pod-projected-configmaps-f2d031c2-6c18-4620-acdb-84e549cac8a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037120465s STEP: Saw pod success May 7 13:50:31.824: INFO: Pod "pod-projected-configmaps-f2d031c2-6c18-4620-acdb-84e549cac8a6" satisfied condition "success or failure" May 7 13:50:31.827: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-f2d031c2-6c18-4620-acdb-84e549cac8a6 container projected-configmap-volume-test: STEP: delete the pod May 7 13:50:31.906: INFO: Waiting for pod pod-projected-configmaps-f2d031c2-6c18-4620-acdb-84e549cac8a6 to disappear May 7 13:50:32.024: INFO: Pod pod-projected-configmaps-f2d031c2-6c18-4620-acdb-84e549cac8a6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:50:32.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2455" for this suite. 
May 7 13:50:38.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:50:38.110: INFO: namespace projected-2455 deletion completed in 6.082944878s • [SLOW TEST:10.428 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:50:38.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-03b4b675-eb59-4029-8b7b-6aeddf16e22a STEP: Creating a pod to test consume configMaps May 7 13:50:38.186: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a5d484e8-aa91-44dd-ae1c-273d13b4eece" in namespace "projected-7261" to be "success or failure" May 7 13:50:38.222: INFO: Pod "pod-projected-configmaps-a5d484e8-aa91-44dd-ae1c-273d13b4eece": Phase="Pending", Reason="", readiness=false. Elapsed: 35.797118ms May 7 13:50:40.246: INFO: Pod "pod-projected-configmaps-a5d484e8-aa91-44dd-ae1c-273d13b4eece": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059668255s May 7 13:50:42.251: INFO: Pod "pod-projected-configmaps-a5d484e8-aa91-44dd-ae1c-273d13b4eece": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0641309s STEP: Saw pod success May 7 13:50:42.251: INFO: Pod "pod-projected-configmaps-a5d484e8-aa91-44dd-ae1c-273d13b4eece" satisfied condition "success or failure" May 7 13:50:42.254: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-a5d484e8-aa91-44dd-ae1c-273d13b4eece container projected-configmap-volume-test: STEP: delete the pod May 7 13:50:42.271: INFO: Waiting for pod pod-projected-configmaps-a5d484e8-aa91-44dd-ae1c-273d13b4eece to disappear May 7 13:50:42.276: INFO: Pod pod-projected-configmaps-a5d484e8-aa91-44dd-ae1c-273d13b4eece no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:50:42.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7261" for this suite. 
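Both projected-configMap cases come down to a ProjectedVolumeSource: DefaultMode covers the defaultMode test, and projecting the same configMap from two volumes in one pod covers the multiple-volumes case. A sketch with illustrative configMap name and mode:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// projectedConfigMapVolume projects a configMap into a volume, applying the
// given default file mode to the projected keys.
func projectedConfigMapVolume(volName string, mode int32) v1.Volume {
	return v1.Volume{
		Name: volName,
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []v1.VolumeProjection{{
					ConfigMap: &v1.ConfigMapProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: "my-configmap"},
					},
				}},
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", projectedConfigMapVolume("projected-configmap-volume", 0400))
}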
May 7 13:50:48.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:50:48.428: INFO: namespace projected-7261 deletion completed in 6.149161156s • [SLOW TEST:10.318 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:50:48.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0507 13:51:19.076404 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 7 13:51:19.076: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:51:19.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4040" for this suite. 
May 7 13:51:27.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:51:27.180: INFO: namespace gc-4040 deletion completed in 8.10058972s • [SLOW TEST:38.751 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:51:27.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 7 13:51:27.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7027' May 7 13:51:27.535: INFO: stderr: "" May 7 13:51:27.535: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 7 13:51:27.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7027' May 7 13:51:27.628: INFO: stderr: "" May 7 13:51:27.628: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 May 7 13:51:32.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7027' May 7 13:51:32.727: INFO: stderr: "" May 7 13:51:32.727: INFO: stdout: "update-demo-nautilus-8hjvj update-demo-nautilus-rbwt7 " May 7 13:51:32.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hjvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7027' May 7 13:51:32.824: INFO: stderr: "" May 7 13:51:32.824: INFO: stdout: "true" May 7 13:51:32.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hjvj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7027' May 7 13:51:32.924: INFO: stderr: "" May 7 13:51:32.924: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 13:51:32.924: INFO: validating pod update-demo-nautilus-8hjvj May 7 13:51:32.929: INFO: got data: { "image": "nautilus.jpg" } May 7 13:51:32.929: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 7 13:51:32.929: INFO: update-demo-nautilus-8hjvj is verified up and running May 7 13:51:32.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbwt7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7027' May 7 13:51:33.032: INFO: stderr: "" May 7 13:51:33.032: INFO: stdout: "true" May 7 13:51:33.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbwt7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7027' May 7 13:51:33.121: INFO: stderr: "" May 7 13:51:33.121: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 13:51:33.121: INFO: validating pod update-demo-nautilus-rbwt7 May 7 13:51:33.125: INFO: got data: { "image": "nautilus.jpg" } May 7 13:51:33.125: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 7 13:51:33.125: INFO: update-demo-nautilus-rbwt7 is verified up and running STEP: scaling down the replication controller May 7 13:51:33.127: INFO: scanned /root for discovery docs: May 7 13:51:33.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7027' May 7 13:51:34.274: INFO: stderr: "" May 7 13:51:34.274: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 7 13:51:34.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7027' May 7 13:51:34.374: INFO: stderr: "" May 7 13:51:34.374: INFO: stdout: "update-demo-nautilus-8hjvj update-demo-nautilus-rbwt7 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 7 13:51:39.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7027' May 7 13:51:39.479: INFO: stderr: "" May 7 13:51:39.479: INFO: stdout: "update-demo-nautilus-8hjvj update-demo-nautilus-rbwt7 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 7 13:51:44.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7027' May 7 13:51:44.572: INFO: stderr: "" May 7 13:51:44.572: INFO: stdout: "update-demo-nautilus-8hjvj " May 7 13:51:44.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hjvj -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7027' May 7 13:51:44.663: INFO: stderr: "" May 7 13:51:44.663: INFO: stdout: "true" May 7 13:51:44.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hjvj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7027' May 7 13:51:44.764: INFO: stderr: "" May 7 13:51:44.764: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 13:51:44.764: INFO: validating pod update-demo-nautilus-8hjvj May 7 13:51:44.768: INFO: got data: { "image": "nautilus.jpg" } May 7 13:51:44.768: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 7 13:51:44.768: INFO: update-demo-nautilus-8hjvj is verified up and running STEP: scaling up the replication controller May 7 13:51:44.770: INFO: scanned /root for discovery docs: May 7 13:51:44.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7027' May 7 13:51:45.936: INFO: stderr: "" May 7 13:51:45.936: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 7 13:51:45.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7027' May 7 13:51:46.034: INFO: stderr: "" May 7 13:51:46.034: INFO: stdout: "update-demo-nautilus-8hjvj update-demo-nautilus-zdgt8 " May 7 13:51:46.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hjvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7027' May 7 13:51:46.125: INFO: stderr: "" May 7 13:51:46.125: INFO: stdout: "true" May 7 13:51:46.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hjvj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7027' May 7 13:51:46.209: INFO: stderr: "" May 7 13:51:46.209: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 13:51:46.209: INFO: validating pod update-demo-nautilus-8hjvj May 7 13:51:46.212: INFO: got data: { "image": "nautilus.jpg" } May 7 13:51:46.212: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 7 13:51:46.212: INFO: update-demo-nautilus-8hjvj is verified up and running May 7 13:51:46.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zdgt8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7027' May 7 13:51:46.296: INFO: stderr: "" May 7 13:51:46.296: INFO: stdout: "" May 7 13:51:46.296: INFO: update-demo-nautilus-zdgt8 is created but not running May 7 13:51:51.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7027' May 7 13:51:51.402: INFO: stderr: "" May 7 13:51:51.402: INFO: stdout: "update-demo-nautilus-8hjvj update-demo-nautilus-zdgt8 " May 7 13:51:51.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hjvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7027' May 7 13:51:51.501: INFO: stderr: "" May 7 13:51:51.501: INFO: stdout: "true" May 7 13:51:51.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hjvj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7027' May 7 13:51:51.603: INFO: stderr: "" May 7 13:51:51.603: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 13:51:51.603: INFO: validating pod update-demo-nautilus-8hjvj May 7 13:51:51.606: INFO: got data: { "image": "nautilus.jpg" } May 7 13:51:51.606: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 7 13:51:51.606: INFO: update-demo-nautilus-8hjvj is verified up and running May 7 13:51:51.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zdgt8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7027' May 7 13:51:51.700: INFO: stderr: "" May 7 13:51:51.700: INFO: stdout: "true" May 7 13:51:51.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zdgt8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7027' May 7 13:51:51.791: INFO: stderr: "" May 7 13:51:51.791: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 7 13:51:51.791: INFO: validating pod update-demo-nautilus-zdgt8 May 7 13:51:51.795: INFO: got data: { "image": "nautilus.jpg" } May 7 13:51:51.795: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 7 13:51:51.795: INFO: update-demo-nautilus-zdgt8 is verified up and running STEP: using delete to clean up resources May 7 13:51:51.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7027' May 7 13:51:51.898: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 7 13:51:51.898: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 7 13:51:51.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7027' May 7 13:51:51.990: INFO: stderr: "No resources found.\n" May 7 13:51:51.991: INFO: stdout: "" May 7 13:51:51.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7027 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 7 13:51:52.074: INFO: stderr: "" May 7 13:51:52.075: INFO: stdout: "update-demo-nautilus-8hjvj\nupdate-demo-nautilus-zdgt8\n" May 7 13:51:52.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7027' May 7 13:51:52.753: INFO: stderr: "No resources found.\n" May 7 13:51:52.753: INFO: stdout: "" May 7 13:51:52.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7027 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 7 13:51:52.872: INFO: stderr: "" May 7 13:51:52.872: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:51:52.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7027" for this suite. May 7 13:52:14.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:52:14.943: INFO: namespace kubectl-7027 deletion completed in 22.068762047s • [SLOW TEST:47.763 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:52:14.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 7 13:52:15.089: INFO: Waiting up to 5m0s for pod "pod-d3d47e55-0d49-4e3f-ad5d-301030b6ac56" in namespace "emptydir-3966" to be "success or failure" May 7 13:52:15.092: INFO: Pod "pod-d3d47e55-0d49-4e3f-ad5d-301030b6ac56": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.448996ms May 7 13:52:17.129: INFO: Pod "pod-d3d47e55-0d49-4e3f-ad5d-301030b6ac56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040297179s May 7 13:52:19.132: INFO: Pod "pod-d3d47e55-0d49-4e3f-ad5d-301030b6ac56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043255099s STEP: Saw pod success May 7 13:52:19.132: INFO: Pod "pod-d3d47e55-0d49-4e3f-ad5d-301030b6ac56" satisfied condition "success or failure" May 7 13:52:19.135: INFO: Trying to get logs from node iruya-worker pod pod-d3d47e55-0d49-4e3f-ad5d-301030b6ac56 container test-container: STEP: delete the pod May 7 13:52:19.189: INFO: Waiting for pod pod-d3d47e55-0d49-4e3f-ad5d-301030b6ac56 to disappear May 7 13:52:19.200: INFO: Pod pod-d3d47e55-0d49-4e3f-ad5d-301030b6ac56 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:52:19.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3966" for this suite. May 7 13:52:25.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:52:25.290: INFO: namespace emptydir-3966 deletion completed in 6.086495592s • [SLOW TEST:10.346 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:52:25.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-72f05bd1-795c-425e-b438-8f451bc7cfc6 in namespace container-probe-8110 May 7 13:52:29.443: INFO: Started pod liveness-72f05bd1-795c-425e-b438-8f451bc7cfc6 in namespace container-probe-8110 STEP: checking the pod's current state and verifying that restartCount is present May 7 13:52:29.447: INFO: Initial restart count of pod liveness-72f05bd1-795c-425e-b438-8f451bc7cfc6 is 0 May 7 13:52:49.532: INFO: Restart count of pod container-probe-8110/liveness-72f05bd1-795c-425e-b438-8f451bc7cfc6 is now 1 (20.08535754s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:52:49.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8110" for this suite. 
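The probe spec above waits for the kubelet to restart a container whose /healthz endpoint goes unhealthy, then checks that restartCount rose from 0 to 1. A minimal standalone reproduction is sketched below; the pod name and probe timings are illustrative rather than the suite's exact values, and the image is the stock liveness test image that serves /healthz and begins failing after roughly ten seconds.

# Sketch: HTTP liveness probe; the kubelet restarts the container
# once /healthz starts returning non-2xx. Name and timings are
# illustrative.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo        # hypothetical name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
# Watch the restart count climb, as the spec does:
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'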
May 7 13:52:55.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:52:55.692: INFO: namespace container-probe-8110 deletion completed in 6.137539098s • [SLOW TEST:30.401 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:52:55.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 7 13:52:55.792: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:53:02.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8556" for this suite. 
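The RestartNever init-container spec above only logs "PodSpec: initContainers in spec.initContainers", so the shape of the pod is easier to see in a sketch: init containers run sequentially to completion before any app container starts, and with restartPolicy: Never the pod then runs once and finishes. Names and the busybox commands below are illustrative, not the suite's exact pod.

# Sketch: init containers complete in order before the app container
# runs; restartPolicy Never lets the pod terminate afterwards.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                 # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
EOF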
May 7 13:53:08.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:53:08.945: INFO: namespace init-container-8556 deletion completed in 6.099837696s • [SLOW TEST:13.252 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:53:08.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-2b736d8d-f12f-4c21-aa73-e4b217970f4f STEP: Creating a pod to test consume secrets May 7 13:53:09.078: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f307037f-d3f1-4155-be21-149a28e6c94b" in namespace "projected-9796" to be "success or failure" May 7 13:53:09.082: INFO: Pod "pod-projected-secrets-f307037f-d3f1-4155-be21-149a28e6c94b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.270821ms May 7 13:53:11.086: INFO: Pod "pod-projected-secrets-f307037f-d3f1-4155-be21-149a28e6c94b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007655311s May 7 13:53:13.090: INFO: Pod "pod-projected-secrets-f307037f-d3f1-4155-be21-149a28e6c94b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011867557s STEP: Saw pod success May 7 13:53:13.090: INFO: Pod "pod-projected-secrets-f307037f-d3f1-4155-be21-149a28e6c94b" satisfied condition "success or failure" May 7 13:53:13.093: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-f307037f-d3f1-4155-be21-149a28e6c94b container projected-secret-volume-test: STEP: delete the pod May 7 13:53:13.136: INFO: Waiting for pod pod-projected-secrets-f307037f-d3f1-4155-be21-149a28e6c94b to disappear May 7 13:53:13.206: INFO: Pod pod-projected-secrets-f307037f-d3f1-4155-be21-149a28e6c94b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:53:13.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9796" for this suite. 
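The projected-secret spec above mounts a secret through a projected volume with a key-to-path mapping and reads the file back from the container. A rough equivalent, with illustrative names and values:

# Sketch: a projected secret volume with an item mapping, so key
# "data-1" appears in the container as "new-path-data-1".
kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-pod      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected/new-path-data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1
EOF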
May 7 13:53:19.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:53:19.335: INFO: namespace projected-9796 deletion completed in 6.125016793s • [SLOW TEST:10.389 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:53:19.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-6a2d2f7a-7f7b-4a61-804a-0bb22cc3b286 STEP: Creating configMap with name cm-test-opt-upd-328d90a7-03ba-4591-894d-0f23ab50c61d STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6a2d2f7a-7f7b-4a61-804a-0bb22cc3b286 STEP: Updating configmap cm-test-opt-upd-328d90a7-03ba-4591-894d-0f23ab50c61d STEP: Creating configMap with name cm-test-opt-create-b18ee89a-3b5e-428b-b15a-f64871862422 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:54:55.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2604" for this suite. 
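The optional-updates spec above deletes one configMap, updates a second, and creates a third, then waits for the projected volume to converge; marking the source optional is what lets the pod keep running while a configMap is absent. A sketch under that assumption, with illustrative names:

# Sketch: an optional configMap source in a projected volume; the pod
# starts even if the configMap does not exist yet, and the kubelet
# refreshes the mounted file after create/update/delete.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-pod          # hypothetical name
spec:
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    projected:
      sources:
      - configMap:
          name: cm-demo
          optional: true
EOF
# Creating the configMap afterwards shows up in the mounted volume
# without a pod restart:
kubectl create configmap cm-demo --from-literal=data-1=value-1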
May 7 13:55:19.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:55:20.060: INFO: namespace projected-2604 deletion completed in 24.095518928s • [SLOW TEST:120.725 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:55:20.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-223.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-223.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-223.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-223.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 13:55:26.151: INFO: DNS probes using dns-test-5da2acd5-a686-45cf-b330-48b6fb7bc0f5 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-223.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-223.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-223.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-223.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 13:55:32.444: INFO: File wheezy_udp@dns-test-service-3.dns-223.svc.cluster.local from pod dns-223/dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 13:55:32.446: INFO: File jessie_udp@dns-test-service-3.dns-223.svc.cluster.local from pod dns-223/dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 13:55:32.447: INFO: Lookups using dns-223/dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e failed for: [wheezy_udp@dns-test-service-3.dns-223.svc.cluster.local jessie_udp@dns-test-service-3.dns-223.svc.cluster.local] May 7 13:55:37.453: INFO: File wheezy_udp@dns-test-service-3.dns-223.svc.cluster.local from pod dns-223/dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 7 13:55:37.456: INFO: File jessie_udp@dns-test-service-3.dns-223.svc.cluster.local from pod dns-223/dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 13:55:37.456: INFO: Lookups using dns-223/dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e failed for: [wheezy_udp@dns-test-service-3.dns-223.svc.cluster.local jessie_udp@dns-test-service-3.dns-223.svc.cluster.local] May 7 13:55:42.451: INFO: File wheezy_udp@dns-test-service-3.dns-223.svc.cluster.local from pod dns-223/dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 13:55:42.454: INFO: File jessie_udp@dns-test-service-3.dns-223.svc.cluster.local from pod dns-223/dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 13:55:42.454: INFO: Lookups using dns-223/dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e failed for: [wheezy_udp@dns-test-service-3.dns-223.svc.cluster.local jessie_udp@dns-test-service-3.dns-223.svc.cluster.local] May 7 13:55:47.451: INFO: File wheezy_udp@dns-test-service-3.dns-223.svc.cluster.local from pod dns-223/dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 13:55:47.455: INFO: File jessie_udp@dns-test-service-3.dns-223.svc.cluster.local from pod dns-223/dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 13:55:47.455: INFO: Lookups using dns-223/dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e failed for: [wheezy_udp@dns-test-service-3.dns-223.svc.cluster.local jessie_udp@dns-test-service-3.dns-223.svc.cluster.local] May 7 13:55:52.452: INFO: File wheezy_udp@dns-test-service-3.dns-223.svc.cluster.local from pod dns-223/dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 13:55:52.456: INFO: File jessie_udp@dns-test-service-3.dns-223.svc.cluster.local from pod dns-223/dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e contains 'foo.example.com. ' instead of 'bar.example.com.' May 7 13:55:52.456: INFO: Lookups using dns-223/dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e failed for: [wheezy_udp@dns-test-service-3.dns-223.svc.cluster.local jessie_udp@dns-test-service-3.dns-223.svc.cluster.local] May 7 13:55:57.454: INFO: DNS probes using dns-test-89529bd5-d895-4e30-a2f4-4850df38a73e succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-223.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-223.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-223.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-223.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 13:56:04.057: INFO: DNS probes using dns-test-0af2a07b-a279-485a-87a7-f5858a50ce27 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:56:04.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-223" for this suite. 
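The DNS spec above drives one service through three states (CNAME to foo.example.com, CNAME to bar.example.com, then type=ClusterIP) and verifies each transition with the quoted dig loops; the repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" entries are simply cached DNS answers catching up after the change. The service side can be sketched as follows; the service name and namespace are taken from the log, while the patch command is an illustrative way to make the same change:

# Sketch: an ExternalName service resolves its cluster DNS name to a
# CNAME for the external host; changing externalName changes the
# answer the probe pods eventually observe.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-223
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
kubectl -n dns-223 patch service dns-test-service-3 \
  -p '{"spec":{"externalName":"bar.example.com"}}'
# The probe the test pods run, one lookup per second:
dig +short dns-test-service-3.dns-223.svc.cluster.local CNAME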
May 7 13:56:10.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:56:10.316: INFO: namespace dns-223 deletion completed in 6.129434155s • [SLOW TEST:50.255 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:56:10.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 7 13:56:10.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8599' May 7 13:56:10.470: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 7 13:56:10.470: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 7 13:56:10.526: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-4mtkh] May 7 13:56:10.526: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-4mtkh" in namespace "kubectl-8599" to be "running and ready" May 7 13:56:10.530: INFO: Pod "e2e-test-nginx-rc-4mtkh": Phase="Pending", Reason="", readiness=false. Elapsed: 3.63295ms May 7 13:56:12.535: INFO: Pod "e2e-test-nginx-rc-4mtkh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009289627s May 7 13:56:14.539: INFO: Pod "e2e-test-nginx-rc-4mtkh": Phase="Running", Reason="", readiness=true. Elapsed: 4.013358499s May 7 13:56:14.539: INFO: Pod "e2e-test-nginx-rc-4mtkh" satisfied condition "running and ready" May 7 13:56:14.540: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-4mtkh] May 7 13:56:14.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-8599' May 7 13:56:14.665: INFO: stderr: "" May 7 13:56:14.665: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 May 7 13:56:14.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8599' May 7 13:56:14.777: INFO: stderr: "" May 7 13:56:14.777: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:56:14.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8599" for this suite. May 7 13:56:36.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:56:36.879: INFO: namespace kubectl-8599 deletion completed in 22.098358028s • [SLOW TEST:26.562 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:56:36.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-8b7232a0-5910-4644-9ac2-b15d307fc3ab STEP: Creating a pod to test consume secrets May 7 13:56:37.012: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ec2b36eb-61cf-4a16-9fbc-5d12290b6ed2" in namespace "projected-9354" to be "success or failure" May 7 13:56:37.051: INFO: Pod "pod-projected-secrets-ec2b36eb-61cf-4a16-9fbc-5d12290b6ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 39.638138ms May 7 13:56:39.138: INFO: Pod "pod-projected-secrets-ec2b36eb-61cf-4a16-9fbc-5d12290b6ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126003764s May 7 13:56:41.143: INFO: Pod "pod-projected-secrets-ec2b36eb-61cf-4a16-9fbc-5d12290b6ed2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.130677678s STEP: Saw pod success May 7 13:56:41.143: INFO: Pod "pod-projected-secrets-ec2b36eb-61cf-4a16-9fbc-5d12290b6ed2" satisfied condition "success or failure" May 7 13:56:41.146: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-ec2b36eb-61cf-4a16-9fbc-5d12290b6ed2 container secret-volume-test: STEP: delete the pod May 7 13:56:41.210: INFO: Waiting for pod pod-projected-secrets-ec2b36eb-61cf-4a16-9fbc-5d12290b6ed2 to disappear May 7 13:56:41.222: INFO: Pod pod-projected-secrets-ec2b36eb-61cf-4a16-9fbc-5d12290b6ed2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:56:41.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9354" for this suite. May 7 13:56:47.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:56:47.313: INFO: namespace projected-9354 deletion completed in 6.085563548s • [SLOW TEST:10.434 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:56:47.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-7255 STEP: Creating a pod to test atomic-volume-subpath May 7 13:56:47.430: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7255" in namespace "subpath-8774" to be "success or failure" May 7 13:56:47.474: INFO: Pod "pod-subpath-test-downwardapi-7255": Phase="Pending", Reason="", readiness=false. Elapsed: 43.774721ms May 7 13:56:49.477: INFO: Pod "pod-subpath-test-downwardapi-7255": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047552437s May 7 13:56:51.481: INFO: Pod "pod-subpath-test-downwardapi-7255": Phase="Running", Reason="", readiness=true. Elapsed: 4.050936447s May 7 13:56:53.486: INFO: Pod "pod-subpath-test-downwardapi-7255": Phase="Running", Reason="", readiness=true. Elapsed: 6.055672893s May 7 13:56:55.490: INFO: Pod "pod-subpath-test-downwardapi-7255": Phase="Running", Reason="", readiness=true. Elapsed: 8.060209926s May 7 13:56:57.493: INFO: Pod "pod-subpath-test-downwardapi-7255": Phase="Running", Reason="", readiness=true. Elapsed: 10.063280555s May 7 13:56:59.498: INFO: Pod "pod-subpath-test-downwardapi-7255": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.067624157s May 7 13:57:01.502: INFO: Pod "pod-subpath-test-downwardapi-7255": Phase="Running", Reason="", readiness=true. Elapsed: 14.071669707s May 7 13:57:03.521: INFO: Pod "pod-subpath-test-downwardapi-7255": Phase="Running", Reason="", readiness=true. Elapsed: 16.091231246s May 7 13:57:05.551: INFO: Pod "pod-subpath-test-downwardapi-7255": Phase="Running", Reason="", readiness=true. Elapsed: 18.120925956s May 7 13:57:07.562: INFO: Pod "pod-subpath-test-downwardapi-7255": Phase="Running", Reason="", readiness=true. Elapsed: 20.132496312s May 7 13:57:09.566: INFO: Pod "pod-subpath-test-downwardapi-7255": Phase="Running", Reason="", readiness=true. Elapsed: 22.136101028s May 7 13:57:11.575: INFO: Pod "pod-subpath-test-downwardapi-7255": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.144909347s STEP: Saw pod success May 7 13:57:11.575: INFO: Pod "pod-subpath-test-downwardapi-7255" satisfied condition "success or failure" May 7 13:57:11.578: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-7255 container test-container-subpath-downwardapi-7255: STEP: delete the pod May 7 13:57:11.639: INFO: Waiting for pod pod-subpath-test-downwardapi-7255 to disappear May 7 13:57:11.650: INFO: Pod pod-subpath-test-downwardapi-7255 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-7255 May 7 13:57:11.650: INFO: Deleting pod "pod-subpath-test-downwardapi-7255" in namespace "subpath-8774" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:57:11.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8774" for this suite. May 7 13:57:17.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:57:17.748: INFO: namespace subpath-8774 deletion completed in 6.093114394s • [SLOW TEST:30.435 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:57:17.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 7 13:57:17.801: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:57:25.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-661" for this suite. May 7 13:57:47.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:57:47.604: INFO: namespace init-container-661 deletion completed in 22.116810008s • [SLOW TEST:29.855 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:57:47.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:57:47.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3198" for this suite. 
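The QOS spec above only checks that status.qosClass is set once the pod is submitted. The class is derived from the containers' resource requests and limits; a sketch with illustrative values:

# Sketch: equal requests and limits on every container yield the
# Guaranteed QoS class; requests below limits give Burstable, and no
# resources at all give BestEffort.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                  # hypothetical name
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 100m
        memory: 64Mi
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # Guaranteed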
May 7 13:58:09.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:58:09.900: INFO: namespace pods-3198 deletion completed in 22.173077248s • [SLOW TEST:22.296 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:58:09.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 13:58:10.064: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"08dc1220-18d4-4d13-b52d-40861e9ca0bd", Controller:(*bool)(0xc003296622), BlockOwnerDeletion:(*bool)(0xc003296623)}} May 7 13:58:10.078: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"096f6b82-30ae-4839-a9fc-031c2dd45678", Controller:(*bool)(0xc002f4b4ba), BlockOwnerDeletion:(*bool)(0xc002f4b4bb)}} May 7 13:58:10.135: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"5cc00558-b752-44ca-9363-26150701f8fa", Controller:(*bool)(0xc0032967ca), BlockOwnerDeletion:(*bool)(0xc0032967cb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:58:15.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6741" for this suite. 
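The garbage-collector spec above wires pod1 -> pod3 -> pod2 -> pod1 through metadata.ownerReferences (the dumped OwnerReference structs) and asserts that the collector does not deadlock on the cycle. The suite sets those references at pod creation time; the rough after-the-fact equivalent below is only a sketch of the mechanism, and such patches may be rejected by admission control in locked-down clusters.

# Sketch: hand-wiring one link of the cycle, pod2 owned by pod1,
# using the owner pod's UID as the dumped structs above do.
UID1=$(kubectl get pod pod1 -o jsonpath='{.metadata.uid}')
kubectl patch pod pod2 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod1\",\"uid\":\"${UID1}\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
# Even with every pod owning another in a circle, deleting them is
# expected to make progress, which is what the spec asserts.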
May 7 13:58:21.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:58:21.264: INFO: namespace gc-6741 deletion completed in 6.112186306s • [SLOW TEST:11.363 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:58:21.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-d393ce9c-ba05-43f8-b636-ff190f1a1eb3 STEP: Creating a pod to test consume secrets May 7 13:58:21.321: INFO: Waiting up to 5m0s for pod "pod-secrets-71c093c6-db1a-48ba-8f51-96580279839e" in namespace "secrets-3319" to be "success or failure" May 7 13:58:21.343: INFO: Pod "pod-secrets-71c093c6-db1a-48ba-8f51-96580279839e": Phase="Pending", Reason="", readiness=false. Elapsed: 22.135665ms May 7 13:58:23.347: INFO: Pod "pod-secrets-71c093c6-db1a-48ba-8f51-96580279839e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025889741s May 7 13:58:25.351: INFO: Pod "pod-secrets-71c093c6-db1a-48ba-8f51-96580279839e": Phase="Running", Reason="", readiness=true. Elapsed: 4.029683038s May 7 13:58:27.354: INFO: Pod "pod-secrets-71c093c6-db1a-48ba-8f51-96580279839e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033028056s STEP: Saw pod success May 7 13:58:27.354: INFO: Pod "pod-secrets-71c093c6-db1a-48ba-8f51-96580279839e" satisfied condition "success or failure" May 7 13:58:27.358: INFO: Trying to get logs from node iruya-worker pod pod-secrets-71c093c6-db1a-48ba-8f51-96580279839e container secret-volume-test: STEP: delete the pod May 7 13:58:27.407: INFO: Waiting for pod pod-secrets-71c093c6-db1a-48ba-8f51-96580279839e to disappear May 7 13:58:27.421: INFO: Pod pod-secrets-71c093c6-db1a-48ba-8f51-96580279839e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 13:58:27.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3319" for this suite. 
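The secret-volume spec above mounts one key through an item mapping with an explicit per-file mode (the "Item Mode" in the spec name) and verifies both contents and permissions. A sketch with illustrative names; 0400 is an example mode, not necessarily the suite's value:

# Sketch: secret volume with a key-to-path mapping and an explicit
# file mode (YAML octal 0400, i.e. owner read-only).
kubectl create secret generic secret-mode-demo --from-literal=data-1=value-1
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-pod           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    # -L dereferences the symlink the kubelet places in the volume
    command: ["sh", "-c", "stat -L -c '%a' /etc/secret/new-path-data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret
  volumes:
  - name: secret-vol
    secret:
      secretName: secret-mode-demo
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400
EOF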
May 7 13:58:33.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 13:58:33.525: INFO: namespace secrets-3319 deletion completed in 6.100272618s • [SLOW TEST:12.261 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 13:58:33.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 7 13:58:34.485: INFO: Pod name wrapped-volume-race-392b0ada-6bb0-40ef-a232-56c583718ef6: Found 0 pods out of 5 May 7 13:58:39.494: INFO: Pod name wrapped-volume-race-392b0ada-6bb0-40ef-a232-56c583718ef6: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-392b0ada-6bb0-40ef-a232-56c583718ef6 in namespace emptydir-wrapper-7355, will wait for the garbage collector to delete the pods May 7 13:58:53.585: INFO: Deleting ReplicationController wrapped-volume-race-392b0ada-6bb0-40ef-a232-56c583718ef6 took: 7.680657ms May 7 13:58:53.985: INFO: Terminating ReplicationController wrapped-volume-race-392b0ada-6bb0-40ef-a232-56c583718ef6 pods took: 400.370125ms STEP: Creating RC which spawns configmap-volume pods May 7 13:59:33.223: INFO: Pod name wrapped-volume-race-8f8f1b4b-3628-4662-984c-0ff6910a1156: Found 0 pods out of 5 May 7 13:59:38.230: INFO: Pod name wrapped-volume-race-8f8f1b4b-3628-4662-984c-0ff6910a1156: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8f8f1b4b-3628-4662-984c-0ff6910a1156 in namespace emptydir-wrapper-7355, will wait for the garbage collector to delete the pods May 7 13:59:52.315: INFO: Deleting ReplicationController wrapped-volume-race-8f8f1b4b-3628-4662-984c-0ff6910a1156 took: 6.90768ms May 7 13:59:52.615: INFO: Terminating ReplicationController wrapped-volume-race-8f8f1b4b-3628-4662-984c-0ff6910a1156 pods took: 300.365401ms STEP: Creating RC which spawns configmap-volume pods May 7 14:00:32.653: INFO: Pod name wrapped-volume-race-c64da296-7b3b-4e6f-9c68-ccb5f6a8f8e0: Found 0 pods out of 5 May 7 14:00:37.664: INFO: Pod name wrapped-volume-race-c64da296-7b3b-4e6f-9c68-ccb5f6a8f8e0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c64da296-7b3b-4e6f-9c68-ccb5f6a8f8e0 in namespace emptydir-wrapper-7355, will wait for the garbage collector to delete the pods May 7 
14:00:53.753: INFO: Deleting ReplicationController wrapped-volume-race-c64da296-7b3b-4e6f-9c68-ccb5f6a8f8e0 took: 7.331839ms May 7 14:00:54.053: INFO: Terminating ReplicationController wrapped-volume-race-c64da296-7b3b-4e6f-9c68-ccb5f6a8f8e0 pods took: 300.351018ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:01:33.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7355" for this suite. May 7 14:01:41.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:01:41.985: INFO: namespace emptydir-wrapper-7355 deletion completed in 8.094221086s • [SLOW TEST:188.460 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:01:41.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-606bc97d-f073-426b-a27d-26ad72443d6c STEP: Creating a pod to test consume secrets May 7 14:01:42.073: INFO: Waiting up to 5m0s for pod "pod-secrets-fb850b2b-183d-4811-b136-d1d9871fa295" in namespace "secrets-9081" to be "success or failure" May 7 14:01:42.100: INFO: Pod "pod-secrets-fb850b2b-183d-4811-b136-d1d9871fa295": Phase="Pending", Reason="", readiness=false. Elapsed: 27.100471ms May 7 14:01:44.104: INFO: Pod "pod-secrets-fb850b2b-183d-4811-b136-d1d9871fa295": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031158225s May 7 14:01:46.109: INFO: Pod "pod-secrets-fb850b2b-183d-4811-b136-d1d9871fa295": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035576872s STEP: Saw pod success May 7 14:01:46.109: INFO: Pod "pod-secrets-fb850b2b-183d-4811-b136-d1d9871fa295" satisfied condition "success or failure" May 7 14:01:46.112: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-fb850b2b-183d-4811-b136-d1d9871fa295 container secret-volume-test: STEP: delete the pod May 7 14:01:46.216: INFO: Waiting for pod pod-secrets-fb850b2b-183d-4811-b136-d1d9871fa295 to disappear May 7 14:01:46.238: INFO: Pod pod-secrets-fb850b2b-183d-4811-b136-d1d9871fa295 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:01:46.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9081" for this suite. May 7 14:01:52.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:01:52.372: INFO: namespace secrets-9081 deletion completed in 6.130738451s • [SLOW TEST:10.386 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:01:52.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 7 14:01:52.498: INFO: Waiting up to 5m0s for pod "pod-3a8567d8-9934-4509-a27a-080b95ebc443" in namespace "emptydir-9315" to be "success or failure" May 7 14:01:52.502: INFO: Pod "pod-3a8567d8-9934-4509-a27a-080b95ebc443": Phase="Pending", Reason="", readiness=false. Elapsed: 3.805578ms May 7 14:01:54.506: INFO: Pod "pod-3a8567d8-9934-4509-a27a-080b95ebc443": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008002582s May 7 14:01:56.510: INFO: Pod "pod-3a8567d8-9934-4509-a27a-080b95ebc443": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011462448s STEP: Saw pod success May 7 14:01:56.510: INFO: Pod "pod-3a8567d8-9934-4509-a27a-080b95ebc443" satisfied condition "success or failure" May 7 14:01:56.512: INFO: Trying to get logs from node iruya-worker pod pod-3a8567d8-9934-4509-a27a-080b95ebc443 container test-container: STEP: delete the pod May 7 14:01:56.573: INFO: Waiting for pod pod-3a8567d8-9934-4509-a27a-080b95ebc443 to disappear May 7 14:01:56.591: INFO: Pod pod-3a8567d8-9934-4509-a27a-080b95ebc443 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:01:56.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9315" for this suite. May 7 14:02:02.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:02:02.705: INFO: namespace emptydir-9315 deletion completed in 6.110556724s • [SLOW TEST:10.333 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:02:02.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions May 7 14:02:02.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 7 14:02:02.967: INFO: stderr: "" May 7 14:02:02.967: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:02:02.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1763" for this suite. 
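The api-versions spec above shells out to kubectl and asserts that the core "v1" group/version appears in the listing. A minimal standalone equivalent, assuming the same kubeconfig path shown in the log (the grep-based assertion is an illustrative harness, not the suite's own code):

# List the served API group/versions and require an exact "v1" line.
kubectl --kubeconfig=/root/.kube/config api-versions | grep -qx 'v1' \
  && echo 'v1 is served' \
  || { echo 'v1 missing' >&2; exit 1; }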
May 7 14:02:09.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:02:09.111: INFO: namespace kubectl-1763 deletion completed in 6.100592254s • [SLOW TEST:6.406 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:02:09.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 14:02:09.217: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 7 14:02:09.237: INFO: Number of nodes with available pods: 0 May 7 14:02:09.237: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 7 14:02:09.345: INFO: Number of nodes with available pods: 0 May 7 14:02:09.345: INFO: Node iruya-worker is running more than one daemon pod May 7 14:02:10.350: INFO: Number of nodes with available pods: 0 May 7 14:02:10.350: INFO: Node iruya-worker is running more than one daemon pod May 7 14:02:11.350: INFO: Number of nodes with available pods: 0 May 7 14:02:11.350: INFO: Node iruya-worker is running more than one daemon pod May 7 14:02:12.350: INFO: Number of nodes with available pods: 1 May 7 14:02:12.350: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 7 14:02:12.382: INFO: Number of nodes with available pods: 1 May 7 14:02:12.382: INFO: Number of running nodes: 0, number of available pods: 1 May 7 14:02:13.387: INFO: Number of nodes with available pods: 0 May 7 14:02:13.387: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 7 14:02:13.400: INFO: Number of nodes with available pods: 0 May 7 14:02:13.400: INFO: Node iruya-worker is running more than one daemon pod May 7 14:02:14.404: INFO: Number of nodes with available pods: 0 May 7 14:02:14.404: INFO: Node iruya-worker is running more than one daemon pod May 7 14:02:15.405: INFO: Number of nodes with available pods: 0 May 7 14:02:15.405: INFO: Node iruya-worker is running more than one daemon pod May 7 14:02:16.406: INFO: Number of nodes with available pods: 0 May 7 14:02:16.406: INFO: Node iruya-worker is running more than one daemon pod May 7 14:02:17.412: INFO: Number of nodes with available pods: 0 May 7 14:02:17.412: INFO: Node iruya-worker is running more than one daemon pod May 7 14:02:18.405: INFO: Number of nodes with available pods: 0 May 7 14:02:18.405: INFO: Node iruya-worker is running more than one daemon pod May 7 14:02:19.405: INFO: Number of nodes with available pods: 0 May 7 14:02:19.405: INFO: Node iruya-worker is running more than one daemon pod May 7 14:02:20.404: INFO: Number of nodes with available pods: 0 May 7 14:02:20.404: INFO: Node iruya-worker is running more than one daemon pod May 7 14:02:21.405: INFO: Number of nodes with available pods: 0 May 7 14:02:21.405: INFO: Node iruya-worker is running more than one daemon pod May 7 14:02:22.404: INFO: Number of nodes with available pods: 0 May 7 14:02:22.404: INFO: Node iruya-worker is running more than one daemon pod May 7 14:02:23.405: INFO: Number of nodes with available pods: 0 May 7 14:02:23.405: INFO: Node iruya-worker is running more than one daemon pod May 7 14:02:24.404: INFO: Number of nodes with available pods: 0 May 7 14:02:24.404: INFO: Node iruya-worker is running more than one daemon pod May 7 14:02:25.404: INFO: Number of nodes with available pods: 1 May 7 14:02:25.404: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4385, will wait for the garbage collector to delete the pods May 7 14:02:25.468: INFO: Deleting DaemonSet.extensions daemon-set took: 6.689281ms May 7 14:02:25.768: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.296748ms May 7 14:02:32.286: INFO: Number of nodes with available pods: 0 May 7 14:02:32.286: INFO: Number of running nodes: 0, number of 
available pods: 0 May 7 14:02:32.288: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4385/daemonsets","resourceVersion":"9542096"},"items":null} May 7 14:02:32.291: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4385/pods","resourceVersion":"9542096"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:02:32.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4385" for this suite. May 7 14:02:38.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:02:38.468: INFO: namespace daemonsets-4385 deletion completed in 6.133515152s • [SLOW TEST:29.355 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:02:38.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 14:02:38.552: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 7 14:02:40.640: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:02:41.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4335" for this suite. 
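The quota spec above is a compact demonstration of surfacing controller failures as status conditions: a ResourceQuota caps the namespace at two pods, an RC asks for more, and the RC carries a failure condition until it is scaled back within quota. A sketch under those assumptions (the image and pod command are illustrative; names follow the log):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                  # one more than the quota allows
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: app
        image: docker.io/library/busybox:1.29
        command: ["sleep", "3600"]
EOF
# The failure surfaces on the RC's status until the quota is satisfied:
kubectl get rc condition-test -o jsonpath='{.status.conditions}'
kubectl scale rc condition-test --replicas=2   # the condition clears after this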
May 7 14:02:47.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:02:47.827: INFO: namespace replication-controller-4335 deletion completed in 6.167329693s • [SLOW TEST:9.359 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:02:47.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 7 14:02:58.054: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-759 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 14:02:58.054: INFO: >>> kubeConfig: /root/.kube/config I0507 14:02:58.080706 6 log.go:172] (0xc002be06e0) (0xc002bab2c0) Create stream I0507 14:02:58.080734 6 log.go:172] (0xc002be06e0) (0xc002bab2c0) Stream added, broadcasting: 1 I0507 14:02:58.082755 6 log.go:172] (0xc002be06e0) Reply frame received for 1 I0507 14:02:58.082802 6 log.go:172] (0xc002be06e0) (0xc00223ca00) Create stream I0507 14:02:58.082816 6 log.go:172] (0xc002be06e0) (0xc00223ca00) Stream added, broadcasting: 3 I0507 14:02:58.084474 6 log.go:172] (0xc002be06e0) Reply frame received for 3 I0507 14:02:58.084512 6 log.go:172] (0xc002be06e0) (0xc0012840a0) Create stream I0507 14:02:58.084525 6 log.go:172] (0xc002be06e0) (0xc0012840a0) Stream added, broadcasting: 5 I0507 14:02:58.085725 6 log.go:172] (0xc002be06e0) Reply frame received for 5 I0507 14:02:58.153795 6 log.go:172] (0xc002be06e0) Data frame received for 5 I0507 14:02:58.153836 6 log.go:172] (0xc0012840a0) (5) Data frame handling I0507 14:02:58.153860 6 log.go:172] (0xc002be06e0) Data frame received for 3 I0507 14:02:58.153873 6 log.go:172] (0xc00223ca00) (3) Data frame handling I0507 14:02:58.153889 6 log.go:172] (0xc00223ca00) (3) Data frame sent I0507 14:02:58.153903 6 log.go:172] (0xc002be06e0) Data frame received for 3 I0507 14:02:58.153914 6 log.go:172] (0xc00223ca00) (3) Data frame handling I0507 14:02:58.155072 6 log.go:172] (0xc002be06e0) Data frame received for 1 I0507 14:02:58.155105 6 log.go:172] (0xc002bab2c0) (1) Data frame handling I0507 14:02:58.155134 6 log.go:172] (0xc002bab2c0) (1) Data frame sent I0507 14:02:58.155166 6 log.go:172] (0xc002be06e0) (0xc002bab2c0) Stream removed, broadcasting: 1 
I0507 14:02:58.155189 6 log.go:172] (0xc002be06e0) Go away received I0507 14:02:58.155283 6 log.go:172] (0xc002be06e0) (0xc002bab2c0) Stream removed, broadcasting: 1 I0507 14:02:58.155312 6 log.go:172] (0xc002be06e0) (0xc00223ca00) Stream removed, broadcasting: 3 I0507 14:02:58.155323 6 log.go:172] (0xc002be06e0) (0xc0012840a0) Stream removed, broadcasting: 5 May 7 14:02:58.155: INFO: Exec stderr: "" May 7 14:02:58.155: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-759 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 14:02:58.155: INFO: >>> kubeConfig: /root/.kube/config I0507 14:02:58.189876 6 log.go:172] (0xc001b48840) (0xc001284640) Create stream I0507 14:02:58.189909 6 log.go:172] (0xc001b48840) (0xc001284640) Stream added, broadcasting: 1 I0507 14:02:58.192361 6 log.go:172] (0xc001b48840) Reply frame received for 1 I0507 14:02:58.192401 6 log.go:172] (0xc001b48840) (0xc0012846e0) Create stream I0507 14:02:58.192416 6 log.go:172] (0xc001b48840) (0xc0012846e0) Stream added, broadcasting: 3 I0507 14:02:58.193514 6 log.go:172] (0xc001b48840) Reply frame received for 3 I0507 14:02:58.193552 6 log.go:172] (0xc001b48840) (0xc001284820) Create stream I0507 14:02:58.193561 6 log.go:172] (0xc001b48840) (0xc001284820) Stream added, broadcasting: 5 I0507 14:02:58.194611 6 log.go:172] (0xc001b48840) Reply frame received for 5 I0507 14:02:58.267907 6 log.go:172] (0xc001b48840) Data frame received for 5 I0507 14:02:58.267953 6 log.go:172] (0xc001284820) (5) Data frame handling I0507 14:02:58.267979 6 log.go:172] (0xc001b48840) Data frame received for 3 I0507 14:02:58.267994 6 log.go:172] (0xc0012846e0) (3) Data frame handling I0507 14:02:58.268008 6 log.go:172] (0xc0012846e0) (3) Data frame sent I0507 14:02:58.268022 6 log.go:172] (0xc001b48840) Data frame received for 3 I0507 14:02:58.268034 6 log.go:172] (0xc0012846e0) (3) Data frame handling I0507 14:02:58.269702 6 log.go:172] (0xc001b48840) Data frame received for 1 I0507 14:02:58.269734 6 log.go:172] (0xc001284640) (1) Data frame handling I0507 14:02:58.269755 6 log.go:172] (0xc001284640) (1) Data frame sent I0507 14:02:58.269939 6 log.go:172] (0xc001b48840) (0xc001284640) Stream removed, broadcasting: 1 I0507 14:02:58.270098 6 log.go:172] (0xc001b48840) (0xc001284640) Stream removed, broadcasting: 1 I0507 14:02:58.270182 6 log.go:172] (0xc001b48840) (0xc0012846e0) Stream removed, broadcasting: 3 I0507 14:02:58.270352 6 log.go:172] (0xc001b48840) Go away received I0507 14:02:58.270529 6 log.go:172] (0xc001b48840) (0xc001284820) Stream removed, broadcasting: 5 May 7 14:02:58.270: INFO: Exec stderr: "" May 7 14:02:58.270: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-759 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 14:02:58.270: INFO: >>> kubeConfig: /root/.kube/config I0507 14:02:58.302075 6 log.go:172] (0xc0012e76b0) (0xc00223ce60) Create stream I0507 14:02:58.302101 6 log.go:172] (0xc0012e76b0) (0xc00223ce60) Stream added, broadcasting: 1 I0507 14:02:58.304424 6 log.go:172] (0xc0012e76b0) Reply frame received for 1 I0507 14:02:58.304477 6 log.go:172] (0xc0012e76b0) (0xc000da3220) Create stream I0507 14:02:58.304496 6 log.go:172] (0xc0012e76b0) (0xc000da3220) Stream added, broadcasting: 3 I0507 14:02:58.305765 6 log.go:172] (0xc0012e76b0) Reply frame received for 3 I0507 14:02:58.305797 6 log.go:172] (0xc0012e76b0) (0xc00223cf00) 
Create stream I0507 14:02:58.305811 6 log.go:172] (0xc0012e76b0) (0xc00223cf00) Stream added, broadcasting: 5 I0507 14:02:58.306700 6 log.go:172] (0xc0012e76b0) Reply frame received for 5 I0507 14:02:58.362331 6 log.go:172] (0xc0012e76b0) Data frame received for 5 I0507 14:02:58.362374 6 log.go:172] (0xc00223cf00) (5) Data frame handling I0507 14:02:58.362411 6 log.go:172] (0xc0012e76b0) Data frame received for 3 I0507 14:02:58.362443 6 log.go:172] (0xc000da3220) (3) Data frame handling I0507 14:02:58.362488 6 log.go:172] (0xc000da3220) (3) Data frame sent I0507 14:02:58.362503 6 log.go:172] (0xc0012e76b0) Data frame received for 3 I0507 14:02:58.362513 6 log.go:172] (0xc000da3220) (3) Data frame handling I0507 14:02:58.363857 6 log.go:172] (0xc0012e76b0) Data frame received for 1 I0507 14:02:58.363885 6 log.go:172] (0xc00223ce60) (1) Data frame handling I0507 14:02:58.363900 6 log.go:172] (0xc00223ce60) (1) Data frame sent I0507 14:02:58.363925 6 log.go:172] (0xc0012e76b0) (0xc00223ce60) Stream removed, broadcasting: 1 I0507 14:02:58.363962 6 log.go:172] (0xc0012e76b0) Go away received I0507 14:02:58.364131 6 log.go:172] (0xc0012e76b0) (0xc00223ce60) Stream removed, broadcasting: 1 I0507 14:02:58.364160 6 log.go:172] (0xc0012e76b0) (0xc000da3220) Stream removed, broadcasting: 3 I0507 14:02:58.364170 6 log.go:172] (0xc0012e76b0) (0xc00223cf00) Stream removed, broadcasting: 5 May 7 14:02:58.364: INFO: Exec stderr: "" May 7 14:02:58.364: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-759 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 14:02:58.364: INFO: >>> kubeConfig: /root/.kube/config I0507 14:02:58.414700 6 log.go:172] (0xc001d5c580) (0xc002e96fa0) Create stream I0507 14:02:58.414750 6 log.go:172] (0xc001d5c580) (0xc002e96fa0) Stream added, broadcasting: 1 I0507 14:02:58.418178 6 log.go:172] (0xc001d5c580) Reply frame received for 1 I0507 14:02:58.418227 6 log.go:172] (0xc001d5c580) (0xc00223cfa0) Create stream I0507 14:02:58.418241 6 log.go:172] (0xc001d5c580) (0xc00223cfa0) Stream added, broadcasting: 3 I0507 14:02:58.419199 6 log.go:172] (0xc001d5c580) Reply frame received for 3 I0507 14:02:58.419242 6 log.go:172] (0xc001d5c580) (0xc001284960) Create stream I0507 14:02:58.419258 6 log.go:172] (0xc001d5c580) (0xc001284960) Stream added, broadcasting: 5 I0507 14:02:58.420338 6 log.go:172] (0xc001d5c580) Reply frame received for 5 I0507 14:02:58.490927 6 log.go:172] (0xc001d5c580) Data frame received for 5 I0507 14:02:58.490969 6 log.go:172] (0xc001284960) (5) Data frame handling I0507 14:02:58.491017 6 log.go:172] (0xc001d5c580) Data frame received for 3 I0507 14:02:58.491059 6 log.go:172] (0xc00223cfa0) (3) Data frame handling I0507 14:02:58.491098 6 log.go:172] (0xc00223cfa0) (3) Data frame sent I0507 14:02:58.491119 6 log.go:172] (0xc001d5c580) Data frame received for 3 I0507 14:02:58.491134 6 log.go:172] (0xc00223cfa0) (3) Data frame handling I0507 14:02:58.492728 6 log.go:172] (0xc001d5c580) Data frame received for 1 I0507 14:02:58.492759 6 log.go:172] (0xc002e96fa0) (1) Data frame handling I0507 14:02:58.492787 6 log.go:172] (0xc002e96fa0) (1) Data frame sent I0507 14:02:58.492812 6 log.go:172] (0xc001d5c580) (0xc002e96fa0) Stream removed, broadcasting: 1 I0507 14:02:58.492829 6 log.go:172] (0xc001d5c580) Go away received I0507 14:02:58.492970 6 log.go:172] (0xc001d5c580) (0xc002e96fa0) Stream removed, broadcasting: 1 I0507 14:02:58.492991 6 log.go:172] (0xc001d5c580) 
(0xc00223cfa0) Stream removed, broadcasting: 3 I0507 14:02:58.493003 6 log.go:172] (0xc001d5c580) (0xc001284960) Stream removed, broadcasting: 5 May 7 14:02:58.493: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 7 14:02:58.493: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-759 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 14:02:58.493: INFO: >>> kubeConfig: /root/.kube/config I0507 14:02:58.540774 6 log.go:172] (0xc001bfcc60) (0xc000da3900) Create stream I0507 14:02:58.540811 6 log.go:172] (0xc001bfcc60) (0xc000da3900) Stream added, broadcasting: 1 I0507 14:02:58.543451 6 log.go:172] (0xc001bfcc60) Reply frame received for 1 I0507 14:02:58.543487 6 log.go:172] (0xc001bfcc60) (0xc00223d040) Create stream I0507 14:02:58.543499 6 log.go:172] (0xc001bfcc60) (0xc00223d040) Stream added, broadcasting: 3 I0507 14:02:58.544337 6 log.go:172] (0xc001bfcc60) Reply frame received for 3 I0507 14:02:58.544365 6 log.go:172] (0xc001bfcc60) (0xc002e97040) Create stream I0507 14:02:58.544376 6 log.go:172] (0xc001bfcc60) (0xc002e97040) Stream added, broadcasting: 5 I0507 14:02:58.545284 6 log.go:172] (0xc001bfcc60) Reply frame received for 5 I0507 14:02:58.593500 6 log.go:172] (0xc001bfcc60) Data frame received for 5 I0507 14:02:58.593527 6 log.go:172] (0xc002e97040) (5) Data frame handling I0507 14:02:58.593546 6 log.go:172] (0xc001bfcc60) Data frame received for 3 I0507 14:02:58.593572 6 log.go:172] (0xc00223d040) (3) Data frame handling I0507 14:02:58.593595 6 log.go:172] (0xc00223d040) (3) Data frame sent I0507 14:02:58.593612 6 log.go:172] (0xc001bfcc60) Data frame received for 3 I0507 14:02:58.593626 6 log.go:172] (0xc00223d040) (3) Data frame handling I0507 14:02:58.594705 6 log.go:172] (0xc001bfcc60) Data frame received for 1 I0507 14:02:58.594735 6 log.go:172] (0xc000da3900) (1) Data frame handling I0507 14:02:58.594749 6 log.go:172] (0xc000da3900) (1) Data frame sent I0507 14:02:58.594785 6 log.go:172] (0xc001bfcc60) (0xc000da3900) Stream removed, broadcasting: 1 I0507 14:02:58.594828 6 log.go:172] (0xc001bfcc60) Go away received I0507 14:02:58.594988 6 log.go:172] (0xc001bfcc60) (0xc000da3900) Stream removed, broadcasting: 1 I0507 14:02:58.595018 6 log.go:172] (0xc001bfcc60) (0xc00223d040) Stream removed, broadcasting: 3 I0507 14:02:58.595030 6 log.go:172] (0xc001bfcc60) (0xc002e97040) Stream removed, broadcasting: 5 May 7 14:02:58.595: INFO: Exec stderr: "" May 7 14:02:58.595: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-759 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 14:02:58.595: INFO: >>> kubeConfig: /root/.kube/config I0507 14:02:58.630861 6 log.go:172] (0xc0024b54a0) (0xc00223d400) Create stream I0507 14:02:58.630895 6 log.go:172] (0xc0024b54a0) (0xc00223d400) Stream added, broadcasting: 1 I0507 14:02:58.634279 6 log.go:172] (0xc0024b54a0) Reply frame received for 1 I0507 14:02:58.634318 6 log.go:172] (0xc0024b54a0) (0xc002bab360) Create stream I0507 14:02:58.634334 6 log.go:172] (0xc0024b54a0) (0xc002bab360) Stream added, broadcasting: 3 I0507 14:02:58.635273 6 log.go:172] (0xc0024b54a0) Reply frame received for 3 I0507 14:02:58.635323 6 log.go:172] (0xc0024b54a0) (0xc000da39a0) Create stream I0507 14:02:58.635336 6 log.go:172] (0xc0024b54a0) (0xc000da39a0) Stream added, broadcasting: 5 
I0507 14:02:58.636178 6 log.go:172] (0xc0024b54a0) Reply frame received for 5 I0507 14:02:58.686839 6 log.go:172] (0xc0024b54a0) Data frame received for 5 I0507 14:02:58.686874 6 log.go:172] (0xc000da39a0) (5) Data frame handling I0507 14:02:58.686906 6 log.go:172] (0xc0024b54a0) Data frame received for 3 I0507 14:02:58.686965 6 log.go:172] (0xc002bab360) (3) Data frame handling I0507 14:02:58.687006 6 log.go:172] (0xc002bab360) (3) Data frame sent I0507 14:02:58.687114 6 log.go:172] (0xc0024b54a0) Data frame received for 3 I0507 14:02:58.687140 6 log.go:172] (0xc002bab360) (3) Data frame handling I0507 14:02:58.688596 6 log.go:172] (0xc0024b54a0) Data frame received for 1 I0507 14:02:58.688617 6 log.go:172] (0xc00223d400) (1) Data frame handling I0507 14:02:58.688635 6 log.go:172] (0xc00223d400) (1) Data frame sent I0507 14:02:58.688895 6 log.go:172] (0xc0024b54a0) (0xc00223d400) Stream removed, broadcasting: 1 I0507 14:02:58.688969 6 log.go:172] (0xc0024b54a0) Go away received I0507 14:02:58.689016 6 log.go:172] (0xc0024b54a0) (0xc00223d400) Stream removed, broadcasting: 1 I0507 14:02:58.689057 6 log.go:172] (0xc0024b54a0) (0xc002bab360) Stream removed, broadcasting: 3 I0507 14:02:58.689079 6 log.go:172] (0xc0024b54a0) (0xc000da39a0) Stream removed, broadcasting: 5 May 7 14:02:58.689: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 7 14:02:58.689: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-759 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 14:02:58.689: INFO: >>> kubeConfig: /root/.kube/config I0507 14:02:58.726595 6 log.go:172] (0xc001d5d600) (0xc002e972c0) Create stream I0507 14:02:58.726636 6 log.go:172] (0xc001d5d600) (0xc002e972c0) Stream added, broadcasting: 1 I0507 14:02:58.728788 6 log.go:172] (0xc001d5d600) Reply frame received for 1 I0507 14:02:58.728864 6 log.go:172] (0xc001d5d600) (0xc00223d4a0) Create stream I0507 14:02:58.728889 6 log.go:172] (0xc001d5d600) (0xc00223d4a0) Stream added, broadcasting: 3 I0507 14:02:58.730081 6 log.go:172] (0xc001d5d600) Reply frame received for 3 I0507 14:02:58.730126 6 log.go:172] (0xc001d5d600) (0xc00223d540) Create stream I0507 14:02:58.730145 6 log.go:172] (0xc001d5d600) (0xc00223d540) Stream added, broadcasting: 5 I0507 14:02:58.731073 6 log.go:172] (0xc001d5d600) Reply frame received for 5 I0507 14:02:58.797672 6 log.go:172] (0xc001d5d600) Data frame received for 3 I0507 14:02:58.797712 6 log.go:172] (0xc00223d4a0) (3) Data frame handling I0507 14:02:58.797733 6 log.go:172] (0xc00223d4a0) (3) Data frame sent I0507 14:02:58.797815 6 log.go:172] (0xc001d5d600) Data frame received for 5 I0507 14:02:58.797864 6 log.go:172] (0xc00223d540) (5) Data frame handling I0507 14:02:58.797885 6 log.go:172] (0xc001d5d600) Data frame received for 3 I0507 14:02:58.797892 6 log.go:172] (0xc00223d4a0) (3) Data frame handling I0507 14:02:58.799895 6 log.go:172] (0xc001d5d600) Data frame received for 1 I0507 14:02:58.799910 6 log.go:172] (0xc002e972c0) (1) Data frame handling I0507 14:02:58.799924 6 log.go:172] (0xc002e972c0) (1) Data frame sent I0507 14:02:58.799934 6 log.go:172] (0xc001d5d600) (0xc002e972c0) Stream removed, broadcasting: 1 I0507 14:02:58.799947 6 log.go:172] (0xc001d5d600) Go away received I0507 14:02:58.800165 6 log.go:172] (0xc001d5d600) (0xc002e972c0) Stream removed, broadcasting: 1 I0507 14:02:58.800195 6 log.go:172] (0xc001d5d600) 
(0xc00223d4a0) Stream removed, broadcasting: 3 I0507 14:02:58.800220 6 log.go:172] (0xc001d5d600) (0xc00223d540) Stream removed, broadcasting: 5 May 7 14:02:58.800: INFO: Exec stderr: "" May 7 14:02:58.800: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-759 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 14:02:58.800: INFO: >>> kubeConfig: /root/.kube/config I0507 14:02:58.833337 6 log.go:172] (0xc001bfdad0) (0xc0018361e0) Create stream I0507 14:02:58.833373 6 log.go:172] (0xc001bfdad0) (0xc0018361e0) Stream added, broadcasting: 1 I0507 14:02:58.835687 6 log.go:172] (0xc001bfdad0) Reply frame received for 1 I0507 14:02:58.835729 6 log.go:172] (0xc001bfdad0) (0xc001284b40) Create stream I0507 14:02:58.835743 6 log.go:172] (0xc001bfdad0) (0xc001284b40) Stream added, broadcasting: 3 I0507 14:02:58.836715 6 log.go:172] (0xc001bfdad0) Reply frame received for 3 I0507 14:02:58.836771 6 log.go:172] (0xc001bfdad0) (0xc002bab400) Create stream I0507 14:02:58.836793 6 log.go:172] (0xc001bfdad0) (0xc002bab400) Stream added, broadcasting: 5 I0507 14:02:58.837917 6 log.go:172] (0xc001bfdad0) Reply frame received for 5 I0507 14:02:58.917462 6 log.go:172] (0xc001bfdad0) Data frame received for 3 I0507 14:02:58.917489 6 log.go:172] (0xc001284b40) (3) Data frame handling I0507 14:02:58.917502 6 log.go:172] (0xc001284b40) (3) Data frame sent I0507 14:02:58.917512 6 log.go:172] (0xc001bfdad0) Data frame received for 3 I0507 14:02:58.917517 6 log.go:172] (0xc001284b40) (3) Data frame handling I0507 14:02:58.917525 6 log.go:172] (0xc001bfdad0) Data frame received for 5 I0507 14:02:58.917540 6 log.go:172] (0xc002bab400) (5) Data frame handling I0507 14:02:58.918917 6 log.go:172] (0xc001bfdad0) Data frame received for 1 I0507 14:02:58.918935 6 log.go:172] (0xc0018361e0) (1) Data frame handling I0507 14:02:58.918958 6 log.go:172] (0xc0018361e0) (1) Data frame sent I0507 14:02:58.918976 6 log.go:172] (0xc001bfdad0) (0xc0018361e0) Stream removed, broadcasting: 1 I0507 14:02:58.919027 6 log.go:172] (0xc001bfdad0) Go away received I0507 14:02:58.919094 6 log.go:172] (0xc001bfdad0) (0xc0018361e0) Stream removed, broadcasting: 1 I0507 14:02:58.919108 6 log.go:172] (0xc001bfdad0) (0xc001284b40) Stream removed, broadcasting: 3 I0507 14:02:58.919116 6 log.go:172] (0xc001bfdad0) (0xc002bab400) Stream removed, broadcasting: 5 May 7 14:02:58.919: INFO: Exec stderr: "" May 7 14:02:58.919: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-759 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 14:02:58.919: INFO: >>> kubeConfig: /root/.kube/config I0507 14:02:58.958220 6 log.go:172] (0xc002d74210) (0xc002bab860) Create stream I0507 14:02:58.958256 6 log.go:172] (0xc002d74210) (0xc002bab860) Stream added, broadcasting: 1 I0507 14:02:58.960420 6 log.go:172] (0xc002d74210) Reply frame received for 1 I0507 14:02:58.960469 6 log.go:172] (0xc002d74210) (0xc001284be0) Create stream I0507 14:02:58.960489 6 log.go:172] (0xc002d74210) (0xc001284be0) Stream added, broadcasting: 3 I0507 14:02:58.961786 6 log.go:172] (0xc002d74210) Reply frame received for 3 I0507 14:02:58.961829 6 log.go:172] (0xc002d74210) (0xc001836460) Create stream I0507 14:02:58.961843 6 log.go:172] (0xc002d74210) (0xc001836460) Stream added, broadcasting: 5 I0507 14:02:58.963367 6 log.go:172] (0xc002d74210) Reply frame received for 5 
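All of the exec round-trips in this spec are variations of one check: read /etc/hosts (and a bind-mounted copy of the original) from each container and decide whether the kubelet wrote it. Stripped of the streaming-log noise, the probes amount to the following (pod, namespace and container names are the test's own; identifying the managed file by its header comment is an assumption about the kubelet's format):

# Kubelet-managed for an ordinary pod on the cluster network:
kubectl -n e2e-kubelet-etc-hosts-759 exec test-pod -c busybox-1 -- cat /etc/hosts
# Not managed when the container mounts its own /etc/hosts (busybox-3 above),
# and not managed for a hostNetwork=true pod, which sees the node's file:
kubectl -n e2e-kubelet-etc-hosts-759 exec test-host-network-pod -c busybox-1 -- cat /etc/hosts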
I0507 14:02:59.036802 6 log.go:172] (0xc002d74210) Data frame received for 5 I0507 14:02:59.036841 6 log.go:172] (0xc001836460) (5) Data frame handling I0507 14:02:59.036863 6 log.go:172] (0xc002d74210) Data frame received for 3 I0507 14:02:59.036876 6 log.go:172] (0xc001284be0) (3) Data frame handling I0507 14:02:59.036892 6 log.go:172] (0xc001284be0) (3) Data frame sent I0507 14:02:59.036905 6 log.go:172] (0xc002d74210) Data frame received for 3 I0507 14:02:59.036919 6 log.go:172] (0xc001284be0) (3) Data frame handling I0507 14:02:59.038221 6 log.go:172] (0xc002d74210) Data frame received for 1 I0507 14:02:59.038243 6 log.go:172] (0xc002bab860) (1) Data frame handling I0507 14:02:59.038268 6 log.go:172] (0xc002bab860) (1) Data frame sent I0507 14:02:59.038306 6 log.go:172] (0xc002d74210) (0xc002bab860) Stream removed, broadcasting: 1 I0507 14:02:59.038343 6 log.go:172] (0xc002d74210) Go away received I0507 14:02:59.038407 6 log.go:172] (0xc002d74210) (0xc002bab860) Stream removed, broadcasting: 1 I0507 14:02:59.038438 6 log.go:172] (0xc002d74210) (0xc001284be0) Stream removed, broadcasting: 3 I0507 14:02:59.038459 6 log.go:172] (0xc002d74210) (0xc001836460) Stream removed, broadcasting: 5 May 7 14:02:59.038: INFO: Exec stderr: "" May 7 14:02:59.038: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-759 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 14:02:59.038: INFO: >>> kubeConfig: /root/.kube/config I0507 14:02:59.078500 6 log.go:172] (0xc002d16bb0) (0xc001285220) Create stream I0507 14:02:59.078535 6 log.go:172] (0xc002d16bb0) (0xc001285220) Stream added, broadcasting: 1 I0507 14:02:59.080731 6 log.go:172] (0xc002d16bb0) Reply frame received for 1 I0507 14:02:59.080772 6 log.go:172] (0xc002d16bb0) (0xc0012852c0) Create stream I0507 14:02:59.080786 6 log.go:172] (0xc002d16bb0) (0xc0012852c0) Stream added, broadcasting: 3 I0507 14:02:59.081985 6 log.go:172] (0xc002d16bb0) Reply frame received for 3 I0507 14:02:59.082032 6 log.go:172] (0xc002d16bb0) (0xc001836500) Create stream I0507 14:02:59.082053 6 log.go:172] (0xc002d16bb0) (0xc001836500) Stream added, broadcasting: 5 I0507 14:02:59.082950 6 log.go:172] (0xc002d16bb0) Reply frame received for 5 I0507 14:02:59.149319 6 log.go:172] (0xc002d16bb0) Data frame received for 3 I0507 14:02:59.149412 6 log.go:172] (0xc0012852c0) (3) Data frame handling I0507 14:02:59.149440 6 log.go:172] (0xc0012852c0) (3) Data frame sent I0507 14:02:59.149456 6 log.go:172] (0xc002d16bb0) Data frame received for 3 I0507 14:02:59.149476 6 log.go:172] (0xc0012852c0) (3) Data frame handling I0507 14:02:59.149506 6 log.go:172] (0xc002d16bb0) Data frame received for 5 I0507 14:02:59.149539 6 log.go:172] (0xc001836500) (5) Data frame handling I0507 14:02:59.150986 6 log.go:172] (0xc002d16bb0) Data frame received for 1 I0507 14:02:59.151024 6 log.go:172] (0xc001285220) (1) Data frame handling I0507 14:02:59.151054 6 log.go:172] (0xc001285220) (1) Data frame sent I0507 14:02:59.151122 6 log.go:172] (0xc002d16bb0) (0xc001285220) Stream removed, broadcasting: 1 I0507 14:02:59.151170 6 log.go:172] (0xc002d16bb0) Go away received I0507 14:02:59.151297 6 log.go:172] (0xc002d16bb0) (0xc001285220) Stream removed, broadcasting: 1 I0507 14:02:59.151329 6 log.go:172] (0xc002d16bb0) (0xc0012852c0) Stream removed, broadcasting: 3 I0507 14:02:59.151354 6 log.go:172] (0xc002d16bb0) (0xc001836500) Stream removed, broadcasting: 5 May 7 14:02:59.151: INFO: Exec 
stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:02:59.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-759" for this suite. May 7 14:03:45.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:03:45.250: INFO: namespace e2e-kubelet-etc-hosts-759 deletion completed in 46.093965253s • [SLOW TEST:57.422 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:03:45.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 14:03:45.336: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:03:49.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5028" for this suite. 
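The websocket-logs spec exercises the API server's pod log subresource directly rather than going through kubectl. A rough plain-HTTPS equivalent of what is fetched, with placeholder server, pod and credential values (the real test negotiates a websocket upgrade on this same endpoint):

# Stream a container's logs straight from the API server's log subresource.
curl -k -H "Authorization: Bearer $TOKEN" \
  "https://<apiserver>/api/v1/namespaces/pods-5028/pods/<pod-name>/log?follow=true"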
May 7 14:04:35.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:04:35.489: INFO: namespace pods-5028 deletion completed in 46.109971733s • [SLOW TEST:50.239 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:04:35.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6103.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6103.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6103.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6103.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6103.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6103.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 14:04:41.619: INFO: DNS probes using dns-6103/dns-test-d3c07d4c-56bf-4800-a1ce-73091d594746 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:04:41.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6103" for this suite. 
May 7 14:04:47.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:04:47.798: INFO: namespace dns-6103 deletion completed in 6.136473512s • [SLOW TEST:12.308 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:04:47.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin May 7 14:04:47.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1591 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 7 14:04:53.411: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0507 14:04:53.344893 2820 log.go:172] (0xc000b0c630) (0xc000712820) Create stream\nI0507 14:04:53.344932 2820 log.go:172] (0xc000b0c630) (0xc000712820) Stream added, broadcasting: 1\nI0507 14:04:53.347357 2820 log.go:172] (0xc000b0c630) Reply frame received for 1\nI0507 14:04:53.347437 2820 log.go:172] (0xc000b0c630) (0xc0006b8000) Create stream\nI0507 14:04:53.347465 2820 log.go:172] (0xc000b0c630) (0xc0006b8000) Stream added, broadcasting: 3\nI0507 14:04:53.348652 2820 log.go:172] (0xc000b0c630) Reply frame received for 3\nI0507 14:04:53.348689 2820 log.go:172] (0xc000b0c630) (0xc0007128c0) Create stream\nI0507 14:04:53.348702 2820 log.go:172] (0xc000b0c630) (0xc0007128c0) Stream added, broadcasting: 5\nI0507 14:04:53.350245 2820 log.go:172] (0xc000b0c630) Reply frame received for 5\nI0507 14:04:53.350310 2820 log.go:172] (0xc000b0c630) (0xc000706000) Create stream\nI0507 14:04:53.350330 2820 log.go:172] (0xc000b0c630) (0xc000706000) Stream added, broadcasting: 7\nI0507 14:04:53.351323 2820 log.go:172] (0xc000b0c630) Reply frame received for 7\nI0507 14:04:53.351513 2820 log.go:172] (0xc0006b8000) (3) Writing data frame\nI0507 14:04:53.351613 2820 log.go:172] (0xc0006b8000) (3) Writing data frame\nI0507 14:04:53.352581 2820 log.go:172] (0xc000b0c630) Data frame received for 5\nI0507 14:04:53.352607 2820 log.go:172] (0xc0007128c0) (5) Data frame handling\nI0507 14:04:53.352632 2820 log.go:172] (0xc0007128c0) (5) Data frame sent\nI0507 14:04:53.353481 2820 log.go:172] (0xc000b0c630) Data frame received for 5\nI0507 14:04:53.353499 2820 log.go:172] (0xc0007128c0) (5) Data frame handling\nI0507 14:04:53.353515 2820 log.go:172] (0xc0007128c0) (5) Data frame sent\nI0507 14:04:53.386481 2820 log.go:172] (0xc000b0c630) Data frame received for 7\nI0507 14:04:53.386509 2820 log.go:172] (0xc000706000) (7) Data frame handling\nI0507 14:04:53.386563 2820 log.go:172] (0xc000b0c630) Data frame received for 5\nI0507 14:04:53.386611 2820 log.go:172] (0xc0007128c0) (5) Data frame handling\nI0507 14:04:53.387005 2820 log.go:172] (0xc000b0c630) Data frame received for 1\nI0507 14:04:53.387052 2820 log.go:172] (0xc000b0c630) (0xc0006b8000) Stream removed, broadcasting: 3\nI0507 14:04:53.387108 2820 log.go:172] (0xc000712820) (1) Data frame handling\nI0507 14:04:53.387138 2820 log.go:172] (0xc000712820) (1) Data frame sent\nI0507 14:04:53.387161 2820 log.go:172] (0xc000b0c630) (0xc000712820) Stream removed, broadcasting: 1\nI0507 14:04:53.387187 2820 log.go:172] (0xc000b0c630) Go away received\nI0507 14:04:53.387366 2820 log.go:172] (0xc000b0c630) (0xc000712820) Stream removed, broadcasting: 1\nI0507 14:04:53.387391 2820 log.go:172] (0xc000b0c630) (0xc0006b8000) Stream removed, broadcasting: 3\nI0507 14:04:53.387402 2820 log.go:172] (0xc000b0c630) (0xc0007128c0) Stream removed, broadcasting: 5\nI0507 14:04:53.387415 2820 log.go:172] (0xc000b0c630) (0xc000706000) Stream removed, broadcasting: 7\n" May 7 14:04:53.411: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:04:55.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1591" for this suite. 
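Reassembled from the log, the --rm invocation above pipes a payload into an attached one-off Job and relies on kubectl to delete it afterwards; the captured stdout ("abcd1234stdin closed") is the payload echoed back by cat, immediately followed by the trailing marker. An equivalent invocation, with printf supplying the same payload (the suite drives stdin programmatically rather than through a pipe):

printf 'abcd1234' | kubectl --namespace=kubectl-1591 run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 \
  --restart=OnFailure --attach=true --stdin -- sh -c 'cat && echo "stdin closed"'
# --rm deletes the job.batch object once the attached session ends, which is
# exactly what the "verifying the job ... was deleted" step checks.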
May 7 14:05:03.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:05:03.516: INFO: namespace kubectl-1591 deletion completed in 8.095501865s • [SLOW TEST:15.717 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:05:03.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 7 14:05:03.621: INFO: namespace kubectl-114 May 7 14:05:03.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-114' May 7 14:05:03.921: INFO: stderr: "" May 7 14:05:03.921: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 7 14:05:04.926: INFO: Selector matched 1 pods for map[app:redis] May 7 14:05:04.926: INFO: Found 0 / 1 May 7 14:05:05.926: INFO: Selector matched 1 pods for map[app:redis] May 7 14:05:05.926: INFO: Found 0 / 1 May 7 14:05:06.926: INFO: Selector matched 1 pods for map[app:redis] May 7 14:05:06.926: INFO: Found 0 / 1 May 7 14:05:07.925: INFO: Selector matched 1 pods for map[app:redis] May 7 14:05:07.925: INFO: Found 1 / 1 May 7 14:05:07.925: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 7 14:05:07.928: INFO: Selector matched 1 pods for map[app:redis] May 7 14:05:07.928: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 7 14:05:07.928: INFO: wait on redis-master startup in kubectl-114 May 7 14:05:07.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8znrc redis-master --namespace=kubectl-114' May 7 14:05:08.039: INFO: stderr: "" May 7 14:05:08.039: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 07 May 14:05:06.925 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 May 14:05:06.925 # Server started, Redis version 3.2.12\n1:M 07 May 14:05:06.925 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 May 14:05:06.925 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 7 14:05:08.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-114' May 7 14:05:08.182: INFO: stderr: "" May 7 14:05:08.182: INFO: stdout: "service/rm2 exposed\n" May 7 14:05:08.189: INFO: Service rm2 in namespace kubectl-114 found. STEP: exposing service May 7 14:05:10.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-114' May 7 14:05:10.343: INFO: stderr: "" May 7 14:05:10.343: INFO: stdout: "service/rm3 exposed\n" May 7 14:05:10.357: INFO: Service rm3 in namespace kubectl-114 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:05:12.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-114" for this suite. 
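The two expose steps above are the heart of the spec: a Service can be generated from an RC's selector, and a second Service can be generated from the first Service. Only names and ports differ; both select the same redis-master pods (commands reproduced from the log, with an assumed verification step added):

kubectl --namespace=kubectl-114 expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl --namespace=kubectl-114 expose service rm2 --name=rm3 --port=2345 --target-port=6379
# Both services should show the same selector and endpoints:
kubectl --namespace=kubectl-114 get svc rm2 rm3 -o wide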
May 7 14:05:36.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:05:36.460: INFO: namespace kubectl-114 deletion completed in 24.090419135s • [SLOW TEST:32.943 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:05:36.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 7 14:05:36.522: INFO: Waiting up to 5m0s for pod "pod-1678f279-fd50-460e-8b22-c44e3e0c28fe" in namespace "emptydir-6339" to be "success or failure" May 7 14:05:36.543: INFO: Pod "pod-1678f279-fd50-460e-8b22-c44e3e0c28fe": Phase="Pending", Reason="", readiness=false. Elapsed: 21.36246ms May 7 14:05:38.548: INFO: Pod "pod-1678f279-fd50-460e-8b22-c44e3e0c28fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026181245s May 7 14:05:40.553: INFO: Pod "pod-1678f279-fd50-460e-8b22-c44e3e0c28fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031611452s STEP: Saw pod success May 7 14:05:40.553: INFO: Pod "pod-1678f279-fd50-460e-8b22-c44e3e0c28fe" satisfied condition "success or failure" May 7 14:05:40.557: INFO: Trying to get logs from node iruya-worker2 pod pod-1678f279-fd50-460e-8b22-c44e3e0c28fe container test-container: STEP: delete the pod May 7 14:05:40.597: INFO: Waiting for pod pod-1678f279-fd50-460e-8b22-c44e3e0c28fe to disappear May 7 14:05:40.609: INFO: Pod pod-1678f279-fd50-460e-8b22-c44e3e0c28fe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:05:40.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6339" for this suite. 
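The emptyDir permission specs (0666 here, 0777 earlier) all create the same shape of pod: an emptyDir on the default medium, mounted into a short-lived container that reports the mount's mode and contents. A sketch of that pod with an assumed mount path and probe command (the log records only pod names):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /test-volume && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium = node disk; medium: Memory would use tmpfs
EOF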
May 7 14:05:46.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:05:46.701: INFO: namespace emptydir-6339 deletion completed in 6.087139806s • [SLOW TEST:10.241 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:05:46.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-94f1b9a8-930c-45f1-a8d8-1837464f1171 STEP: Creating secret with name s-test-opt-upd-aef04274-4b3a-4f99-a226-4830677ffee2 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-94f1b9a8-930c-45f1-a8d8-1837464f1171 STEP: Updating secret s-test-opt-upd-aef04274-4b3a-4f99-a226-4830677ffee2 STEP: Creating secret with name s-test-opt-create-5fc4e92d-c019-4c76-ab03-bd2b9d888adf STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:07:13.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1993" for this suite. 
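The optional-updates spec relies on two secret-volume behaviors: a volume whose secret is marked optional tolerates that secret being absent or deleted, and the kubelet eventually refreshes mounted contents when the secret changes (hence the long "waiting to observe update in volume" phase above). A sketch of such a volume with illustrative names; defaultMode is included only to tie back to the earlier defaultMode/fsGroup secrets spec:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: c
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: sec
      mountPath: /etc/sec
  volumes:
  - name: sec
    secret:
      secretName: s-test-opt-upd
      optional: true      # pod starts and keeps running even if the secret is missing
      defaultMode: 0440   # octal file mode for the projected keys
EOF
# After the secret's data changes, the files under /etc/sec converge to the new
# values on the kubelet's sync interval rather than instantly.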
May 7 14:07:35.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:07:35.374: INFO: namespace secrets-1993 deletion completed in 22.087426195s • [SLOW TEST:108.673 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:07:35.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-634f8e07-d9ee-45b3-bf72-b9946e11791e STEP: Creating a pod to test consume configMaps May 7 14:07:35.446: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-849686c8-a1e1-4475-84cd-024b794ca6d3" in namespace "projected-193" to be "success or failure" May 7 14:07:35.450: INFO: Pod "pod-projected-configmaps-849686c8-a1e1-4475-84cd-024b794ca6d3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.960951ms May 7 14:07:37.454: INFO: Pod "pod-projected-configmaps-849686c8-a1e1-4475-84cd-024b794ca6d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007764064s May 7 14:07:39.458: INFO: Pod "pod-projected-configmaps-849686c8-a1e1-4475-84cd-024b794ca6d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012309512s STEP: Saw pod success May 7 14:07:39.458: INFO: Pod "pod-projected-configmaps-849686c8-a1e1-4475-84cd-024b794ca6d3" satisfied condition "success or failure" May 7 14:07:39.462: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-849686c8-a1e1-4475-84cd-024b794ca6d3 container projected-configmap-volume-test: STEP: delete the pod May 7 14:07:39.488: INFO: Waiting for pod pod-projected-configmaps-849686c8-a1e1-4475-84cd-024b794ca6d3 to disappear May 7 14:07:39.522: INFO: Pod pod-projected-configmaps-849686c8-a1e1-4475-84cd-024b794ca6d3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:07:39.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-193" for this suite. 
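Projected configMap volumes, as consumed above, behave like plain configMap volumes but go through the projected volume plugin, which can merge several sources into one mount point. A minimal single-source sketch (the configMap name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}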
May 7 14:07:45.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:07:45.620: INFO: namespace projected-193 deletion completed in 6.093702212s • [SLOW TEST:10.246 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:07:45.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args May 7 14:07:45.720: INFO: Waiting up to 5m0s for pod "var-expansion-24224980-aad0-490c-85cd-fa3cfdd99fc6" in namespace "var-expansion-796" to be "success or failure" May 7 14:07:45.726: INFO: Pod "var-expansion-24224980-aad0-490c-85cd-fa3cfdd99fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.715248ms May 7 14:07:47.799: INFO: Pod "var-expansion-24224980-aad0-490c-85cd-fa3cfdd99fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079252013s May 7 14:07:49.803: INFO: Pod "var-expansion-24224980-aad0-490c-85cd-fa3cfdd99fc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083452678s STEP: Saw pod success May 7 14:07:49.803: INFO: Pod "var-expansion-24224980-aad0-490c-85cd-fa3cfdd99fc6" satisfied condition "success or failure" May 7 14:07:49.806: INFO: Trying to get logs from node iruya-worker pod var-expansion-24224980-aad0-490c-85cd-fa3cfdd99fc6 container dapi-container: STEP: delete the pod May 7 14:07:49.913: INFO: Waiting for pod var-expansion-24224980-aad0-490c-85cd-fa3cfdd99fc6 to disappear May 7 14:07:49.929: INFO: Pod var-expansion-24224980-aad0-490c-85cd-fa3cfdd99fc6 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:07:49.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-796" for this suite. 
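The Variable Expansion specs (args here, and command later in this run) exercise kubelet-side substitution: $(VAR) references in a container's command and args are replaced from the container's environment before the process is started; a reference that names no defined variable is passed through verbatim. Roughly, with illustrative values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c"},
				// The kubelet rewrites $(MESSAGE) from Env before exec,
				// so the shell sees the literal expanded string.
				Args: []string{"echo $(MESSAGE)"},
				Env:  []corev1.EnvVar{{Name: "MESSAGE", Value: "test-value"}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Args[0])
}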
May 7 14:07:55.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:07:56.063: INFO: namespace var-expansion-796 deletion completed in 6.129945301s
• [SLOW TEST:10.441 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:07:56.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-cl2m
STEP: Creating a pod to test atomic-volume-subpath
May 7 14:07:56.192: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-cl2m" in namespace "subpath-4832" to be "success or failure"
May 7 14:07:56.216: INFO: Pod "pod-subpath-test-secret-cl2m": Phase="Pending", Reason="", readiness=false. Elapsed: 24.163684ms
May 7 14:07:58.237: INFO: Pod "pod-subpath-test-secret-cl2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045552047s
May 7 14:08:00.242: INFO: Pod "pod-subpath-test-secret-cl2m": Phase="Running", Reason="", readiness=true. Elapsed: 4.050074634s
May 7 14:08:02.246: INFO: Pod "pod-subpath-test-secret-cl2m": Phase="Running", Reason="", readiness=true. Elapsed: 6.054430265s
May 7 14:08:04.250: INFO: Pod "pod-subpath-test-secret-cl2m": Phase="Running", Reason="", readiness=true. Elapsed: 8.05863743s
May 7 14:08:06.254: INFO: Pod "pod-subpath-test-secret-cl2m": Phase="Running", Reason="", readiness=true. Elapsed: 10.062443951s
May 7 14:08:08.259: INFO: Pod "pod-subpath-test-secret-cl2m": Phase="Running", Reason="", readiness=true. Elapsed: 12.067033592s
May 7 14:08:10.263: INFO: Pod "pod-subpath-test-secret-cl2m": Phase="Running", Reason="", readiness=true. Elapsed: 14.071341619s
May 7 14:08:12.268: INFO: Pod "pod-subpath-test-secret-cl2m": Phase="Running", Reason="", readiness=true. Elapsed: 16.076007288s
May 7 14:08:14.272: INFO: Pod "pod-subpath-test-secret-cl2m": Phase="Running", Reason="", readiness=true. Elapsed: 18.080663559s
May 7 14:08:16.277: INFO: Pod "pod-subpath-test-secret-cl2m": Phase="Running", Reason="", readiness=true. Elapsed: 20.085059871s
May 7 14:08:18.281: INFO: Pod "pod-subpath-test-secret-cl2m": Phase="Running", Reason="", readiness=true. Elapsed: 22.089198031s
May 7 14:08:20.284: INFO: Pod "pod-subpath-test-secret-cl2m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.092673244s
STEP: Saw pod success
May 7 14:08:20.284: INFO: Pod "pod-subpath-test-secret-cl2m" satisfied condition "success or failure"
May 7 14:08:20.287: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-cl2m container test-container-subpath-secret-cl2m:
STEP: delete the pod
May 7 14:08:20.314: INFO: Waiting for pod pod-subpath-test-secret-cl2m to disappear
May 7 14:08:20.318: INFO: Pod pod-subpath-test-secret-cl2m no longer exists
STEP: Deleting pod pod-subpath-test-secret-cl2m
May 7 14:08:20.318: INFO: Deleting pod "pod-subpath-test-secret-cl2m" in namespace "subpath-4832"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 14:08:20.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4832" for this suite.
May 7 14:08:26.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:08:26.418: INFO: namespace subpath-4832 deletion completed in 6.095760973s
• [SLOW TEST:30.354 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:08:26.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 14:08:26.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8757" for this suite.
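Stepping back to the Subpath "atomic writer" spec above: it mounts a single file out of a secret volume via subPath, and the pod stays Running for roughly 24 seconds while the test container repeatedly reads that file. Atomic-writer volumes (secret, configMap, downwardAPI, projected) publish updates by swapping a symlink, and subPath pins one entry from the volume. The mount shape, sketched with hypothetical names (the secret is assumed to carry a key named "sub"):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}, // hypothetical
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "while true; do cat /test/sub; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test/sub",
					SubPath:   "sub", // mount only the "sub" entry from the volume
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}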
May 7 14:08:32.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:08:32.630: INFO: namespace services-8757 deletion completed in 6.079096122s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.212 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:08:32.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults May 7 14:08:32.723: INFO: Waiting up to 5m0s for pod "client-containers-28223ad8-5707-4a1e-8a94-d39be4fe9e01" in namespace "containers-8839" to be "success or failure" May 7 14:08:32.726: INFO: Pod "client-containers-28223ad8-5707-4a1e-8a94-d39be4fe9e01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.918866ms May 7 14:08:34.835: INFO: Pod "client-containers-28223ad8-5707-4a1e-8a94-d39be4fe9e01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111519775s May 7 14:08:36.839: INFO: Pod "client-containers-28223ad8-5707-4a1e-8a94-d39be4fe9e01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.115207428s STEP: Saw pod success May 7 14:08:36.839: INFO: Pod "client-containers-28223ad8-5707-4a1e-8a94-d39be4fe9e01" satisfied condition "success or failure" May 7 14:08:36.841: INFO: Trying to get logs from node iruya-worker pod client-containers-28223ad8-5707-4a1e-8a94-d39be4fe9e01 container test-container: STEP: delete the pod May 7 14:08:36.887: INFO: Waiting for pod client-containers-28223ad8-5707-4a1e-8a94-d39be4fe9e01 to disappear May 7 14:08:36.894: INFO: Pod client-containers-28223ad8-5707-4a1e-8a94-d39be4fe9e01 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:08:36.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8839" for this suite. 

May 7 14:08:42.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:08:43.032: INFO: namespace containers-8839 deletion completed in 6.135973814s
• [SLOW TEST:10.402 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:08:43.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 7 14:08:43.231: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 7 14:08:43.249: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 14:08:43.253: INFO: Number of nodes with available pods: 0
May 7 14:08:43.253: INFO: Node iruya-worker is running more than one daemon pod
May 7 14:08:44.259: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 14:08:44.262: INFO: Number of nodes with available pods: 0
May 7 14:08:44.262: INFO: Node iruya-worker is running more than one daemon pod
May 7 14:08:45.383: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 14:08:45.386: INFO: Number of nodes with available pods: 0
May 7 14:08:45.386: INFO: Node iruya-worker is running more than one daemon pod
May 7 14:08:46.258: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 14:08:46.262: INFO: Number of nodes with available pods: 0
May 7 14:08:46.262: INFO: Node iruya-worker is running more than one daemon pod
May 7 14:08:47.257: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 14:08:47.259: INFO: Number of nodes with available pods: 2
May 7 14:08:47.259: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 7 14:08:47.288: INFO: Wrong image for pod: daemon-set-lqppb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 7 14:08:47.288: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 7 14:08:47.309: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 14:08:48.314: INFO: Wrong image for pod: daemon-set-lqppb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 7 14:08:48.314: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 7 14:08:48.317: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 14:08:49.314: INFO: Wrong image for pod: daemon-set-lqppb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 7 14:08:49.314: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 7 14:08:49.318: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 14:08:50.313: INFO: Wrong image for pod: daemon-set-lqppb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 7 14:08:50.313: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 7 14:08:50.317: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 14:08:51.314: INFO: Wrong image for pod: daemon-set-lqppb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 7 14:08:51.314: INFO: Pod daemon-set-lqppb is not available
May 7 14:08:51.314: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 7 14:08:51.318: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 14:08:52.314: INFO: Wrong image for pod: daemon-set-lqppb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 7 14:08:52.314: INFO: Pod daemon-set-lqppb is not available
May 7 14:08:52.314: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 7 14:08:52.317: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 7 14:08:53.313: INFO: Wrong image for pod: daemon-set-lqppb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 7 14:08:53.313: INFO: Pod daemon-set-lqppb is not available
May 7 14:08:53.313: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 7 14:08:53.317: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:08:54.314: INFO: Wrong image for pod: daemon-set-lqppb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:08:54.314: INFO: Pod daemon-set-lqppb is not available May 7 14:08:54.314: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:08:54.318: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:08:55.314: INFO: Wrong image for pod: daemon-set-lqppb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:08:55.314: INFO: Pod daemon-set-lqppb is not available May 7 14:08:55.314: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:08:55.317: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:08:56.313: INFO: Wrong image for pod: daemon-set-lqppb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:08:56.313: INFO: Pod daemon-set-lqppb is not available May 7 14:08:56.313: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:08:56.318: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:08:57.322: INFO: Wrong image for pod: daemon-set-lqppb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:08:57.322: INFO: Pod daemon-set-lqppb is not available May 7 14:08:57.322: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:08:57.326: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:08:58.314: INFO: Wrong image for pod: daemon-set-lqppb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:08:58.314: INFO: Pod daemon-set-lqppb is not available May 7 14:08:58.314: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:08:58.318: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:08:59.313: INFO: Wrong image for pod: daemon-set-lqppb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:08:59.313: INFO: Pod daemon-set-lqppb is not available May 7 14:08:59.313: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 7 14:08:59.316: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:09:00.314: INFO: Wrong image for pod: daemon-set-lqppb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:09:00.315: INFO: Pod daemon-set-lqppb is not available May 7 14:09:00.315: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:09:00.318: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:09:01.312: INFO: Wrong image for pod: daemon-set-lqppb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:09:01.312: INFO: Pod daemon-set-lqppb is not available May 7 14:09:01.312: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:09:01.315: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:09:02.314: INFO: Pod daemon-set-qpm5j is not available May 7 14:09:02.314: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:09:02.318: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:09:03.346: INFO: Pod daemon-set-qpm5j is not available May 7 14:09:03.346: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:09:03.350: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:09:04.317: INFO: Pod daemon-set-qpm5j is not available May 7 14:09:04.317: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:09:04.320: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:09:05.315: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 7 14:09:05.318: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:09:06.315: INFO: Wrong image for pod: daemon-set-wrjg4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 7 14:09:06.315: INFO: Pod daemon-set-wrjg4 is not available May 7 14:09:06.319: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:09:07.313: INFO: Pod daemon-set-q5pgm is not available May 7 14:09:07.317: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 7 14:09:07.320: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:09:07.323: INFO: Number of nodes with available pods: 1 May 7 14:09:07.323: INFO: Node iruya-worker is running more than one daemon pod May 7 14:09:08.328: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:09:08.332: INFO: Number of nodes with available pods: 1 May 7 14:09:08.332: INFO: Node iruya-worker is running more than one daemon pod May 7 14:09:09.334: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:09:09.337: INFO: Number of nodes with available pods: 1 May 7 14:09:09.337: INFO: Node iruya-worker is running more than one daemon pod May 7 14:09:10.328: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 7 14:09:10.332: INFO: Number of nodes with available pods: 2 May 7 14:09:10.332: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2042, will wait for the garbage collector to delete the pods May 7 14:09:10.408: INFO: Deleting DaemonSet.extensions daemon-set took: 7.610046ms May 7 14:09:10.708: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.303933ms May 7 14:09:22.382: INFO: Number of nodes with available pods: 0 May 7 14:09:22.382: INFO: Number of running nodes: 0, number of available pods: 0 May 7 14:09:22.385: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2042/daemonsets","resourceVersion":"9543441"},"items":null} May 7 14:09:22.387: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2042/pods","resourceVersion":"9543441"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:09:22.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2042" for this suite. 
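The DaemonSet spec above creates a two-pod daemon set, flips the template image from nginx:1.14-alpine to the redis test image, and watches the RollingUpdate strategy replace pods node by node; with the default maxUnavailable of 1, only one pod is "not available" at any moment, which matches the poll loop in the log. A sketch of the object involved:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative label key
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate with the default maxUnavailable of 1 replaces
			// pods one node at a time.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	// Updating the template image is what kicks off the node-by-node
	// replacement visible in the log above.
	ds.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/redis:1.0"
	fmt.Println(ds.Spec.Template.Spec.Containers[0].Image)
}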
May 7 14:09:28.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:09:28.506: INFO: namespace daemonsets-2042 deletion completed in 6.108354515s • [SLOW TEST:45.473 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:09:28.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 7 14:09:32.735: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:09:32.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2871" for this suite. 
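The Container Runtime spec above checks a subtlety of TerminationMessagePolicy: FallbackToLogsOnError only substitutes container logs for the termination message when the container fails. A container that exits 0 without writing /dev/termination-log therefore reports an empty message, which is the "Expected: &{} to match Container's Termination Message: --" line above. A sketch of such a container:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "termination-message-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"true"}, // exit 0, write nothing anywhere
				// On success the kubelet reads the (empty) file; the log
				// fallback applies only on failure.
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	fmt.Println(pod.Name)
}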
May 7 14:09:38.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:09:38.859: INFO: namespace container-runtime-2871 deletion completed in 6.105769127s
• [SLOW TEST:10.352 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:09:38.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 7 14:09:38.911: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
May 7 14:09:38.943: INFO: Pod name sample-pod: Found 0 pods out of 1
May 7 14:09:43.948: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 7 14:09:43.948: INFO: Creating deployment "test-rolling-update-deployment"
May 7 14:09:43.951: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
May 7 14:09:44.014: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
May 7 14:09:46.022: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
May 7 14:09:46.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724457384, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724457384, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724457384, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724457384, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 7 14:09:48.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724457384, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724457384, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724457384, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724457384, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 7 14:09:50.030: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 7 14:09:50.040: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-8333,SelfLink:/apis/apps/v1/namespaces/deployment-8333/deployments/test-rolling-update-deployment,UID:a5209d59-cde7-4a86-8970-adcbe0492cd3,ResourceVersion:9543592,Generation:1,CreationTimestamp:2020-05-07 14:09:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-07 14:09:44 +0000 UTC 2020-05-07 14:09:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-07 14:09:48 +0000 UTC 2020-05-07 14:09:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
May 7 14:09:50.043: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-8333,SelfLink:/apis/apps/v1/namespaces/deployment-8333/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:a5ef9fbc-8c26-4d54-b32b-be181d63eb76,ResourceVersion:9543581,Generation:1,CreationTimestamp:2020-05-07 14:09:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a5209d59-cde7-4a86-8970-adcbe0492cd3 0xc0027ac987 0xc0027ac988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
May 7 14:09:50.043: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
May 7 14:09:50.043: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-8333,SelfLink:/apis/apps/v1/namespaces/deployment-8333/replicasets/test-rolling-update-controller,UID:c25fd1b3-f544-4a06-bc37-30a63d0102cd,ResourceVersion:9543590,Generation:2,CreationTimestamp:2020-05-07 14:09:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a5209d59-cde7-4a86-8970-adcbe0492cd3 0xc0027ac8b7 0xc0027ac8b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
May 7 14:09:50.046: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-qjjxd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-qjjxd,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-8333,SelfLink:/api/v1/namespaces/deployment-8333/pods/test-rolling-update-deployment-79f6b9d75c-qjjxd,UID:55c56977-0615-412e-b485-df5c8125bd9a,ResourceVersion:9543580,Generation:0,CreationTimestamp:2020-05-07 14:09:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c a5ef9fbc-8c26-4d54-b32b-be181d63eb76 0xc0027ad2b7 0xc0027ad2b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w7jdd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w7jdd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-w7jdd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027ad330} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027ad350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 14:09:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 14:09:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 14:09:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 14:09:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.247,StartTime:2020-05-07 14:09:44 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-07 14:09:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://e0efb333899b9e467c54cca218199977cbcefe4f6075bb72d60062778636f5c1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 14:09:50.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8333" for this suite.
May 7 14:09:56.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:09:56.163: INFO: namespace deployment-8333 deletion completed in 6.113624863s
• [SLOW TEST:17.304 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:09:56.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-3638
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-3638
STEP: Deleting pre-stop pod
May 7 14:10:09.268: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 14:10:09.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-3638" for this suite.
May 7 14:10:47.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:10:47.402: INFO: namespace prestop-3638 deletion completed in 38.123475436s • [SLOW TEST:51.238 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:10:47.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command May 7 14:10:47.475: INFO: Waiting up to 5m0s for pod "var-expansion-681696bc-6438-4a6d-be5f-30cc02adec21" in namespace "var-expansion-4275" to be "success or failure" May 7 14:10:47.508: INFO: Pod "var-expansion-681696bc-6438-4a6d-be5f-30cc02adec21": Phase="Pending", Reason="", readiness=false. Elapsed: 33.374572ms May 7 14:10:49.581: INFO: Pod "var-expansion-681696bc-6438-4a6d-be5f-30cc02adec21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106010069s May 7 14:10:51.585: INFO: Pod "var-expansion-681696bc-6438-4a6d-be5f-30cc02adec21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110177442s STEP: Saw pod success May 7 14:10:51.585: INFO: Pod "var-expansion-681696bc-6438-4a6d-be5f-30cc02adec21" satisfied condition "success or failure" May 7 14:10:51.588: INFO: Trying to get logs from node iruya-worker pod var-expansion-681696bc-6438-4a6d-be5f-30cc02adec21 container dapi-container: STEP: delete the pod May 7 14:10:51.625: INFO: Waiting for pod var-expansion-681696bc-6438-4a6d-be5f-30cc02adec21 to disappear May 7 14:10:51.638: INFO: Pod var-expansion-681696bc-6438-4a6d-be5f-30cc02adec21 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:10:51.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4275" for this suite. 
May 7 14:10:57.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:10:57.733: INFO: namespace var-expansion-4275 deletion completed in 6.092781333s • [SLOW TEST:10.331 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:10:57.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 7 14:10:57.837: INFO: Waiting up to 5m0s for pod "pod-bd7829cc-9471-4cc6-aa70-c90b72ffcf1e" in namespace "emptydir-4842" to be "success or failure" May 7 14:10:57.900: INFO: Pod "pod-bd7829cc-9471-4cc6-aa70-c90b72ffcf1e": Phase="Pending", Reason="", readiness=false. Elapsed: 62.983518ms May 7 14:10:59.904: INFO: Pod "pod-bd7829cc-9471-4cc6-aa70-c90b72ffcf1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066616489s May 7 14:11:01.907: INFO: Pod "pod-bd7829cc-9471-4cc6-aa70-c90b72ffcf1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070034738s STEP: Saw pod success May 7 14:11:01.908: INFO: Pod "pod-bd7829cc-9471-4cc6-aa70-c90b72ffcf1e" satisfied condition "success or failure" May 7 14:11:01.910: INFO: Trying to get logs from node iruya-worker pod pod-bd7829cc-9471-4cc6-aa70-c90b72ffcf1e container test-container: STEP: delete the pod May 7 14:11:01.949: INFO: Waiting for pod pod-bd7829cc-9471-4cc6-aa70-c90b72ffcf1e to disappear May 7 14:11:01.967: INFO: Pod pod-bd7829cc-9471-4cc6-aa70-c90b72ffcf1e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:11:01.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4842" for this suite. 
May 7 14:11:07.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:11:08.098: INFO: namespace emptydir-4842 deletion completed in 6.127730566s • [SLOW TEST:10.365 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:11:08.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 7 14:11:08.196: INFO: Waiting up to 5m0s for pod "downward-api-d68be38b-636b-4782-8d95-c46777c2bc1f" in namespace "downward-api-7840" to be "success or failure" May 7 14:11:08.214: INFO: Pod "downward-api-d68be38b-636b-4782-8d95-c46777c2bc1f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.152677ms May 7 14:11:10.218: INFO: Pod "downward-api-d68be38b-636b-4782-8d95-c46777c2bc1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02251024s May 7 14:11:12.222: INFO: Pod "downward-api-d68be38b-636b-4782-8d95-c46777c2bc1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02671049s STEP: Saw pod success May 7 14:11:12.222: INFO: Pod "downward-api-d68be38b-636b-4782-8d95-c46777c2bc1f" satisfied condition "success or failure" May 7 14:11:12.225: INFO: Trying to get logs from node iruya-worker pod downward-api-d68be38b-636b-4782-8d95-c46777c2bc1f container dapi-container: STEP: delete the pod May 7 14:11:12.288: INFO: Waiting for pod downward-api-d68be38b-636b-4782-8d95-c46777c2bc1f to disappear May 7 14:11:12.300: INFO: Pod downward-api-d68be38b-636b-4782-8d95-c46777c2bc1f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:11:12.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7840" for this suite. 
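The Downward API test above relies on a specific fallback: when a container declares no resource limits, env vars sourced from resourceFieldRef resolve to the node's allocatable capacity instead. A sketch of that wiring, with hypothetical names:

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPILimitsPod sketches env vars backed by limits.cpu/limits.memory.
// Because Resources is left empty, both fall back to node allocatable.
func downwardAPILimitsPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []v1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &v1.EnvVarSource{
						ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.cpu"}}},
					{Name: "MEMORY_LIMIT", ValueFrom: &v1.EnvVarSource{
						ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.memory"}}},
				},
			}},
		},
	}
}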
May 7 14:11:18.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:11:18.392: INFO: namespace downward-api-7840 deletion completed in 6.088582501s • [SLOW TEST:10.294 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:11:18.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-26221477-a01a-4f0a-9654-4fc9cf0ffb2c STEP: Creating secret with name secret-projected-all-test-volume-0bc7abce-03f4-4240-aad0-9b6915027cf2 STEP: Creating a pod to test Check all projections for projected volume plugin May 7 14:11:18.470: INFO: Waiting up to 5m0s for pod "projected-volume-ac96e25b-0a92-4c2d-a408-b699079e7358" in namespace "projected-9912" to be "success or failure" May 7 14:11:18.474: INFO: Pod "projected-volume-ac96e25b-0a92-4c2d-a408-b699079e7358": Phase="Pending", Reason="", readiness=false. Elapsed: 3.755023ms May 7 14:11:20.476: INFO: Pod "projected-volume-ac96e25b-0a92-4c2d-a408-b699079e7358": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00607849s May 7 14:11:22.481: INFO: Pod "projected-volume-ac96e25b-0a92-4c2d-a408-b699079e7358": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011188011s STEP: Saw pod success May 7 14:11:22.481: INFO: Pod "projected-volume-ac96e25b-0a92-4c2d-a408-b699079e7358" satisfied condition "success or failure" May 7 14:11:22.484: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-ac96e25b-0a92-4c2d-a408-b699079e7358 container projected-all-volume-test: STEP: delete the pod May 7 14:11:22.519: INFO: Waiting for pod projected-volume-ac96e25b-0a92-4c2d-a408-b699079e7358 to disappear May 7 14:11:22.534: INFO: Pod projected-volume-ac96e25b-0a92-4c2d-a408-b699079e7358 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:11:22.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9912" for this suite. 
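The "Projected combined" test above mounts all three projection sources (configMap, secret, downwardAPI) in a single volume, which is the point of the projected plugin: one mount path, many sources. A sketch of such a volume, with hypothetical object names:

package sketch

import v1 "k8s.io/api/core/v1"

// projectedAllVolume sketches a single projected volume combining a
// configMap, a secret, and a downward API file. Names are assumptions.
func projectedAllVolume() v1.Volume {
	return v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{
					{ConfigMap: &v1.ConfigMapProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: "my-config"}}},
					{Secret: &v1.SecretProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: "my-secret"}}},
					{DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					}},
				},
			},
		},
	}
}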
May 7 14:11:28.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:11:28.625: INFO: namespace projected-9912 deletion completed in 6.088129309s • [SLOW TEST:10.233 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:11:28.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-c212768f-29fd-4472-8fc6-c505133d50b6 STEP: Creating a pod to test consume secrets May 7 14:11:28.725: INFO: Waiting up to 5m0s for pod "pod-secrets-76b8e028-284a-4e05-9046-1740e67dce0d" in namespace "secrets-9491" to be "success or failure" May 7 14:11:28.747: INFO: Pod "pod-secrets-76b8e028-284a-4e05-9046-1740e67dce0d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.795029ms May 7 14:11:30.751: INFO: Pod "pod-secrets-76b8e028-284a-4e05-9046-1740e67dce0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026107894s May 7 14:11:32.757: INFO: Pod "pod-secrets-76b8e028-284a-4e05-9046-1740e67dce0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032223483s STEP: Saw pod success May 7 14:11:32.757: INFO: Pod "pod-secrets-76b8e028-284a-4e05-9046-1740e67dce0d" satisfied condition "success or failure" May 7 14:11:32.761: INFO: Trying to get logs from node iruya-worker pod pod-secrets-76b8e028-284a-4e05-9046-1740e67dce0d container secret-volume-test: STEP: delete the pod May 7 14:11:32.796: INFO: Waiting for pod pod-secrets-76b8e028-284a-4e05-9046-1740e67dce0d to disappear May 7 14:11:32.839: INFO: Pod pod-secrets-76b8e028-284a-4e05-9046-1740e67dce0d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:11:32.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9491" for this suite. 
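"With mappings" in the secret-volume test above means the volume does not project every key under its own name; instead each key is remapped to an explicit path (and optionally a per-file mode) via items. A sketch, with hypothetical key and path names:

package sketch

import v1 "k8s.io/api/core/v1"

// secretWithMappings sketches a secret volume that remaps the key
// "data-1" to a different file path inside the mount.
func secretWithMappings() v1.Volume {
	mode := int32(0400)
	return v1.Volume{
		Name: "secret-volume",
		VolumeSource: v1.VolumeSource{
			Secret: &v1.SecretVolumeSource{
				SecretName: "secret-test-map",
				Items: []v1.KeyToPath{
					{Key: "data-1", Path: "new-path-data-1", Mode: &mode},
				},
			},
		},
	}
}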
May 7 14:11:38.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:11:38.938: INFO: namespace secrets-9491 deletion completed in 6.094844765s • [SLOW TEST:10.312 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:11:38.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:11:43.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2900" for this suite. 
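The hostAliases test above checks that entries declared on the pod spec are appended by the kubelet to the container's /etc/hosts. A minimal sketch, with a placeholder IP and hostnames:

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostAliasesPod sketches a pod whose extra /etc/hosts entries come
// from spec.hostAliases. IP and hostnames are illustrative.
func hostAliasesPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "host-aliases-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			HostAliases: []v1.HostAlias{
				{IP: "123.45.67.89", Hostnames: []string{"foo.local", "bar.local"}},
			},
			Containers: []v1.Container{{
				Name:    "busybox-host-aliases",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/hosts"},
			}},
		},
	}
}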
May 7 14:12:33.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:12:33.255: INFO: namespace kubelet-test-2900 deletion completed in 50.19242319s • [SLOW TEST:54.316 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:12:33.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 7 14:12:33.319: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3911824-7830-4b2d-a687-6c801a88e404" in namespace "projected-5615" to be "success or failure" May 7 14:12:33.335: INFO: Pod "downwardapi-volume-d3911824-7830-4b2d-a687-6c801a88e404": Phase="Pending", Reason="", readiness=false. Elapsed: 16.040242ms May 7 14:12:35.458: INFO: Pod "downwardapi-volume-d3911824-7830-4b2d-a687-6c801a88e404": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139313597s May 7 14:12:37.462: INFO: Pod "downwardapi-volume-d3911824-7830-4b2d-a687-6c801a88e404": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.142857672s STEP: Saw pod success May 7 14:12:37.462: INFO: Pod "downwardapi-volume-d3911824-7830-4b2d-a687-6c801a88e404" satisfied condition "success or failure" May 7 14:12:37.464: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d3911824-7830-4b2d-a687-6c801a88e404 container client-container: STEP: delete the pod May 7 14:12:37.524: INFO: Waiting for pod downwardapi-volume-d3911824-7830-4b2d-a687-6c801a88e404 to disappear May 7 14:12:37.538: INFO: Pod downwardapi-volume-d3911824-7830-4b2d-a687-6c801a88e404 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:12:37.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5615" for this suite. 
May 7 14:12:43.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:12:43.654: INFO: namespace projected-5615 deletion completed in 6.113278973s • [SLOW TEST:10.399 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:12:43.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-a42305ac-0857-4109-ab22-75fe7bf17fb3 STEP: Creating a pod to test consume configMaps May 7 14:12:43.751: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e6d08197-457a-489e-bb00-cafa861665f7" in namespace "projected-5103" to be "success or failure" May 7 14:12:43.769: INFO: Pod "pod-projected-configmaps-e6d08197-457a-489e-bb00-cafa861665f7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.340081ms May 7 14:12:45.774: INFO: Pod "pod-projected-configmaps-e6d08197-457a-489e-bb00-cafa861665f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022693734s May 7 14:12:47.778: INFO: Pod "pod-projected-configmaps-e6d08197-457a-489e-bb00-cafa861665f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027034964s STEP: Saw pod success May 7 14:12:47.778: INFO: Pod "pod-projected-configmaps-e6d08197-457a-489e-bb00-cafa861665f7" satisfied condition "success or failure" May 7 14:12:47.781: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-e6d08197-457a-489e-bb00-cafa861665f7 container projected-configmap-volume-test: STEP: delete the pod May 7 14:12:47.804: INFO: Waiting for pod pod-projected-configmaps-e6d08197-457a-489e-bb00-cafa861665f7 to disappear May 7 14:12:47.824: INFO: Pod pod-projected-configmaps-e6d08197-457a-489e-bb00-cafa861665f7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:12:47.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5103" for this suite. 
May 7 14:12:53.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:12:53.939: INFO: namespace projected-5103 deletion completed in 6.110637505s • [SLOW TEST:10.284 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:12:53.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-8e39b355-dea2-42ea-b2b8-d70ad27ae17b STEP: Creating a pod to test consume secrets May 7 14:12:54.028: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-63dca1b3-6c99-4d72-81a3-8b8e960a4e14" in namespace "projected-7117" to be "success or failure" May 7 14:12:54.051: INFO: Pod "pod-projected-secrets-63dca1b3-6c99-4d72-81a3-8b8e960a4e14": Phase="Pending", Reason="", readiness=false. Elapsed: 23.144137ms May 7 14:12:56.055: INFO: Pod "pod-projected-secrets-63dca1b3-6c99-4d72-81a3-8b8e960a4e14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027204704s May 7 14:12:58.060: INFO: Pod "pod-projected-secrets-63dca1b3-6c99-4d72-81a3-8b8e960a4e14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032058402s STEP: Saw pod success May 7 14:12:58.060: INFO: Pod "pod-projected-secrets-63dca1b3-6c99-4d72-81a3-8b8e960a4e14" satisfied condition "success or failure" May 7 14:12:58.068: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-63dca1b3-6c99-4d72-81a3-8b8e960a4e14 container projected-secret-volume-test: STEP: delete the pod May 7 14:12:58.099: INFO: Waiting for pod pod-projected-secrets-63dca1b3-6c99-4d72-81a3-8b8e960a4e14 to disappear May 7 14:12:58.134: INFO: Pod pod-projected-secrets-63dca1b3-6c99-4d72-81a3-8b8e960a4e14 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:12:58.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7117" for this suite. 
May 7 14:13:04.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:13:04.267: INFO: namespace projected-7117 deletion completed in 6.129540622s • [SLOW TEST:10.328 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:13:04.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-3a458d05-472c-40a2-a3ed-f479a5899f82 STEP: Creating a pod to test consume secrets May 7 14:13:04.335: INFO: Waiting up to 5m0s for pod "pod-secrets-c57cda09-0843-4b57-bbb5-b0f505f1d2a3" in namespace "secrets-6464" to be "success or failure" May 7 14:13:04.338: INFO: Pod "pod-secrets-c57cda09-0843-4b57-bbb5-b0f505f1d2a3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.266554ms May 7 14:13:06.346: INFO: Pod "pod-secrets-c57cda09-0843-4b57-bbb5-b0f505f1d2a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010383175s May 7 14:13:08.350: INFO: Pod "pod-secrets-c57cda09-0843-4b57-bbb5-b0f505f1d2a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014644249s STEP: Saw pod success May 7 14:13:08.350: INFO: Pod "pod-secrets-c57cda09-0843-4b57-bbb5-b0f505f1d2a3" satisfied condition "success or failure" May 7 14:13:08.353: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-c57cda09-0843-4b57-bbb5-b0f505f1d2a3 container secret-volume-test: STEP: delete the pod May 7 14:13:08.417: INFO: Waiting for pod pod-secrets-c57cda09-0843-4b57-bbb5-b0f505f1d2a3 to disappear May 7 14:13:08.428: INFO: Pod pod-secrets-c57cda09-0843-4b57-bbb5-b0f505f1d2a3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:13:08.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6464" for this suite. 
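Unlike the per-item mappings above, the defaultMode test sets one mode on the whole secret volume, which every projected file inherits unless an item overrides it. A sketch with a hypothetical secret name:

package sketch

import v1 "k8s.io/api/core/v1"

// secretWithDefaultMode sketches a secret volume whose files all get
// mode 0400 via DefaultMode.
func secretWithDefaultMode() v1.Volume {
	defaultMode := int32(0400)
	return v1.Volume{
		Name: "secret-volume",
		VolumeSource: v1.VolumeSource{
			Secret: &v1.SecretVolumeSource{
				SecretName:  "secret-test",
				DefaultMode: &defaultMode, // applied to every projected file
			},
		},
	}
}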
May 7 14:13:14.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:13:14.528: INFO: namespace secrets-6464 deletion completed in 6.096727867s • [SLOW TEST:10.261 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:13:14.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 7 14:13:14.635: INFO: Pod name pod-release: Found 0 pods out of 1 May 7 14:13:19.647: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:13:20.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3991" for this suite. 
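The ReplicationController test above turns on the release behavior its name describes: an RC only manages pods whose labels match its selector, so patching a pod's label out of that set releases the pod from the controller, which then creates a replacement to keep the replica count. A sketch of an RC whose selector a pod can be relabeled out of; names and image are assumptions:

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podReleaseRC sketches an RC keyed on the "name" label; changing that
// label on one of its pods releases the pod from management.
func podReleaseRC() *v1.ReplicationController {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-release"}
	return &v1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-release"},
		Spec: v1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  "pod-release",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}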
May 7 14:13:26.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:13:26.877: INFO: namespace replication-controller-3991 deletion completed in 6.20014226s • [SLOW TEST:12.349 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:13:26.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:13:31.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7980" for this suite. May 7 14:14:09.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:14:09.176: INFO: namespace kubelet-test-7980 deletion completed in 38.118107656s • [SLOW TEST:42.299 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:14:09.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 7 14:14:13.888: INFO: Successfully updated pod "pod-update-346a18f6-3e24-4799-a3b8-7a21d66bcd2d" STEP: verifying the updated 
pod is in kubernetes May 7 14:14:13.896: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:14:13.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5348" for this suite. May 7 14:14:35.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:14:35.998: INFO: namespace pods-5348 deletion completed in 22.098588493s • [SLOW TEST:26.821 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:14:35.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 7 14:14:36.140: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a004d506-d6c1-45b1-93c9-f4f51f420ad0" in namespace "downward-api-5151" to be "success or failure" May 7 14:14:36.143: INFO: Pod "downwardapi-volume-a004d506-d6c1-45b1-93c9-f4f51f420ad0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.263901ms May 7 14:14:38.243: INFO: Pod "downwardapi-volume-a004d506-d6c1-45b1-93c9-f4f51f420ad0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103360477s May 7 14:14:40.247: INFO: Pod "downwardapi-volume-a004d506-d6c1-45b1-93c9-f4f51f420ad0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107629696s STEP: Saw pod success May 7 14:14:40.247: INFO: Pod "downwardapi-volume-a004d506-d6c1-45b1-93c9-f4f51f420ad0" satisfied condition "success or failure" May 7 14:14:40.251: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a004d506-d6c1-45b1-93c9-f4f51f420ad0 container client-container: STEP: delete the pod May 7 14:14:40.272: INFO: Waiting for pod downwardapi-volume-a004d506-d6c1-45b1-93c9-f4f51f420ad0 to disappear May 7 14:14:40.275: INFO: Pod downwardapi-volume-a004d506-d6c1-45b1-93c9-f4f51f420ad0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:14:40.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5151" for this suite. 
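The DefaultMode test above is the downward-API-volume analogue of the secret defaultMode case: one mode set on the volume applies to every projected file. A sketch with hypothetical paths:

package sketch

import v1 "k8s.io/api/core/v1"

// downwardAPIVolumeWithMode sketches a downward API volume whose files
// all carry mode 0400 via DefaultMode.
func downwardAPIVolumeWithMode() v1.Volume {
	defaultMode := int32(0400)
	return v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			DownwardAPI: &v1.DownwardAPIVolumeSource{
				DefaultMode: &defaultMode,
				Items: []v1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
}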
May 7 14:14:46.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:14:46.387: INFO: namespace downward-api-5151 deletion completed in 6.109409624s • [SLOW TEST:10.389 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:14:46.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:14:50.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9419" for this suite. May 7 14:14:56.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:14:56.816: INFO: namespace emptydir-wrapper-9419 deletion completed in 6.146209555s • [SLOW TEST:10.429 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:14:56.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2786.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2786.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2786.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2786.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2786.svc.cluster.local SRV)" && test -n "$$check" 
&& echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2786.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2786.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2786.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2786.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2786.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2786.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 207.48.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.48.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.48.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.48.207_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2786.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2786.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2786.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2786.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2786.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2786.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2786.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2786.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2786.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2786.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2786.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 207.48.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.48.207_udp@PTR;check="$$(dig +tcp +noall +answer +search 207.48.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.48.207_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 14:15:03.124: INFO: Unable to read wheezy_udp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:03.127: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:03.130: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:03.133: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:03.157: INFO: Unable to read jessie_udp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:03.160: INFO: Unable to read jessie_tcp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:03.163: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:03.165: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:03.180: INFO: Lookups using dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0 failed for: [wheezy_udp@dns-test-service.dns-2786.svc.cluster.local wheezy_tcp@dns-test-service.dns-2786.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local jessie_udp@dns-test-service.dns-2786.svc.cluster.local jessie_tcp@dns-test-service.dns-2786.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local] May 7 14:15:08.185: INFO: Unable to read wheezy_udp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:08.189: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods 
dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:08.192: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:08.195: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:08.219: INFO: Unable to read jessie_udp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:08.222: INFO: Unable to read jessie_tcp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:08.225: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:08.228: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:08.249: INFO: Lookups using dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0 failed for: [wheezy_udp@dns-test-service.dns-2786.svc.cluster.local wheezy_tcp@dns-test-service.dns-2786.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local jessie_udp@dns-test-service.dns-2786.svc.cluster.local jessie_tcp@dns-test-service.dns-2786.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local] May 7 14:15:13.186: INFO: Unable to read wheezy_udp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:13.191: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:13.194: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:13.198: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:13.221: INFO: Unable to read jessie_udp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could 
not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:13.223: INFO: Unable to read jessie_tcp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:13.226: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:13.228: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:13.244: INFO: Lookups using dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0 failed for: [wheezy_udp@dns-test-service.dns-2786.svc.cluster.local wheezy_tcp@dns-test-service.dns-2786.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local jessie_udp@dns-test-service.dns-2786.svc.cluster.local jessie_tcp@dns-test-service.dns-2786.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local] May 7 14:15:18.185: INFO: Unable to read wheezy_udp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:18.188: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:18.192: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:18.195: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:18.234: INFO: Unable to read jessie_udp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:18.237: INFO: Unable to read jessie_tcp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:18.239: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:18.242: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod 
dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:18.258: INFO: Lookups using dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0 failed for: [wheezy_udp@dns-test-service.dns-2786.svc.cluster.local wheezy_tcp@dns-test-service.dns-2786.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local jessie_udp@dns-test-service.dns-2786.svc.cluster.local jessie_tcp@dns-test-service.dns-2786.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local] May 7 14:15:23.185: INFO: Unable to read wheezy_udp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:23.188: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:23.192: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:23.195: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:23.219: INFO: Unable to read jessie_udp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:23.221: INFO: Unable to read jessie_tcp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:23.224: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:23.226: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:23.248: INFO: Lookups using dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0 failed for: [wheezy_udp@dns-test-service.dns-2786.svc.cluster.local wheezy_tcp@dns-test-service.dns-2786.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local jessie_udp@dns-test-service.dns-2786.svc.cluster.local jessie_tcp@dns-test-service.dns-2786.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local] May 7 14:15:28.207: INFO: 
Unable to read wheezy_udp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:28.210: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:28.213: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:28.215: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:28.234: INFO: Unable to read jessie_udp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:28.236: INFO: Unable to read jessie_tcp@dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:28.239: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:28.242: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local from pod dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0: the server could not find the requested resource (get pods dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0) May 7 14:15:28.280: INFO: Lookups using dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0 failed for: [wheezy_udp@dns-test-service.dns-2786.svc.cluster.local wheezy_tcp@dns-test-service.dns-2786.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local jessie_udp@dns-test-service.dns-2786.svc.cluster.local jessie_tcp@dns-test-service.dns-2786.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2786.svc.cluster.local] May 7 14:15:33.239: INFO: DNS probes using dns-2786/dns-test-5dab6ed7-1657-42d4-b48c-96746d37c4d0 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:15:34.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2786" for this suite. 
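The dig loops above probe A, SRV, and PTR records for both a regular and a headless service; the transient "could not find the requested resource" messages are the probers polling until CoreDNS has the records, after which the run reports success. For the headless half, the defining field is ClusterIP set to None, which makes DNS return the backing pod IPs directly. A sketch, with hypothetical selector and port:

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// headlessService sketches a headless service: no virtual IP, so the
// service DNS name resolves straight to the selected pods.
func headlessService() *v1.Service {
	return &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service"},
		Spec: v1.ServiceSpec{
			ClusterIP: v1.ClusterIPNone,
			Selector:  map[string]string{"dns-test": "true"},
			Ports: []v1.ServicePort{{
				Name: "http", Protocol: v1.ProtocolTCP, Port: 80,
			}},
		},
	}
}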
May 7 14:15:40.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:15:40.243: INFO: namespace dns-2786 deletion completed in 6.091272995s • [SLOW TEST:43.427 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:15:40.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 7 14:15:48.433: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 14:15:48.453: INFO: Pod pod-with-prestop-exec-hook still exists May 7 14:15:50.453: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 14:15:50.457: INFO: Pod pod-with-prestop-exec-hook still exists May 7 14:15:52.453: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 14:15:52.472: INFO: Pod pod-with-prestop-exec-hook still exists May 7 14:15:54.453: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 14:15:54.466: INFO: Pod pod-with-prestop-exec-hook still exists May 7 14:15:56.453: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 14:15:56.457: INFO: Pod pod-with-prestop-exec-hook still exists May 7 14:15:58.453: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 14:15:58.457: INFO: Pod pod-with-prestop-exec-hook still exists May 7 14:16:00.453: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 14:16:00.457: INFO: Pod pod-with-prestop-exec-hook still exists May 7 14:16:02.453: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 14:16:02.457: INFO: Pod pod-with-prestop-exec-hook still exists May 7 14:16:04.453: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 14:16:04.486: INFO: Pod pod-with-prestop-exec-hook still exists May 7 14:16:06.453: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 14:16:06.464: INFO: Pod pod-with-prestop-exec-hook still exists May 7 14:16:08.453: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 14:16:08.463: INFO: Pod pod-with-prestop-exec-hook still exists May 7 14:16:10.453: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 7 14:16:10.457: INFO: Pod pod-with-prestop-exec-hook still exists May 7 14:16:12.453: INFO: Waiting for pod pod-with-prestop-exec-hook to 
disappear May 7 14:16:12.463: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:16:12.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5678" for this suite. May 7 14:16:34.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:16:34.624: INFO: namespace container-lifecycle-hook-5678 deletion completed in 22.148740916s • [SLOW TEST:54.381 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:16:34.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 14:16:34.700: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 6.048784ms)
May 7 14:16:34.724: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 23.996707ms)
May 7 14:16:34.728: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.549646ms)
May 7 14:16:34.732: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.822483ms)
May 7 14:16:34.736: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.80302ms)
May 7 14:16:34.739: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.331549ms)
May 7 14:16:34.742: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.359733ms)
May 7 14:16:34.746: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.316289ms)
May 7 14:16:34.749: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.353756ms)
May 7 14:16:34.752: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.266211ms)
May 7 14:16:34.756: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.624767ms)
May 7 14:16:34.760: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.693315ms)
May 7 14:16:34.763: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.295078ms)
May 7 14:16:34.767: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.934456ms)
May 7 14:16:34.771: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.750987ms)
May 7 14:16:34.775: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.707667ms)
May 7 14:16:34.777: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.774982ms)
May 7 14:16:34.780: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.813175ms)
May 7 14:16:34.783: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.896148ms)
May 7 14:16:34.786: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 2.92548ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:16:34.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-831" for this suite. May 7 14:16:40.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:16:40.879: INFO: namespace proxy-831 deletion completed in 6.089084854s • [SLOW TEST:6.254 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:16:40.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2624 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 7 14:16:40.974: INFO: Found 0 stateful pods, waiting for 3 May 7 14:16:50.979: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 7 14:16:50.979: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 7 14:16:50.979: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 7 14:17:00.980: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 7 14:17:00.980: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 7 14:17:00.980: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 7 14:17:00.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2624 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 7 14:17:04.023: INFO: stderr: "I0507 14:17:03.864852 2944 log.go:172] (0xc00012adc0) (0xc00066c8c0) Create stream\nI0507 14:17:03.864896 2944 log.go:172] (0xc00012adc0) (0xc00066c8c0) Stream added, broadcasting: 1\nI0507 14:17:03.878197 2944 log.go:172] (0xc00012adc0) Reply frame received for 1\nI0507 14:17:03.878244 2944 log.go:172] (0xc00012adc0) (0xc0002f8000) Create stream\nI0507 14:17:03.878259 2944 log.go:172] (0xc00012adc0) 
(0xc0002f8000) Stream added, broadcasting: 3\nI0507 14:17:03.879025 2944 log.go:172] (0xc00012adc0) Reply frame received for 3\nI0507 14:17:03.879053 2944 log.go:172] (0xc00012adc0) (0xc0002f80a0) Create stream\nI0507 14:17:03.879068 2944 log.go:172] (0xc00012adc0) (0xc0002f80a0) Stream added, broadcasting: 5\nI0507 14:17:03.879768 2944 log.go:172] (0xc00012adc0) Reply frame received for 5\nI0507 14:17:03.976283 2944 log.go:172] (0xc00012adc0) Data frame received for 5\nI0507 14:17:03.976307 2944 log.go:172] (0xc0002f80a0) (5) Data frame handling\nI0507 14:17:03.976323 2944 log.go:172] (0xc0002f80a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0507 14:17:04.015645 2944 log.go:172] (0xc00012adc0) Data frame received for 3\nI0507 14:17:04.015684 2944 log.go:172] (0xc0002f8000) (3) Data frame handling\nI0507 14:17:04.015718 2944 log.go:172] (0xc0002f8000) (3) Data frame sent\nI0507 14:17:04.015736 2944 log.go:172] (0xc00012adc0) Data frame received for 3\nI0507 14:17:04.015754 2944 log.go:172] (0xc0002f8000) (3) Data frame handling\nI0507 14:17:04.015789 2944 log.go:172] (0xc00012adc0) Data frame received for 5\nI0507 14:17:04.015803 2944 log.go:172] (0xc0002f80a0) (5) Data frame handling\nI0507 14:17:04.017715 2944 log.go:172] (0xc00012adc0) Data frame received for 1\nI0507 14:17:04.017743 2944 log.go:172] (0xc00066c8c0) (1) Data frame handling\nI0507 14:17:04.017767 2944 log.go:172] (0xc00066c8c0) (1) Data frame sent\nI0507 14:17:04.017782 2944 log.go:172] (0xc00012adc0) (0xc00066c8c0) Stream removed, broadcasting: 1\nI0507 14:17:04.017806 2944 log.go:172] (0xc00012adc0) Go away received\nI0507 14:17:04.018139 2944 log.go:172] (0xc00012adc0) (0xc00066c8c0) Stream removed, broadcasting: 1\nI0507 14:17:04.018157 2944 log.go:172] (0xc00012adc0) (0xc0002f8000) Stream removed, broadcasting: 3\nI0507 14:17:04.018167 2944 log.go:172] (0xc00012adc0) (0xc0002f80a0) Stream removed, broadcasting: 5\n" May 7 14:17:04.024: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 7 14:17:04.024: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 7 14:17:14.090: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 7 14:17:24.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2624 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 7 14:17:24.330: INFO: stderr: "I0507 14:17:24.241021 2973 log.go:172] (0xc000a26420) (0xc0003d8820) Create stream\nI0507 14:17:24.241065 2973 log.go:172] (0xc000a26420) (0xc0003d8820) Stream added, broadcasting: 1\nI0507 14:17:24.243771 2973 log.go:172] (0xc000a26420) Reply frame received for 1\nI0507 14:17:24.243906 2973 log.go:172] (0xc000a26420) (0xc000a48000) Create stream\nI0507 14:17:24.243994 2973 log.go:172] (0xc000a26420) (0xc000a48000) Stream added, broadcasting: 3\nI0507 14:17:24.245733 2973 log.go:172] (0xc000a26420) Reply frame received for 3\nI0507 14:17:24.245777 2973 log.go:172] (0xc000a26420) (0xc000a480a0) Create stream\nI0507 14:17:24.245801 2973 log.go:172] (0xc000a26420) (0xc000a480a0) Stream added, broadcasting: 5\nI0507 14:17:24.246761 2973 log.go:172] (0xc000a26420) Reply frame received for 5\nI0507 14:17:24.322231 2973 log.go:172] (0xc000a26420) Data frame 
received for 5\nI0507 14:17:24.322284 2973 log.go:172] (0xc000a480a0) (5) Data frame handling\nI0507 14:17:24.322307 2973 log.go:172] (0xc000a480a0) (5) Data frame sent\nI0507 14:17:24.322322 2973 log.go:172] (0xc000a26420) Data frame received for 5\nI0507 14:17:24.322332 2973 log.go:172] (0xc000a480a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0507 14:17:24.322397 2973 log.go:172] (0xc000a26420) Data frame received for 3\nI0507 14:17:24.322445 2973 log.go:172] (0xc000a48000) (3) Data frame handling\nI0507 14:17:24.322466 2973 log.go:172] (0xc000a48000) (3) Data frame sent\nI0507 14:17:24.322483 2973 log.go:172] (0xc000a26420) Data frame received for 3\nI0507 14:17:24.322493 2973 log.go:172] (0xc000a48000) (3) Data frame handling\nI0507 14:17:24.324082 2973 log.go:172] (0xc000a26420) Data frame received for 1\nI0507 14:17:24.324094 2973 log.go:172] (0xc0003d8820) (1) Data frame handling\nI0507 14:17:24.324101 2973 log.go:172] (0xc0003d8820) (1) Data frame sent\nI0507 14:17:24.324112 2973 log.go:172] (0xc000a26420) (0xc0003d8820) Stream removed, broadcasting: 1\nI0507 14:17:24.324124 2973 log.go:172] (0xc000a26420) Go away received\nI0507 14:17:24.324504 2973 log.go:172] (0xc000a26420) (0xc0003d8820) Stream removed, broadcasting: 1\nI0507 14:17:24.324538 2973 log.go:172] (0xc000a26420) (0xc000a48000) Stream removed, broadcasting: 3\nI0507 14:17:24.324565 2973 log.go:172] (0xc000a26420) (0xc000a480a0) Stream removed, broadcasting: 5\n" May 7 14:17:24.330: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 7 14:17:24.330: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 7 14:17:34.351: INFO: Waiting for StatefulSet statefulset-2624/ss2 to complete update May 7 14:17:34.351: INFO: Waiting for Pod statefulset-2624/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 7 14:17:34.351: INFO: Waiting for Pod statefulset-2624/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 7 14:17:44.359: INFO: Waiting for StatefulSet statefulset-2624/ss2 to complete update May 7 14:17:44.359: INFO: Waiting for Pod statefulset-2624/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 7 14:17:54.357: INFO: Waiting for StatefulSet statefulset-2624/ss2 to complete update STEP: Rolling back to a previous revision May 7 14:18:04.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2624 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 7 14:18:04.666: INFO: stderr: "I0507 14:18:04.479246 2995 log.go:172] (0xc000118e70) (0xc00041e780) Create stream\nI0507 14:18:04.479337 2995 log.go:172] (0xc000118e70) (0xc00041e780) Stream added, broadcasting: 1\nI0507 14:18:04.482908 2995 log.go:172] (0xc000118e70) Reply frame received for 1\nI0507 14:18:04.482950 2995 log.go:172] (0xc000118e70) (0xc0001eb0e0) Create stream\nI0507 14:18:04.482967 2995 log.go:172] (0xc000118e70) (0xc0001eb0e0) Stream added, broadcasting: 3\nI0507 14:18:04.483817 2995 log.go:172] (0xc000118e70) Reply frame received for 3\nI0507 14:18:04.483860 2995 log.go:172] (0xc000118e70) (0xc00041e0a0) Create stream\nI0507 14:18:04.483875 2995 log.go:172] (0xc000118e70) (0xc00041e0a0) Stream added, broadcasting: 5\nI0507 14:18:04.484889 2995 log.go:172] (0xc000118e70) Reply frame received for 5\nI0507 14:18:04.605730 2995 log.go:172] (0xc000118e70) Data frame received for 
5\nI0507 14:18:04.605769 2995 log.go:172] (0xc00041e0a0) (5) Data frame handling\nI0507 14:18:04.605789 2995 log.go:172] (0xc00041e0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0507 14:18:04.657485 2995 log.go:172] (0xc000118e70) Data frame received for 3\nI0507 14:18:04.657519 2995 log.go:172] (0xc0001eb0e0) (3) Data frame handling\nI0507 14:18:04.657536 2995 log.go:172] (0xc0001eb0e0) (3) Data frame sent\nI0507 14:18:04.657770 2995 log.go:172] (0xc000118e70) Data frame received for 3\nI0507 14:18:04.657807 2995 log.go:172] (0xc0001eb0e0) (3) Data frame handling\nI0507 14:18:04.657930 2995 log.go:172] (0xc000118e70) Data frame received for 5\nI0507 14:18:04.657950 2995 log.go:172] (0xc00041e0a0) (5) Data frame handling\nI0507 14:18:04.660386 2995 log.go:172] (0xc000118e70) Data frame received for 1\nI0507 14:18:04.660410 2995 log.go:172] (0xc00041e780) (1) Data frame handling\nI0507 14:18:04.660448 2995 log.go:172] (0xc00041e780) (1) Data frame sent\nI0507 14:18:04.660488 2995 log.go:172] (0xc000118e70) (0xc00041e780) Stream removed, broadcasting: 1\nI0507 14:18:04.660624 2995 log.go:172] (0xc000118e70) Go away received\nI0507 14:18:04.660933 2995 log.go:172] (0xc000118e70) (0xc00041e780) Stream removed, broadcasting: 1\nI0507 14:18:04.660966 2995 log.go:172] (0xc000118e70) (0xc0001eb0e0) Stream removed, broadcasting: 3\nI0507 14:18:04.660985 2995 log.go:172] (0xc000118e70) (0xc00041e0a0) Stream removed, broadcasting: 5\n" May 7 14:18:04.666: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 7 14:18:04.666: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 7 14:18:14.725: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 7 14:18:24.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2624 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 7 14:18:24.980: INFO: stderr: "I0507 14:18:24.893818 3018 log.go:172] (0xc0008e82c0) (0xc0007b4640) Create stream\nI0507 14:18:24.893886 3018 log.go:172] (0xc0008e82c0) (0xc0007b4640) Stream added, broadcasting: 1\nI0507 14:18:24.895723 3018 log.go:172] (0xc0008e82c0) Reply frame received for 1\nI0507 14:18:24.895750 3018 log.go:172] (0xc0008e82c0) (0xc00058a000) Create stream\nI0507 14:18:24.895757 3018 log.go:172] (0xc0008e82c0) (0xc00058a000) Stream added, broadcasting: 3\nI0507 14:18:24.896324 3018 log.go:172] (0xc0008e82c0) Reply frame received for 3\nI0507 14:18:24.896353 3018 log.go:172] (0xc0008e82c0) (0xc00058a0a0) Create stream\nI0507 14:18:24.896362 3018 log.go:172] (0xc0008e82c0) (0xc00058a0a0) Stream added, broadcasting: 5\nI0507 14:18:24.897010 3018 log.go:172] (0xc0008e82c0) Reply frame received for 5\nI0507 14:18:24.974109 3018 log.go:172] (0xc0008e82c0) Data frame received for 5\nI0507 14:18:24.974165 3018 log.go:172] (0xc00058a0a0) (5) Data frame handling\nI0507 14:18:24.974189 3018 log.go:172] (0xc00058a0a0) (5) Data frame sent\nI0507 14:18:24.974208 3018 log.go:172] (0xc0008e82c0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0507 14:18:24.974233 3018 log.go:172] (0xc0008e82c0) Data frame received for 3\nI0507 14:18:24.974261 3018 log.go:172] (0xc00058a000) (3) Data frame handling\nI0507 14:18:24.974268 3018 log.go:172] (0xc00058a000) (3) Data frame sent\nI0507 14:18:24.974275 3018 log.go:172] (0xc0008e82c0) Data frame received for 3\nI0507 
14:18:24.974283 3018 log.go:172] (0xc00058a000) (3) Data frame handling\nI0507 14:18:24.974292 3018 log.go:172] (0xc00058a0a0) (5) Data frame handling\nI0507 14:18:24.975201 3018 log.go:172] (0xc0008e82c0) Data frame received for 1\nI0507 14:18:24.975212 3018 log.go:172] (0xc0007b4640) (1) Data frame handling\nI0507 14:18:24.975218 3018 log.go:172] (0xc0007b4640) (1) Data frame sent\nI0507 14:18:24.975225 3018 log.go:172] (0xc0008e82c0) (0xc0007b4640) Stream removed, broadcasting: 1\nI0507 14:18:24.975274 3018 log.go:172] (0xc0008e82c0) Go away received\nI0507 14:18:24.975514 3018 log.go:172] (0xc0008e82c0) (0xc0007b4640) Stream removed, broadcasting: 1\nI0507 14:18:24.975529 3018 log.go:172] (0xc0008e82c0) (0xc00058a000) Stream removed, broadcasting: 3\nI0507 14:18:24.975537 3018 log.go:172] (0xc0008e82c0) (0xc00058a0a0) Stream removed, broadcasting: 5\n" May 7 14:18:24.980: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 7 14:18:24.980: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 7 14:18:54.996: INFO: Deleting all statefulset in ns statefulset-2624 May 7 14:18:54.999: INFO: Scaling statefulset ss2 to 0 May 7 14:19:25.055: INFO: Waiting for statefulset status.replicas updated to 0 May 7 14:19:25.057: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:19:25.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2624" for this suite. May 7 14:19:33.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:19:33.163: INFO: namespace statefulset-2624 deletion completed in 8.084305485s • [SLOW TEST:172.284 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:19:33.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 7 14:19:33.231: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 7 14:19:33.252: INFO: Waiting for terminating namespaces to be deleted... 
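(On the StatefulSet spec just concluded: the rolling update boils down to patching the pod template image and letting the controller replace pods from the highest ordinal down, ss2-2 then ss2-1 then ss2-0; the roll back is the same edit with the previous image, and each template change mints a controller revision such as the ss2-6c5cd755cd/ss2-7c9b54fd4c pair in the log. A minimal client-go sketch of that trigger, assuming the 1.15-era client-go this suite is built against; newer releases add a context.Context argument and typed options to Get/Update.)

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from the same kubeconfig the suite uses.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Bump the pod template image; the StatefulSet controller then
    // deletes and recreates pods from the highest ordinal down,
    // which is the "reverse ordinal order" the log reports.
    ss, err := cs.AppsV1().StatefulSets("statefulset-2624").Get("ss2", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.15-alpine"
    if _, err := cs.AppsV1().StatefulSets("statefulset-2624").Update(ss); err != nil {
        panic(err)
    }
    fmt.Println("rollout triggered; rolling back repeats this with the previous image")
}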
May 7 14:19:33.255: INFO: Logging pods the kubelet thinks are on node iruya-worker before test May 7 14:19:33.270: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 7 14:19:33.271: INFO: Container kube-proxy ready: true, restart count 0 May 7 14:19:33.271: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 7 14:19:33.271: INFO: Container kindnet-cni ready: true, restart count 0 May 7 14:19:33.271: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test May 7 14:19:33.277: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container status recorded) May 7 14:19:33.277: INFO: Container coredns ready: true, restart count 0 May 7 14:19:33.277: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container status recorded) May 7 14:19:33.278: INFO: Container coredns ready: true, restart count 0 May 7 14:19:33.278: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container status recorded) May 7 14:19:33.278: INFO: Container kube-proxy ready: true, restart count 0 May 7 14:19:33.278: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container status recorded) May 7 14:19:33.278: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-252fff53-f8c6-471d-ab5a-63d2478e5e94 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-252fff53-f8c6-471d-ab5a-63d2478e5e94 off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-252fff53-f8c6-471d-ab5a-63d2478e5e94 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:19:41.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3209" for this suite. 
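(The NodeSelector spec above labels one node and relaunches a pod whose selector matches. A sketch of such a pod, with a hypothetical label key standing in for the random kubernetes.io/e2e-... key; the value 42 and the labeling flow mirror the STEP lines above.)

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/yaml"
)

func main() {
    // Hypothetical stand-in for the random label the test applies
    // to the chosen node before relaunching the pod.
    sel := map[string]string{"kubernetes.io/e2e-example": "42"}

    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
        Spec: corev1.PodSpec{
            // The scheduler places the pod only on a node whose labels
            // contain every NodeSelector entry, so after labeling
            // iruya-worker this pod must land there.
            NodeSelector: sel,
            Containers: []corev1.Container{{
                Name:  "with-labels",
                Image: "docker.io/library/nginx:1.14-alpine",
            }},
        },
    }
    out, err := yaml.Marshal(pod)
    if err != nil {
        panic(err)
    }
    fmt.Print(string(out))
}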
May 7 14:19:55.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:19:55.598: INFO: namespace sched-pred-3209 deletion completed in 14.121549692s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:22.435 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:19:55.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-1efd9ffd-6af5-47a9-b0bd-b896b9a2b0d0 STEP: Creating a pod to test consume configMaps May 7 14:19:55.681: INFO: Waiting up to 5m0s for pod "pod-configmaps-ac9bce0f-8b32-4129-9f93-f226f19aaa76" in namespace "configmap-5574" to be "success or failure" May 7 14:19:55.745: INFO: Pod "pod-configmaps-ac9bce0f-8b32-4129-9f93-f226f19aaa76": Phase="Pending", Reason="", readiness=false. Elapsed: 63.619839ms May 7 14:19:57.823: INFO: Pod "pod-configmaps-ac9bce0f-8b32-4129-9f93-f226f19aaa76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141304091s May 7 14:19:59.847: INFO: Pod "pod-configmaps-ac9bce0f-8b32-4129-9f93-f226f19aaa76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.165097426s STEP: Saw pod success May 7 14:19:59.847: INFO: Pod "pod-configmaps-ac9bce0f-8b32-4129-9f93-f226f19aaa76" satisfied condition "success or failure" May 7 14:19:59.859: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-ac9bce0f-8b32-4129-9f93-f226f19aaa76 container configmap-volume-test: STEP: delete the pod May 7 14:19:59.911: INFO: Waiting for pod pod-configmaps-ac9bce0f-8b32-4129-9f93-f226f19aaa76 to disappear May 7 14:19:59.925: INFO: Pod pod-configmaps-ac9bce0f-8b32-4129-9f93-f226f19aaa76 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:19:59.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5574" for this suite. 
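(Sketch of the pod shape the ConfigMap spec above exercises: the volume's items field remaps a ConfigMap key to a chosen file path, which is the "with mappings" part, and a pod-level runAsUser makes the consumer non-root. Key, path, and UID below are illustrative assumptions, not values from the test.)

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/yaml"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
        Spec: corev1.PodSpec{
            // "as non-root": run the containers with a non-zero UID.
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
                        // "with mappings": Items remaps key -> file path
                        // instead of projecting every key by its name.
                        Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "configmap-volume-test",
                Image:        "docker.io/library/busybox:1.29",
                Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        },
    }
    out, err := yaml.Marshal(pod)
    if err != nil {
        panic(err)
    }
    fmt.Print(string(out))
}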
May 7 14:20:05.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:20:06.063: INFO: namespace configmap-5574 deletion completed in 6.13367413s • [SLOW TEST:10.464 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:20:06.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-1ae8083c-743b-40d9-845c-f70216e9dc8d STEP: Creating a pod to test consume configMaps May 7 14:20:06.203: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-863302e0-1560-4a7d-82f4-5a1f35536b5a" in namespace "projected-7492" to be "success or failure" May 7 14:20:06.212: INFO: Pod "pod-projected-configmaps-863302e0-1560-4a7d-82f4-5a1f35536b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.694595ms May 7 14:20:08.216: INFO: Pod "pod-projected-configmaps-863302e0-1560-4a7d-82f4-5a1f35536b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012995836s May 7 14:20:10.686: INFO: Pod "pod-projected-configmaps-863302e0-1560-4a7d-82f4-5a1f35536b5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.482635371s STEP: Saw pod success May 7 14:20:10.686: INFO: Pod "pod-projected-configmaps-863302e0-1560-4a7d-82f4-5a1f35536b5a" satisfied condition "success or failure" May 7 14:20:10.689: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-863302e0-1560-4a7d-82f4-5a1f35536b5a container projected-configmap-volume-test: STEP: delete the pod May 7 14:20:11.138: INFO: Waiting for pod pod-projected-configmaps-863302e0-1560-4a7d-82f4-5a1f35536b5a to disappear May 7 14:20:11.152: INFO: Pod pod-projected-configmaps-863302e0-1560-4a7d-82f4-5a1f35536b5a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:20:11.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7492" for this suite. 
May 7 14:20:17.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:20:17.260: INFO: namespace projected-7492 deletion completed in 6.103780066s • [SLOW TEST:11.196 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:20:17.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 7 14:20:17.448: INFO: Waiting up to 5m0s for pod "downward-api-551b1111-9c18-4c7d-9f18-550e3cf1abdd" in namespace "downward-api-8115" to be "success or failure" May 7 14:20:17.452: INFO: Pod "downward-api-551b1111-9c18-4c7d-9f18-550e3cf1abdd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.77133ms May 7 14:20:19.456: INFO: Pod "downward-api-551b1111-9c18-4c7d-9f18-550e3cf1abdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008112201s May 7 14:20:21.461: INFO: Pod "downward-api-551b1111-9c18-4c7d-9f18-550e3cf1abdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012803499s STEP: Saw pod success May 7 14:20:21.461: INFO: Pod "downward-api-551b1111-9c18-4c7d-9f18-550e3cf1abdd" satisfied condition "success or failure" May 7 14:20:21.464: INFO: Trying to get logs from node iruya-worker2 pod downward-api-551b1111-9c18-4c7d-9f18-550e3cf1abdd container dapi-container: STEP: delete the pod May 7 14:20:21.682: INFO: Waiting for pod downward-api-551b1111-9c18-4c7d-9f18-550e3cf1abdd to disappear May 7 14:20:21.704: INFO: Pod downward-api-551b1111-9c18-4c7d-9f18-550e3cf1abdd no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:20:21.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8115" for this suite. 
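(The downward API hands pod metadata to containers as env vars the kubelet resolves at container start. A minimal sketch of the shape this spec tests; the env var name POD_UID and the command are assumptions, while the fieldPath metadata.uid is the field under test.)

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/yaml"
)

func main() {
    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{{
                    // Resolved by the kubelet when the container starts,
                    // so POD_UID carries the pod's metadata.uid.
                    Name: "POD_UID",
                    ValueFrom: &corev1.EnvVarSource{
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
                    },
                }},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        },
    }
    out, err := yaml.Marshal(pod)
    if err != nil {
        panic(err)
    }
    fmt.Print(string(out))
}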
May 7 14:20:27.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:20:27.823: INFO: namespace downward-api-8115 deletion completed in 6.116027068s • [SLOW TEST:10.563 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:20:27.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 14:20:27.872: INFO: Creating deployment "test-recreate-deployment" May 7 14:20:27.890: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 7 14:20:27.959: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 7 14:20:29.966: INFO: Waiting deployment "test-recreate-deployment" to complete May 7 14:20:29.968: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724458028, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724458028, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724458028, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724458027, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 14:20:31.972: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 7 14:20:31.980: INFO: Updating deployment test-recreate-deployment May 7 14:20:31.980: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 7 14:20:32.273: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-5542,SelfLink:/apis/apps/v1/namespaces/deployment-5542/deployments/test-recreate-deployment,UID:f413ec46-ccb1-45f2-aff0-837eedbc1108,ResourceVersion:9545931,Generation:2,CreationTimestamp:2020-05-07 14:20:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-07 14:20:32 +0000 UTC 2020-05-07 14:20:32 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-07 14:20:32 +0000 UTC 2020-05-07 14:20:27 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 7 14:20:32.276: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-5542,SelfLink:/apis/apps/v1/namespaces/deployment-5542/replicasets/test-recreate-deployment-5c8c9cc69d,UID:bf66ec51-cdc9-4cd9-a1a3-41172c48be1e,ResourceVersion:9545929,Generation:1,CreationTimestamp:2020-05-07 14:20:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment f413ec46-ccb1-45f2-aff0-837eedbc1108 0xc00332f1f7 0xc00332f1f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 7 14:20:32.276: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 7 14:20:32.276: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-5542,SelfLink:/apis/apps/v1/namespaces/deployment-5542/replicasets/test-recreate-deployment-6df85df6b9,UID:e73bc2cd-cf21-4631-bda9-ee07e773e5dd,ResourceVersion:9545919,Generation:2,CreationTimestamp:2020-05-07 14:20:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment f413ec46-ccb1-45f2-aff0-837eedbc1108 0xc00332f2c7 0xc00332f2c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 7 14:20:32.561: INFO: Pod "test-recreate-deployment-5c8c9cc69d-6258w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-6258w,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-5542,SelfLink:/api/v1/namespaces/deployment-5542/pods/test-recreate-deployment-5c8c9cc69d-6258w,UID:4e1cc3d5-550d-4ac9-9351-03e89fff2adc,ResourceVersion:9545933,Generation:0,CreationTimestamp:2020-05-07 14:20:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d bf66ec51-cdc9-4cd9-a1a3-41172c48be1e 0xc00332fbb7 0xc00332fbb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9qdpr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9qdpr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9qdpr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00332fc30} {node.kubernetes.io/unreachable Exists NoExecute 0xc00332fc50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 14:20:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 14:20:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-07 14:20:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 14:20:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-07 14:20:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:20:32.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5542" for this suite. 
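(What the Recreate strategy guarantees is visible in the dumps above: the old ReplicaSet test-recreate-deployment-6df85df6b9 is scaled to 0 before the new test-recreate-deployment-5c8c9cc69d brings up any pod, so old and new pods never overlap, unlike the default RollingUpdate. It comes down to one field; a sketch with the labels and image from the dump:)

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/yaml"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    labels := map[string]string{"name": "sample-pod-3"}
    dep := appsv1.Deployment{
        TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
        ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: int32Ptr(1),
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            // Recreate scales the old ReplicaSet to zero before the new
            // one creates pods; downtime in exchange for no overlap.
            Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
    out, err := yaml.Marshal(dep)
    if err != nil {
        panic(err)
    }
    fmt.Print(string(out))
}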
May 7 14:20:38.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:20:38.696: INFO: namespace deployment-5542 deletion completed in 6.130364034s • [SLOW TEST:10.872 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:20:38.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-a5df0038-7f8b-499f-bbda-2e0b9026587b in namespace container-probe-4198 May 7 14:20:42.834: INFO: Started pod busybox-a5df0038-7f8b-499f-bbda-2e0b9026587b in namespace container-probe-4198 STEP: checking the pod's current state and verifying that restartCount is present May 7 14:20:42.836: INFO: Initial restart count of pod busybox-a5df0038-7f8b-499f-bbda-2e0b9026587b is 0 May 7 14:21:32.941: INFO: Restart count of pod container-probe-4198/busybox-a5df0038-7f8b-499f-bbda-2e0b9026587b is now 1 (50.105105609s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:21:32.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4198" for this suite. 
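(Sketch of a pod reproducing the restart above: the container creates /tmp/health, deletes it after 10s, and the exec probe then fails until the kubelet restarts the container, matching the restart count going 0 to 1 roughly 50s in. The shell script and probe timings follow the usual busybox pattern and are assumptions; only the probe command "cat /tmp/health" comes from the spec name.)

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/yaml"
)

func main() {
    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness-example"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "busybox",
                Image: "docker.io/library/busybox:1.29",
                // The file exists for the first 10s, then disappears,
                // so the probe starts failing and the kubelet restarts
                // the container (restartCount 0 -> 1, as logged).
                Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"},
                LivenessProbe: &corev1.Probe{
                    // The embedded field is named Handler in the
                    // 1.15-era API; current k8s.io/api renamed it
                    // ProbeHandler.
                    Handler: corev1.Handler{
                        Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                    },
                    InitialDelaySeconds: 15,
                    FailureThreshold:    1,
                },
            }},
        },
    }
    out, err := yaml.Marshal(pod)
    if err != nil {
        panic(err)
    }
    fmt.Print(string(out))
}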
May 7 14:21:39.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:21:39.099: INFO: namespace container-probe-4198 deletion completed in 6.111332765s • [SLOW TEST:60.402 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:21:39.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 7 14:21:39.234: INFO: Waiting up to 5m0s for pod "downward-api-1f2db609-6ba1-4371-a620-0f623c294b88" in namespace "downward-api-28" to be "success or failure" May 7 14:21:39.238: INFO: Pod "downward-api-1f2db609-6ba1-4371-a620-0f623c294b88": Phase="Pending", Reason="", readiness=false. Elapsed: 3.263006ms May 7 14:21:41.241: INFO: Pod "downward-api-1f2db609-6ba1-4371-a620-0f623c294b88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006140877s May 7 14:21:43.245: INFO: Pod "downward-api-1f2db609-6ba1-4371-a620-0f623c294b88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01081786s STEP: Saw pod success May 7 14:21:43.245: INFO: Pod "downward-api-1f2db609-6ba1-4371-a620-0f623c294b88" satisfied condition "success or failure" May 7 14:21:43.248: INFO: Trying to get logs from node iruya-worker2 pod downward-api-1f2db609-6ba1-4371-a620-0f623c294b88 container dapi-container: STEP: delete the pod May 7 14:21:43.284: INFO: Waiting for pod downward-api-1f2db609-6ba1-4371-a620-0f623c294b88 to disappear May 7 14:21:43.292: INFO: Pod downward-api-1f2db609-6ba1-4371-a620-0f623c294b88 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:21:43.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-28" for this suite. 
May 7 14:21:49.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:21:49.391: INFO: namespace downward-api-28 deletion completed in 6.096056505s • [SLOW TEST:10.292 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:21:49.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 7 14:21:49.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-801' May 7 14:21:49.573: INFO: stderr: "" May 7 14:21:49.573: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 May 7 14:21:49.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-801' May 7 14:21:53.359: INFO: stderr: "" May 7 14:21:53.359: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:21:53.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-801" for this suite. 
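(The same invocation driven from Go, flags copied verbatim from the log. With --restart=Never, kubectl run creates a bare Pod rather than a Deployment (Always) or a Job (OnFailure); the explicit --generator=run-pod/v1 belongs to the 1.15-era kubectl and was removed in later releases, which always create a Pod.)

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Mirrors the framework's kubectl invocation logged above.
    out, err := exec.Command("kubectl",
        "--kubeconfig=/root/.kube/config",
        "run", "e2e-test-nginx-pod",
        "--restart=Never",
        "--generator=run-pod/v1",
        "--image=docker.io/library/nginx:1.14-alpine",
        "--namespace=kubectl-801",
    ).CombinedOutput()
    fmt.Printf("%s err=%v\n", out, err)
}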
May 7 14:21:59.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:21:59.455: INFO: namespace kubectl-801 deletion completed in 6.091993607s • [SLOW TEST:10.063 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:21:59.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-66b126c1-5ef1-4160-b0ba-1f19d8c645de May 7 14:21:59.566: INFO: Pod name my-hostname-basic-66b126c1-5ef1-4160-b0ba-1f19d8c645de: Found 0 pods out of 1 May 7 14:22:04.572: INFO: Pod name my-hostname-basic-66b126c1-5ef1-4160-b0ba-1f19d8c645de: Found 1 pods out of 1 May 7 14:22:04.572: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-66b126c1-5ef1-4160-b0ba-1f19d8c645de" are running May 7 14:22:04.575: INFO: Pod "my-hostname-basic-66b126c1-5ef1-4160-b0ba-1f19d8c645de-bkkm5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 14:21:59 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 14:22:02 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 14:22:02 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 14:21:59 +0000 UTC Reason: Message:}]) May 7 14:22:04.575: INFO: Trying to dial the pod May 7 14:22:09.588: INFO: Controller my-hostname-basic-66b126c1-5ef1-4160-b0ba-1f19d8c645de: Got expected result from replica 1 [my-hostname-basic-66b126c1-5ef1-4160-b0ba-1f19d8c645de-bkkm5]: "my-hostname-basic-66b126c1-5ef1-4160-b0ba-1f19d8c645de-bkkm5", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:22:09.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6574" for this suite. 
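The ReplicationController spec above creates a single-replica controller from a public serve-hostname image and dials each replica until it answers with its own pod name. A sketch of an equivalent controller, under the same client-go assumptions as the earlier example; the image tag and all names are illustrative:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Bootstrap as in the earlier sketch; error handling elided for brevity.
        config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        clientset, _ := kubernetes.NewForConfig(config)

        replicas := int32(1)
        rc := &v1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
            Spec: v1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: map[string]string{"name": "my-hostname-basic"},
                Template: &v1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{
                        Labels: map[string]string{"name": "my-hostname-basic"},
                    },
                    Spec: v1.PodSpec{
                        Containers: []v1.Container{{
                            // serve-hostname replies with the pod's name, which is
                            // what the "Got expected result from replica 1" check reads.
                            Name:  "serve-hostname",
                            Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
                            Ports: []v1.ContainerPort{{ContainerPort: 9376}},
                        }},
                    },
                },
            },
        }
        if _, err := clientset.CoreV1().ReplicationControllers("default").Create(rc); err != nil {
            panic(err)
        }
    }

The selector and the template labels must match or the API server rejects the controller; the "Found 1 pods out of 1" step above is simply polling pods by that selector.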
May 7 14:22:15.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:22:15.723: INFO: namespace replication-controller-6574 deletion completed in 6.131791469s • [SLOW TEST:16.267 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:22:15.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command May 7 14:22:15.876: INFO: Waiting up to 5m0s for pod "client-containers-a37dfb5d-7f60-40e7-8835-c7cff9c03ca5" in namespace "containers-4071" to be "success or failure" May 7 14:22:15.879: INFO: Pod "client-containers-a37dfb5d-7f60-40e7-8835-c7cff9c03ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.113132ms May 7 14:22:17.883: INFO: Pod "client-containers-a37dfb5d-7f60-40e7-8835-c7cff9c03ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007510396s May 7 14:22:19.888: INFO: Pod "client-containers-a37dfb5d-7f60-40e7-8835-c7cff9c03ca5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011815806s STEP: Saw pod success May 7 14:22:19.888: INFO: Pod "client-containers-a37dfb5d-7f60-40e7-8835-c7cff9c03ca5" satisfied condition "success or failure" May 7 14:22:19.891: INFO: Trying to get logs from node iruya-worker pod client-containers-a37dfb5d-7f60-40e7-8835-c7cff9c03ca5 container test-container: STEP: delete the pod May 7 14:22:19.963: INFO: Waiting for pod client-containers-a37dfb5d-7f60-40e7-8835-c7cff9c03ca5 to disappear May 7 14:22:19.966: INFO: Pod client-containers-a37dfb5d-7f60-40e7-8835-c7cff9c03ca5 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:22:19.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4071" for this suite. 
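The Docker Containers spec above verifies that a pod-level Command replaces the image's default ENTRYPOINT. A minimal sketch under the same client-go assumptions; the busybox image and names are illustrative:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // as before
        clientset, _ := kubernetes.NewForConfig(config)

        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "entrypoint-override-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:  "test-container",
                    Image: "docker.io/library/busybox:1.29",
                    // Command maps to the Docker ENTRYPOINT and replaces it;
                    // Args would map to CMD instead.
                    Command: []string{"/bin/echo", "entrypoint overridden"},
                }},
            },
        }
        if _, err := clientset.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }

Setting only Command also drops the image's CMD, so the container runs exactly the given argv.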
May 7 14:22:25.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:22:26.061: INFO: namespace containers-4071 deletion completed in 6.091832114s • [SLOW TEST:10.338 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:22:26.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-7f4b7cd3-e8be-4cbf-abcd-a4bce4e7824e [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:22:26.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9083" for this suite. May 7 14:22:32.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:22:32.286: INFO: namespace secrets-9083 deletion completed in 6.099832043s • [SLOW TEST:6.224 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:22:32.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
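Returning to the Secrets spec that completed just above: it verifies that a Secret whose data map contains an empty key is rejected by API-server validation, so nothing beyond the namespace needs cleanup. A sketch of the failing call, same client-go assumptions, names illustrative:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // as before
        clientset, _ := kubernetes.NewForConfig(config)

        secret := &v1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-demo"},
            // An empty map key fails API-server validation, so Create returns
            // an error and no Secret object is ever stored.
            Data: map[string][]byte{"": []byte("value-1")},
        }
        _, err := clientset.CoreV1().Secrets("default").Create(secret)
        fmt.Println("expected validation error:", err)
    }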
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 7 14:22:42.415: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 14:22:42.427: INFO: Pod pod-with-poststart-exec-hook still exists May 7 14:22:44.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 14:22:44.431: INFO: Pod pod-with-poststart-exec-hook still exists May 7 14:22:46.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 14:22:46.431: INFO: Pod pod-with-poststart-exec-hook still exists May 7 14:22:48.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 14:22:48.432: INFO: Pod pod-with-poststart-exec-hook still exists May 7 14:22:50.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 14:22:50.432: INFO: Pod pod-with-poststart-exec-hook still exists May 7 14:22:52.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 14:22:52.431: INFO: Pod pod-with-poststart-exec-hook still exists May 7 14:22:54.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 14:22:54.431: INFO: Pod pod-with-poststart-exec-hook still exists May 7 14:22:56.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 14:22:56.432: INFO: Pod pod-with-poststart-exec-hook still exists May 7 14:22:58.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 14:22:58.432: INFO: Pod pod-with-poststart-exec-hook still exists May 7 14:23:00.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 14:23:00.432: INFO: Pod pod-with-poststart-exec-hook still exists May 7 14:23:02.427: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 7 14:23:02.432: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:23:02.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3901" for this suite. 
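The lifecycle-hook spec above attaches a postStart exec handler to a container, verifies its side effect, and then deletes the pod; the long disappear-poll above is the graceful deletion running down. A minimal sketch of such a pod under the same client-go assumptions; note that the handler type is v1.Handler in this API generation (renamed LifecycleHandler in much later releases). Names and image are illustrative:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // as before
        clientset, _ := kubernetes.NewForConfig(config)

        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:    "hooked",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "sleep 600"},
                    Lifecycle: &v1.Lifecycle{
                        // Runs inside the container immediately after it starts;
                        // the test's "check poststart hook" step looks for the
                        // hook's side effect.
                        PostStart: &v1.Handler{
                            Exec: &v1.ExecAction{
                                Command: []string{"sh", "-c", "echo started > /tmp/poststart"},
                            },
                        },
                    },
                }},
            },
        }
        if _, err := clientset.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }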
May 7 14:23:24.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:23:24.518: INFO: namespace container-lifecycle-hook-3901 deletion completed in 22.081631345s • [SLOW TEST:52.232 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:23:24.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-3318817e-ff3f-4036-ada8-709dc46d3e29 in namespace container-probe-3641 May 7 14:23:28.602: INFO: Started pod liveness-3318817e-ff3f-4036-ada8-709dc46d3e29 in namespace container-probe-3641 STEP: checking the pod's current state and verifying that restartCount is present May 7 14:23:28.606: INFO: Initial restart count of pod liveness-3318817e-ff3f-4036-ada8-709dc46d3e29 is 0 May 7 14:23:44.643: INFO: Restart count of pod container-probe-3641/liveness-3318817e-ff3f-4036-ada8-709dc46d3e29 is now 1 (16.037036029s elapsed) May 7 14:24:04.686: INFO: Restart count of pod container-probe-3641/liveness-3318817e-ff3f-4036-ada8-709dc46d3e29 is now 2 (36.079873821s elapsed) May 7 14:24:24.729: INFO: Restart count of pod container-probe-3641/liveness-3318817e-ff3f-4036-ada8-709dc46d3e29 is now 3 (56.123203814s elapsed) May 7 14:24:44.775: INFO: Restart count of pod container-probe-3641/liveness-3318817e-ff3f-4036-ada8-709dc46d3e29 is now 4 (1m16.169445979s elapsed) May 7 14:25:44.906: INFO: Restart count of pod container-probe-3641/liveness-3318817e-ff3f-4036-ada8-709dc46d3e29 is now 5 (2m16.300366649s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:25:44.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3641" for this suite. 
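The probing spec above runs a container whose exec liveness probe always fails, then watches restartCount climb (1, 2, 3, ... with kubelet back-off stretching the gaps, as the elapsed times show). A minimal sketch of such a pod under the same client-go assumptions; names, image, and probe timings are illustrative:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // as before
        clientset, _ := kubernetes.NewForConfig(config)

        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:    "liveness",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "sleep 600"},
                    // /tmp/health never exists, so every probe fails and the
                    // kubelet restarts the container; restartCount only ever
                    // increases, which is the property under test.
                    LivenessProbe: &v1.Probe{
                        Handler: v1.Handler{ // Probe embeds Handler in this API version
                            Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                        },
                        InitialDelaySeconds: 5,
                        PeriodSeconds:       5,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
        if _, err := clientset.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }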
May 7 14:25:50.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:25:51.006: INFO: namespace container-probe-3641 deletion completed in 6.080286373s • [SLOW TEST:146.488 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:25:51.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-e39a7111-9936-4b39-9a41-41b78f08ae17 STEP: Creating a pod to test consume configMaps May 7 14:25:51.076: INFO: Waiting up to 5m0s for pod "pod-configmaps-49967e68-7193-419d-ab85-55943c95a592" in namespace "configmap-3972" to be "success or failure" May 7 14:25:51.080: INFO: Pod "pod-configmaps-49967e68-7193-419d-ab85-55943c95a592": Phase="Pending", Reason="", readiness=false. Elapsed: 3.80644ms May 7 14:25:53.084: INFO: Pod "pod-configmaps-49967e68-7193-419d-ab85-55943c95a592": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008079769s May 7 14:25:55.088: INFO: Pod "pod-configmaps-49967e68-7193-419d-ab85-55943c95a592": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01165337s STEP: Saw pod success May 7 14:25:55.088: INFO: Pod "pod-configmaps-49967e68-7193-419d-ab85-55943c95a592" satisfied condition "success or failure" May 7 14:25:55.091: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-49967e68-7193-419d-ab85-55943c95a592 container configmap-volume-test: STEP: delete the pod May 7 14:25:55.663: INFO: Waiting for pod pod-configmaps-49967e68-7193-419d-ab85-55943c95a592 to disappear May 7 14:25:55.705: INFO: Pod pod-configmaps-49967e68-7193-419d-ab85-55943c95a592 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:25:55.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3972" for this suite. 
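The ConfigMap spec above mounts one configmap into the same pod twice and checks that both mounts serve the same data. A sketch under the same client-go assumptions; names, mount paths, and image are illustrative:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // as before
        clientset, _ := kubernetes.NewForConfig(config)

        cm := &v1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-demo"},
            Data:       map[string]string{"data-1": "value-1"},
        }
        if _, err := clientset.CoreV1().ConfigMaps("default").Create(cm); err != nil {
            panic(err)
        }

        cmSource := v1.VolumeSource{ConfigMap: &v1.ConfigMapVolumeSource{
            LocalObjectReference: v1.LocalObjectReference{Name: "configmap-demo"},
        }}
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                // Two volumes backed by the same configmap.
                Volumes: []v1.Volume{
                    {Name: "cm-vol-1", VolumeSource: cmSource},
                    {Name: "cm-vol-2", VolumeSource: cmSource},
                },
                Containers: []v1.Container{{
                    Name:    "configmap-volume-test",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"},
                    VolumeMounts: []v1.VolumeMount{
                        {Name: "cm-vol-1", MountPath: "/etc/cm-1"},
                        {Name: "cm-vol-2", MountPath: "/etc/cm-2"},
                    },
                }},
            },
        }
        if _, err := clientset.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }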
May 7 14:26:01.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:26:01.843: INFO: namespace configmap-3972 deletion completed in 6.134807125s • [SLOW TEST:10.837 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:26:01.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-7ced0abf-d4b8-41b2-8a93-00f67c9b0d33 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-7ced0abf-d4b8-41b2-8a93-00f67c9b0d33 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:27:26.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4943" for this suite. 
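The update spec above mutates a configmap that is already mounted as a volume and waits for the kubelet to refresh the file contents. A sketch of the update call itself, same client-go assumptions, names illustrative:

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // as before
        clientset, _ := kubernetes.NewForConfig(config)

        // Fetch the live object, mutate it, and write it back.
        cm, err := clientset.CoreV1().ConfigMaps("default").Get("configmap-demo", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        cm.Data["data-1"] = "value-2"
        if _, err := clientset.CoreV1().ConfigMaps("default").Update(cm); err != nil {
            panic(err)
        }
    }

The volume refresh is driven by the kubelet's periodic sync, so the "waiting to observe update in volume" step above can legitimately take a while; that is why this spec runs over a minute while the read-only configmap specs finish in seconds.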
May 7 14:27:48.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:27:48.610: INFO: namespace configmap-4943 deletion completed in 22.141947763s • [SLOW TEST:106.766 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:27:48.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-83f53ed0-f36f-4814-9450-39fbc9b2d2eb STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-83f53ed0-f36f-4814-9450-39fbc9b2d2eb STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:27:54.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1562" for this suite. 
May 7 14:28:16.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:28:16.966: INFO: namespace projected-1562 deletion completed in 22.106578744s • [SLOW TEST:28.355 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:28:16.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 7 14:28:17.069: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 7 14:28:26.223: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:28:26.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4498" for this suite. 
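The Pods spec above sets up a watch, submits a pod, deletes it gracefully, and requires that both the creation and the deletion are observed as watch events. A compact sketch of the watch-and-delete half, same client-go assumptions; the pod name is illustrative and assumed to already exist in the namespace:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // as before
        clientset, _ := kubernetes.NewForConfig(config)

        // Watch only the pod we care about.
        w, err := clientset.CoreV1().Pods("default").Watch(metav1.ListOptions{
            FieldSelector: "metadata.name=pod-submit-remove-demo",
        })
        if err != nil {
            panic(err)
        }

        // Graceful delete with a 30s grace period, then wait for the
        // Deleted event to arrive on the watch channel.
        if err := clientset.CoreV1().Pods("default").Delete(
            "pod-submit-remove-demo", metav1.NewDeleteOptions(30)); err != nil {
            panic(err)
        }
        for ev := range w.ResultChan() {
            fmt.Println("observed:", ev.Type)
            if ev.Type == watch.Deleted {
                break
            }
        }
    }

Between the delete and the Deleted event the pod is still listable with a deletion timestamp set, which is why the test logs "verifying the kubelet observed the termination notice" as a separate step.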
May 7 14:28:32.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:28:32.382: INFO: namespace pods-4498 deletion completed in 6.151435562s • [SLOW TEST:15.415 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:28:32.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 7 14:28:32.496: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f739fdbe-9f59-41eb-83f5-a233cf14aed6" in namespace "projected-4775" to be "success or failure" May 7 14:28:32.516: INFO: Pod "downwardapi-volume-f739fdbe-9f59-41eb-83f5-a233cf14aed6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.322939ms May 7 14:28:34.521: INFO: Pod "downwardapi-volume-f739fdbe-9f59-41eb-83f5-a233cf14aed6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024824502s May 7 14:28:36.524: INFO: Pod "downwardapi-volume-f739fdbe-9f59-41eb-83f5-a233cf14aed6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028448766s STEP: Saw pod success May 7 14:28:36.524: INFO: Pod "downwardapi-volume-f739fdbe-9f59-41eb-83f5-a233cf14aed6" satisfied condition "success or failure" May 7 14:28:36.526: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f739fdbe-9f59-41eb-83f5-a233cf14aed6 container client-container: STEP: delete the pod May 7 14:28:36.552: INFO: Waiting for pod downwardapi-volume-f739fdbe-9f59-41eb-83f5-a233cf14aed6 to disappear May 7 14:28:36.570: INFO: Pod downwardapi-volume-f739fdbe-9f59-41eb-83f5-a233cf14aed6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:28:36.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4775" for this suite. 
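The projected downwardAPI spec above exposes the container's own memory request as a file in a projected volume. A sketch under the same client-go assumptions; names, paths, and the 32Mi request are illustrative:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // as before
        clientset, _ := kubernetes.NewForConfig(config)

        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "projected-downward-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Volumes: []v1.Volume{{
                    Name: "podinfo",
                    VolumeSource: v1.VolumeSource{
                        Projected: &v1.ProjectedVolumeSource{
                            Sources: []v1.VolumeProjection{{
                                DownwardAPI: &v1.DownwardAPIProjection{
                                    Items: []v1.DownwardAPIVolumeFile{{
                                        Path: "memory_request",
                                        // Points back at the consuming container's
                                        // own resource request.
                                        ResourceFieldRef: &v1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "requests.memory",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
                Containers: []v1.Container{{
                    Name:    "client-container",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
                    Resources: v1.ResourceRequirements{
                        Requests: v1.ResourceList{v1.ResourceMemory: resource.MustParse("32Mi")},
                    },
                    VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
        if _, err := clientset.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }

With the default divisor of 1 the file contains the request in bytes (33554432 for 32Mi), which is what the test compares against the container's output.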
May 7 14:28:42.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:28:42.670: INFO: namespace projected-4775 deletion completed in 6.093859571s • [SLOW TEST:10.287 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:28:42.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 7 14:28:42.776: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 7 14:28:47.781: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 7 14:28:47.781: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 7 14:28:47.809: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-7114,SelfLink:/apis/apps/v1/namespaces/deployment-7114/deployments/test-cleanup-deployment,UID:b2c93b9c-e53b-4f05-8308-7e06b96f2bfa,ResourceVersion:9547307,Generation:1,CreationTimestamp:2020-05-07 14:28:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 7 14:28:47.815: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-7114,SelfLink:/apis/apps/v1/namespaces/deployment-7114/replicasets/test-cleanup-deployment-55bbcbc84c,UID:cc8b0131-3032-4cc2-bfbe-673a2643b8ea,ResourceVersion:9547309,Generation:1,CreationTimestamp:2020-05-07 14:28:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment b2c93b9c-e53b-4f05-8308-7e06b96f2bfa 0xc001615b77 0xc001615b78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 7 14:28:47.815: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 7 14:28:47.815: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-7114,SelfLink:/apis/apps/v1/namespaces/deployment-7114/replicasets/test-cleanup-controller,UID:bd4215b5-b852-44c4-8cda-c1fb64e0d6af,ResourceVersion:9547308,Generation:1,CreationTimestamp:2020-05-07 14:28:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment b2c93b9c-e53b-4f05-8308-7e06b96f2bfa 0xc001615aa7 0xc001615aa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 7 14:28:47.866: INFO: Pod "test-cleanup-controller-d4lh4" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-d4lh4,GenerateName:test-cleanup-controller-,Namespace:deployment-7114,SelfLink:/api/v1/namespaces/deployment-7114/pods/test-cleanup-controller-d4lh4,UID:a3e295ae-0313-4ede-9a19-97ce942d1ea0,ResourceVersion:9547302,Generation:0,CreationTimestamp:2020-05-07 14:28:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller bd4215b5-b852-44c4-8cda-c1fb64e0d6af 0xc003480477 0xc003480478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qcplk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qcplk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qcplk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0034804f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003480510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 14:28:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 14:28:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 14:28:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 14:28:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.195,StartTime:2020-05-07 14:28:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-07 14:28:45 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://57bf1182f9d6d4f4f544d1fb6417d10102fdba135d6ea12e4a695722e5c70871}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 7 14:28:47.867: INFO: Pod "test-cleanup-deployment-55bbcbc84c-kpsml" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-kpsml,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-7114,SelfLink:/api/v1/namespaces/deployment-7114/pods/test-cleanup-deployment-55bbcbc84c-kpsml,UID:9280927b-5f66-4709-8517-a01afdb5fc82,ResourceVersion:9547313,Generation:0,CreationTimestamp:2020-05-07 14:28:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c cc8b0131-3032-4cc2-bfbe-673a2643b8ea 0xc0034805f7 0xc0034805f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qcplk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qcplk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-qcplk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003480670} {node.kubernetes.io/unreachable Exists NoExecute 0xc003480690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-07 14:28:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:28:47.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7114" for this suite. 
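The deployment dumps above show RevisionHistoryLimit:*0 on test-cleanup-deployment: with a zero history limit the deployment controller deletes an old ReplicaSet (here test-cleanup-controller, adopted via the matching name: cleanup-pod label) as soon as it is scaled down, which is exactly what the "Waiting for deployment test-cleanup-deployment history to be cleaned up" step polls for. A sketch of such a deployment, same client-go assumptions, names illustrative:

    package main

    import (
        appsv1 "k8s.io/api/apps/v1"
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // as before
        clientset, _ := kubernetes.NewForConfig(config)

        zero, one := int32(0), int32(1)
        dep := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "cleanup-demo"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &one,
                // Keep no old ReplicaSets around: each superseded ReplicaSet is
                // deleted as soon as it is fully scaled down.
                RevisionHistoryLimit: &zero,
                Selector: &metav1.LabelSelector{
                    MatchLabels: map[string]string{"name": "cleanup-pod"},
                },
                Template: v1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{
                        Labels: map[string]string{"name": "cleanup-pod"},
                    },
                    Spec: v1.PodSpec{Containers: []v1.Container{{
                        Name:  "redis",
                        Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                    }}},
                },
            },
        }
        if _, err := clientset.AppsV1().Deployments("default").Create(dep); err != nil {
            panic(err)
        }
    }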
May 7 14:28:53.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:28:54.025: INFO: namespace deployment-7114 deletion completed in 6.098300728s • [SLOW TEST:11.355 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:28:54.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 7 14:28:54.129: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b63957ba-4964-4477-8b17-03d6ebd1f166" in namespace "downward-api-3872" to be "success or failure" May 7 14:28:54.184: INFO: Pod "downwardapi-volume-b63957ba-4964-4477-8b17-03d6ebd1f166": Phase="Pending", Reason="", readiness=false. Elapsed: 54.760839ms May 7 14:28:56.188: INFO: Pod "downwardapi-volume-b63957ba-4964-4477-8b17-03d6ebd1f166": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058552583s May 7 14:28:58.191: INFO: Pod "downwardapi-volume-b63957ba-4964-4477-8b17-03d6ebd1f166": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062152003s STEP: Saw pod success May 7 14:28:58.191: INFO: Pod "downwardapi-volume-b63957ba-4964-4477-8b17-03d6ebd1f166" satisfied condition "success or failure" May 7 14:28:58.194: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b63957ba-4964-4477-8b17-03d6ebd1f166 container client-container: STEP: delete the pod May 7 14:28:58.260: INFO: Waiting for pod downwardapi-volume-b63957ba-4964-4477-8b17-03d6ebd1f166 to disappear May 7 14:28:58.272: INFO: Pod downwardapi-volume-b63957ba-4964-4477-8b17-03d6ebd1f166 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:28:58.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3872" for this suite. 
May 7 14:29:04.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:29:04.378: INFO: namespace downward-api-3872 deletion completed in 6.103091042s • [SLOW TEST:10.352 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:29:04.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:29:04.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2802" for this suite. 
May 7 14:29:10.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:29:10.770: INFO: namespace kubelet-test-2802 deletion completed in 6.135837996s • [SLOW TEST:6.392 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:29:10.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:29:14.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2490" for this suite. 
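Both kubelet specs above boil down to reading a container's output back through the API server. A sketch of the log read, same client-go assumptions (Do() takes no context argument in this client-go generation); the pod name is illustrative and assumed to have already run:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // as before
        clientset, _ := kubernetes.NewForConfig(config)

        // GetLogs returns a rest.Request; Do().Raw() fetches the whole log.
        req := clientset.CoreV1().Pods("default").GetLogs("busybox-logs-demo", &v1.PodLogOptions{})
        raw, err := req.Do().Raw()
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", raw)
    }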
May 7 14:29:52.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:29:52.989: INFO: namespace kubelet-test-2490 deletion completed in 38.116243015s • [SLOW TEST:42.219 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:29:52.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 7 14:29:53.092: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e09066a1-8ea3-4359-ab11-f44e0b25b850" in namespace "projected-6785" to be "success or failure" May 7 14:29:53.100: INFO: Pod "downwardapi-volume-e09066a1-8ea3-4359-ab11-f44e0b25b850": Phase="Pending", Reason="", readiness=false. Elapsed: 7.855147ms May 7 14:29:55.112: INFO: Pod "downwardapi-volume-e09066a1-8ea3-4359-ab11-f44e0b25b850": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020627648s May 7 14:29:57.118: INFO: Pod "downwardapi-volume-e09066a1-8ea3-4359-ab11-f44e0b25b850": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026434903s STEP: Saw pod success May 7 14:29:57.118: INFO: Pod "downwardapi-volume-e09066a1-8ea3-4359-ab11-f44e0b25b850" satisfied condition "success or failure" May 7 14:29:57.120: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e09066a1-8ea3-4359-ab11-f44e0b25b850 container client-container: STEP: delete the pod May 7 14:29:57.138: INFO: Waiting for pod downwardapi-volume-e09066a1-8ea3-4359-ab11-f44e0b25b850 to disappear May 7 14:29:57.142: INFO: Pod downwardapi-volume-e09066a1-8ea3-4359-ab11-f44e0b25b850 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:29:57.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6785" for this suite. 
May 7 14:30:03.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:30:03.256: INFO: namespace projected-6785 deletion completed in 6.110376068s • [SLOW TEST:10.266 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:30:03.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 7 14:30:03.408: INFO: Waiting up to 5m0s for pod "pod-35ab8065-3c27-4bf8-9ad3-debb2ef955ef" in namespace "emptydir-5977" to be "success or failure" May 7 14:30:03.419: INFO: Pod "pod-35ab8065-3c27-4bf8-9ad3-debb2ef955ef": Phase="Pending", Reason="", readiness=false. Elapsed: 10.955709ms May 7 14:30:05.426: INFO: Pod "pod-35ab8065-3c27-4bf8-9ad3-debb2ef955ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017953947s May 7 14:30:07.431: INFO: Pod "pod-35ab8065-3c27-4bf8-9ad3-debb2ef955ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022293261s STEP: Saw pod success May 7 14:30:07.431: INFO: Pod "pod-35ab8065-3c27-4bf8-9ad3-debb2ef955ef" satisfied condition "success or failure" May 7 14:30:07.454: INFO: Trying to get logs from node iruya-worker2 pod pod-35ab8065-3c27-4bf8-9ad3-debb2ef955ef container test-container: STEP: delete the pod May 7 14:30:07.666: INFO: Waiting for pod pod-35ab8065-3c27-4bf8-9ad3-debb2ef955ef to disappear May 7 14:30:07.751: INFO: Pod pod-35ab8065-3c27-4bf8-9ad3-debb2ef955ef no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:30:07.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5977" for this suite. 
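The EmptyDir spec above runs as a non-root user against a tmpfs-backed emptydir and checks file ownership and modes; the two specs that follow repeat the check with the node-default medium and with 0777. A sketch of the tmpfs variant, same client-go assumptions; the UID, names, and image are illustrative:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // as before
        clientset, _ := kubernetes.NewForConfig(config)

        uid := int64(1001) // any non-root UID
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
            Spec: v1.PodSpec{
                RestartPolicy:   v1.RestartPolicyNever,
                SecurityContext: &v1.PodSecurityContext{RunAsUser: &uid},
                Volumes: []v1.Volume{{
                    Name: "test-volume",
                    // Medium "Memory" backs the volume with tmpfs; leaving
                    // Medium empty gives the node-default medium used by the
                    // (non-root,0644,default) and (non-root,0777,default) variants.
                    VolumeSource: v1.VolumeSource{
                        EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
                    },
                }},
                Containers: []v1.Container{{
                    Name:    "test-container",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c",
                        "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
                    VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
            },
        }
        if _, err := clientset.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }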
May 7 14:30:13.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 7 14:30:13.909: INFO: namespace emptydir-5977 deletion completed in 6.146748175s • [SLOW TEST:10.653 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 7 14:30:13.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 7 14:30:14.044: INFO: Waiting up to 5m0s for pod "pod-469c3193-e899-4fdf-b782-601b5930d674" in namespace "emptydir-1160" to be "success or failure" May 7 14:30:14.055: INFO: Pod "pod-469c3193-e899-4fdf-b782-601b5930d674": Phase="Pending", Reason="", readiness=false. Elapsed: 10.994546ms May 7 14:30:16.101: INFO: Pod "pod-469c3193-e899-4fdf-b782-601b5930d674": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057246568s May 7 14:30:18.105: INFO: Pod "pod-469c3193-e899-4fdf-b782-601b5930d674": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061005594s STEP: Saw pod success May 7 14:30:18.105: INFO: Pod "pod-469c3193-e899-4fdf-b782-601b5930d674" satisfied condition "success or failure" May 7 14:30:18.107: INFO: Trying to get logs from node iruya-worker pod pod-469c3193-e899-4fdf-b782-601b5930d674 container test-container: STEP: delete the pod May 7 14:30:18.143: INFO: Waiting for pod pod-469c3193-e899-4fdf-b782-601b5930d674 to disappear May 7 14:30:18.169: INFO: Pod pod-469c3193-e899-4fdf-b782-601b5930d674 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 7 14:30:18.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1160" for this suite. 
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:30:24.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
May 7 14:30:24.349: INFO: Waiting up to 5m0s for pod "pod-1279483c-cfb4-43c4-bfce-422640a11ded" in namespace "emptydir-3267" to be "success or failure"
May 7 14:30:24.352: INFO: Pod "pod-1279483c-cfb4-43c4-bfce-422640a11ded": Phase="Pending", Reason="", readiness=false. Elapsed: 3.339202ms
May 7 14:30:26.356: INFO: Pod "pod-1279483c-cfb4-43c4-bfce-422640a11ded": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007636402s
May 7 14:30:28.361: INFO: Pod "pod-1279483c-cfb4-43c4-bfce-422640a11ded": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012343062s
STEP: Saw pod success
May 7 14:30:28.361: INFO: Pod "pod-1279483c-cfb4-43c4-bfce-422640a11ded" satisfied condition "success or failure"
May 7 14:30:28.364: INFO: Trying to get logs from node iruya-worker pod pod-1279483c-cfb4-43c4-bfce-422640a11ded container test-container:
STEP: delete the pod
May 7 14:30:28.600: INFO: Waiting for pod pod-1279483c-cfb4-43c4-bfce-422640a11ded to disappear
May 7 14:30:28.652: INFO: Pod pod-1279483c-cfb4-43c4-bfce-422640a11ded no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 14:30:28.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3267" for this suite.
May 7 14:30:34.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:30:34.774: INFO: namespace emptydir-3267 deletion completed in 6.118480989s

• [SLOW TEST:10.502 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:30:34.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 14:30:40.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7796" for this suite.
May 7 14:30:46.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:30:46.528: INFO: namespace watch-7796 deletion completed in 6.196620204s

• [SLOW TEST:11.753 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
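The "creating watches starting from each resource version" step above rests on the fact that a watch may be opened at any previously observed resourceVersion, and the API server then replays every later event in order. A hedged sketch of that call; it uses modern client-go signatures (the v1.15-era client took no context argument), and the clientset, namespace, and resource kind are assumptions, not the test's actual choices.

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watchFrom opens a watch starting at a previously observed resourceVersion.
// Two watches started at the same version must deliver identical event
// orderings, which is exactly what this spec verifies across many watchers.
func watchFrom(cs kubernetes.Interface, ns, rv string) (watch.Interface, error) {
	return cs.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{
		ResourceVersion: rv,
	})
}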
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:30:46.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
May 7 14:30:46.576: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 7 14:30:46.615: INFO: Waiting for terminating namespaces to be deleted...
May 7 14:30:46.618: INFO: Logging pods the kubelet thinks are on node iruya-worker before test
May 7 14:30:46.622: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded)
May 7 14:30:46.622: INFO: Container kube-proxy ready: true, restart count 0
May 7 14:30:46.622: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded)
May 7 14:30:46.622: INFO: Container kindnet-cni ready: true, restart count 0
May 7 14:30:46.622: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test
May 7 14:30:46.627: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container status recorded)
May 7 14:30:46.627: INFO: Container coredns ready: true, restart count 0
May 7 14:30:46.627: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container status recorded)
May 7 14:30:46.627: INFO: Container coredns ready: true, restart count 0
May 7 14:30:46.627: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container status recorded)
May 7 14:30:46.627: INFO: Container kube-proxy ready: true, restart count 0
May 7 14:30:46.627: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container status recorded)
May 7 14:30:46.627: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
May 7 14:30:46.840: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2
May 7 14:30:46.840: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2
May 7 14:30:46.840: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker
May 7 14:30:46.840: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2
May 7 14:30:46.840: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker
May 7 14:30:46.840: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-557a3f93-2be9-4aaa-ae09-c38e9e8ac843.160cc56e7b2eaae5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5780/filler-pod-557a3f93-2be9-4aaa-ae09-c38e9e8ac843 to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-557a3f93-2be9-4aaa-ae09-c38e9e8ac843.160cc56eda05a8c7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-557a3f93-2be9-4aaa-ae09-c38e9e8ac843.160cc56f20ee6f03], Reason = [Created], Message = [Created container filler-pod-557a3f93-2be9-4aaa-ae09-c38e9e8ac843]
STEP: Considering event: Type = [Normal], Name = [filler-pod-557a3f93-2be9-4aaa-ae09-c38e9e8ac843.160cc56f3c28b386], Reason = [Started], Message = [Started container filler-pod-557a3f93-2be9-4aaa-ae09-c38e9e8ac843]
STEP: Considering event: Type = [Normal], Name = [filler-pod-77811e62-c856-4988-b701-58ac023e297c.160cc56e7d61393c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5780/filler-pod-77811e62-c856-4988-b701-58ac023e297c to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-77811e62-c856-4988-b701-58ac023e297c.160cc56f17829118], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-77811e62-c856-4988-b701-58ac023e297c.160cc56f4a16a19a], Reason = [Created], Message = [Created container filler-pod-77811e62-c856-4988-b701-58ac023e297c]
STEP: Considering event: Type = [Normal], Name = [filler-pod-77811e62-c856-4988-b701-58ac023e297c.160cc56f59236ac7], Reason = [Started], Message = [Started container filler-pod-77811e62-c856-4988-b701-58ac023e297c]
STEP: Considering event: Type = [Warning], Name = [additional-pod.160cc56fe43c3d53], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 14:30:54.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5780" for this suite.
May 7 14:31:02.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:31:02.160: INFO: namespace sched-pred-5780 deletion completed in 8.134692089s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:15.632 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
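For readers reconstructing the predicate being exercised: the filler pods request most of each node's allocatable CPU, so one further large request cannot fit anywhere, producing the FailedScheduling event above. The scheduler sums declared requests per node, not actual usage. A sketch of such a pod using the same core/v1 types; the quantities and names are made-up examples, not what the framework computes.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cpuHungryPod returns a pause pod with a CPU request. Once filler pods
// claim a node's capacity, a further request fails with "Insufficient cpu",
// as seen in the additional-pod event above.
func cpuHungryPod(name, cpu string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse(cpu), // e.g. "600m"
					},
				},
			}},
		},
	}
}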
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:31:02.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-bd7f64ed-03d2-48e5-9656-7210fdde2ed4
STEP: Creating a pod to test consume configMaps
May 7 14:31:02.261: INFO: Waiting up to 5m0s for pod "pod-configmaps-79b2e50e-6fa8-42d0-a5a9-56fc19ee0582" in namespace "configmap-1279" to be "success or failure"
May 7 14:31:02.278: INFO: Pod "pod-configmaps-79b2e50e-6fa8-42d0-a5a9-56fc19ee0582": Phase="Pending", Reason="", readiness=false. Elapsed: 17.246942ms
May 7 14:31:04.467: INFO: Pod "pod-configmaps-79b2e50e-6fa8-42d0-a5a9-56fc19ee0582": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206350844s
May 7 14:31:06.472: INFO: Pod "pod-configmaps-79b2e50e-6fa8-42d0-a5a9-56fc19ee0582": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.210849868s
STEP: Saw pod success
May 7 14:31:06.472: INFO: Pod "pod-configmaps-79b2e50e-6fa8-42d0-a5a9-56fc19ee0582" satisfied condition "success or failure"
May 7 14:31:06.475: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-79b2e50e-6fa8-42d0-a5a9-56fc19ee0582 container configmap-volume-test:
STEP: delete the pod
May 7 14:31:06.500: INFO: Waiting for pod pod-configmaps-79b2e50e-6fa8-42d0-a5a9-56fc19ee0582 to disappear
May 7 14:31:06.503: INFO: Pod pod-configmaps-79b2e50e-6fa8-42d0-a5a9-56fc19ee0582 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 14:31:06.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1279" for this suite.
May 7 14:31:12.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:31:12.654: INFO: namespace configmap-1279 deletion completed in 6.146787205s

• [SLOW TEST:10.494 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
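The "mappings and Item mode set" wording refers to the Items field of the ConfigMap volume source: it remaps a ConfigMap key to a chosen file path and can pin a per-file mode. A minimal sketch; the key, path, and mode here are hypothetical illustrations, not the test's generated values.

package sketch

import corev1 "k8s.io/api/core/v1"

// configMapVolume maps one ConfigMap key to a new path and sets an explicit
// per-item file mode, which is the variant this spec exercises.
func configMapVolume() corev1.Volume {
	mode := int32(0400) // per-item mode override
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				Items: []corev1.KeyToPath{{
					Key:  "data-1",         // key inside the ConfigMap
					Path: "path/to/data-2", // file path inside the mount
					Mode: &mode,
				}},
			},
		},
	}
}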
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:31:12.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 7 14:31:12.751: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e74b485-3ed4-4d57-a109-a5f4f37880b2" in namespace "downward-api-590" to be "success or failure"
May 7 14:31:12.763: INFO: Pod "downwardapi-volume-0e74b485-3ed4-4d57-a109-a5f4f37880b2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.762319ms
May 7 14:31:14.767: INFO: Pod "downwardapi-volume-0e74b485-3ed4-4d57-a109-a5f4f37880b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016086585s
May 7 14:31:16.772: INFO: Pod "downwardapi-volume-0e74b485-3ed4-4d57-a109-a5f4f37880b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02081475s
STEP: Saw pod success
May 7 14:31:16.772: INFO: Pod "downwardapi-volume-0e74b485-3ed4-4d57-a109-a5f4f37880b2" satisfied condition "success or failure"
May 7 14:31:16.775: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0e74b485-3ed4-4d57-a109-a5f4f37880b2 container client-container:
STEP: delete the pod
May 7 14:31:16.827: INFO: Waiting for pod downwardapi-volume-0e74b485-3ed4-4d57-a109-a5f4f37880b2 to disappear
May 7 14:31:16.835: INFO: Pod downwardapi-volume-0e74b485-3ed4-4d57-a109-a5f4f37880b2 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 14:31:16.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-590" for this suite.
May 7 14:31:22.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:31:22.926: INFO: namespace downward-api-590 deletion completed in 6.086994708s

• [SLOW TEST:10.272 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
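The CPU-request file this spec reads back comes from a downward API volume item with a resourceFieldRef. A sketch with illustrative file and container names (the container name must match a container in the same pod spec).

package sketch

import corev1 "k8s.io/api/core/v1"

// downwardAPIVolume exposes the named container's CPU request as a file,
// which the test container then cats and compares against the pod spec.
func downwardAPIVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_request",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "requests.cpu",
					},
				}},
			},
		},
	}
}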
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:31:22.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-a338e825-b975-432b-9879-b9e724271196 in namespace container-probe-2977
May 7 14:31:26.995: INFO: Started pod test-webserver-a338e825-b975-432b-9879-b9e724271196 in namespace container-probe-2977
STEP: checking the pod's current state and verifying that restartCount is present
May 7 14:31:26.999: INFO: Initial restart count of pod test-webserver-a338e825-b975-432b-9879-b9e724271196 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 14:35:27.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2977" for this suite.
May 7 14:35:33.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:35:33.703: INFO: namespace container-probe-2977 deletion completed in 6.129567017s

• [SLOW TEST:250.777 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
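The probe under test keeps succeeding, so restartCount stays at 0 for the whole observation window, which is why this spec alone runs for roughly four minutes (the 250-second SLOW TEST above). A sketch of such a probe; the port and timings are assumptions, and note the v1.15-era embedded field is named Handler, renamed ProbeHandler in much later k8s.io/api releases.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// healthzProbe is an HTTP liveness probe against /healthz. The kubelet only
// restarts the container after FailureThreshold consecutive failures, which
// never happens for a healthy webserver, so restartCount remains 0.
func healthzProbe() *corev1.Probe {
	return &corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
		},
		InitialDelaySeconds: 15,
		FailureThreshold:    3,
	}
}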
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:35:33.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
May 7 14:35:37.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-f3e656d4-a003-4805-a781-07b3a6c91a60 -c busybox-main-container --namespace=emptydir-7416 -- cat /usr/share/volumeshare/shareddata.txt'
May 7 14:35:40.567: INFO: stderr: "I0507 14:35:40.450428 3077 log.go:172] (0xc00013ee70) (0xc0005a6c80) Create stream\nI0507 14:35:40.450456 3077 log.go:172] (0xc00013ee70) (0xc0005a6c80) Stream added, broadcasting: 1\nI0507 14:35:40.452815 3077 log.go:172] (0xc00013ee70) Reply frame received for 1\nI0507 14:35:40.452868 3077 log.go:172] (0xc00013ee70) (0xc000a6c000) Create stream\nI0507 14:35:40.452894 3077 log.go:172] (0xc00013ee70) (0xc000a6c000) Stream added, broadcasting: 3\nI0507 14:35:40.454346 3077 log.go:172] (0xc00013ee70) Reply frame received for 3\nI0507 14:35:40.454391 3077 log.go:172] (0xc00013ee70) (0xc0007a0000) Create stream\nI0507 14:35:40.454415 3077 log.go:172] (0xc00013ee70) (0xc0007a0000) Stream added, broadcasting: 5\nI0507 14:35:40.455527 3077 log.go:172] (0xc00013ee70) Reply frame received for 5\nI0507 14:35:40.558613 3077 log.go:172] (0xc00013ee70) Data frame received for 3\nI0507 14:35:40.558667 3077 log.go:172] (0xc000a6c000) (3) Data frame handling\nI0507 14:35:40.558682 3077 log.go:172] (0xc000a6c000) (3) Data frame sent\nI0507 14:35:40.558696 3077 log.go:172] (0xc00013ee70) Data frame received for 3\nI0507 14:35:40.558727 3077 log.go:172] (0xc000a6c000) (3) Data frame handling\nI0507 14:35:40.558757 3077 log.go:172] (0xc00013ee70) Data frame received for 5\nI0507 14:35:40.558790 3077 log.go:172] (0xc0007a0000) (5) Data frame handling\nI0507 14:35:40.560693 3077 log.go:172] (0xc00013ee70) Data frame received for 1\nI0507 14:35:40.560735 3077 log.go:172] (0xc0005a6c80) (1) Data frame handling\nI0507 14:35:40.560750 3077 log.go:172] (0xc0005a6c80) (1) Data frame sent\nI0507 14:35:40.560773 3077 log.go:172] (0xc00013ee70) (0xc0005a6c80) Stream removed, broadcasting: 1\nI0507 14:35:40.560803 3077 log.go:172] (0xc00013ee70) Go away received\nI0507 14:35:40.561534 3077 log.go:172] (0xc00013ee70) (0xc0005a6c80) Stream removed, broadcasting: 1\nI0507 14:35:40.561560 3077 log.go:172] (0xc00013ee70) (0xc000a6c000) Stream removed, broadcasting: 3\nI0507 14:35:40.561572 3077 log.go:172] (0xc00013ee70) (0xc0007a0000) Stream removed, broadcasting: 5\n"
May 7 14:35:40.567: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 14:35:40.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7416" for this suite.
May 7 14:35:46.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:35:46.660: INFO: namespace emptydir-7416 deletion completed in 6.088301149s

• [SLOW TEST:12.956 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
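The shared-volume mechanics above are simply two containers in one pod mounting the same emptyDir: one container writes shareddata.txt, and the test execs cat in the other (producing the "Hello from the busy-box sub-container" stdout). A sketch with illustrative container names and commands, not the framework's generated spec.

package sketch

import corev1 "k8s.io/api/core/v1"

// sharedVolumePodSpec mounts one emptyDir into two containers. Because both
// containers see the same directory, a file written by the first is readable
// from the second via kubectl exec, as in the log above.
func sharedVolumePodSpec() corev1.PodSpec {
	mount := corev1.VolumeMount{Name: "shared-data", MountPath: "/usr/share/volumeshare"}
	return corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name:         "shared-data",
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		}},
		Containers: []corev1.Container{
			{
				Name:         "writer",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{mount},
			},
			{
				// The test execs into this container and cats the file
				// written by its sibling.
				Name:         "reader",
				Image:        "busybox:1.29",
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{mount},
			},
		},
	}
}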
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:35:46.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 7 14:35:51.294: INFO: Successfully updated pod "pod-update-activedeadlineseconds-21b67bfd-4186-4713-a0ea-019c4ea94396"
May 7 14:35:51.294: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-21b67bfd-4186-4713-a0ea-019c4ea94396" in namespace "pods-1339" to be "terminated due to deadline exceeded"
May 7 14:35:51.300: INFO: Pod "pod-update-activedeadlineseconds-21b67bfd-4186-4713-a0ea-019c4ea94396": Phase="Running", Reason="", readiness=true. Elapsed: 5.927994ms
May 7 14:35:53.303: INFO: Pod "pod-update-activedeadlineseconds-21b67bfd-4186-4713-a0ea-019c4ea94396": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.00935711s
May 7 14:35:53.303: INFO: Pod "pod-update-activedeadlineseconds-21b67bfd-4186-4713-a0ea-019c4ea94396" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 14:35:53.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1339" for this suite.
May 7 14:35:59.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:35:59.428: INFO: namespace pods-1339 deletion completed in 6.12200039s

• [SLOW TEST:12.768 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
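activeDeadlineSeconds is one of the few pod-spec fields that may be mutated on a live pod, which is what the "updating the pod" step does before the kubelet fails the pod with reason DeadlineExceeded. A sketch of that update using modern client-go signatures (the v1.15-era client took no context argument); the names and the 5-second value are assumptions.

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// shortenDeadline fetches a running pod and shrinks its
// activeDeadlineSeconds, after which the kubelet terminates it and the pod
// phase becomes Failed with reason DeadlineExceeded.
func shortenDeadline(cs kubernetes.Interface, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	deadline := int64(5) // seconds, measured from the pod's start time
	pod.Spec.ActiveDeadlineSeconds = &deadline
	_, err = cs.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
	return err
}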
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:35:59.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-9913
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9913 to expose endpoints map[]
May 7 14:35:59.851: INFO: Get endpoints failed (2.802222ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 7 14:36:00.855: INFO: successfully validated that service multi-endpoint-test in namespace services-9913 exposes endpoints map[] (1.007062254s elapsed)
STEP: Creating pod pod1 in namespace services-9913
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9913 to expose endpoints map[pod1:[100]]
May 7 14:36:03.944: INFO: successfully validated that service multi-endpoint-test in namespace services-9913 exposes endpoints map[pod1:[100]] (3.081998514s elapsed)
STEP: Creating pod pod2 in namespace services-9913
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9913 to expose endpoints map[pod1:[100] pod2:[101]]
May 7 14:36:07.062: INFO: successfully validated that service multi-endpoint-test in namespace services-9913 exposes endpoints map[pod1:[100] pod2:[101]] (3.114513226s elapsed)
STEP: Deleting pod pod1 in namespace services-9913
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9913 to expose endpoints map[pod2:[101]]
May 7 14:36:08.086: INFO: successfully validated that service multi-endpoint-test in namespace services-9913 exposes endpoints map[pod2:[101]] (1.019823196s elapsed)
STEP: Deleting pod pod2 in namespace services-9913
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9913 to expose endpoints map[]
May 7 14:36:09.107: INFO: successfully validated that service multi-endpoint-test in namespace services-9913 exposes endpoints map[] (1.015263957s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 14:36:09.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9913" for this suite.
May 7 14:36:15.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:36:15.309: INFO: namespace services-9913 deletion completed in 6.134222436s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:15.881 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
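A multiport Service requires named ports so the endpoints controller can match each service port to a container port; the [100] and [101] in the endpoints maps above are the container ports behind pod1 and pod2. A sketch of such a Service; the port names, numbers, and selector label are assumptions, not the test's generated values.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// multiportService exposes two named ports, each forwarding to a distinct
// container port. The endpoints object then carries one address set per
// named port, which is what the test's endpoints maps reflect.
func multiportService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multiport"}, // hypothetical label
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
}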
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:36:15.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-df255f51-2577-445b-aac1-db9c37b02e94
STEP: Creating a pod to test consume configMaps
May 7 14:36:15.374: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-535246c4-dcf1-4f12-900b-57433ea5c0f5" in namespace "projected-8079" to be "success or failure"
May 7 14:36:15.412: INFO: Pod "pod-projected-configmaps-535246c4-dcf1-4f12-900b-57433ea5c0f5": Phase="Pending", Reason="", readiness=false. Elapsed: 37.783969ms
May 7 14:36:17.416: INFO: Pod "pod-projected-configmaps-535246c4-dcf1-4f12-900b-57433ea5c0f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042298271s
May 7 14:36:19.420: INFO: Pod "pod-projected-configmaps-535246c4-dcf1-4f12-900b-57433ea5c0f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046227757s
STEP: Saw pod success
May 7 14:36:19.420: INFO: Pod "pod-projected-configmaps-535246c4-dcf1-4f12-900b-57433ea5c0f5" satisfied condition "success or failure"
May 7 14:36:19.422: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-535246c4-dcf1-4f12-900b-57433ea5c0f5 container projected-configmap-volume-test:
STEP: delete the pod
May 7 14:36:19.458: INFO: Waiting for pod pod-projected-configmaps-535246c4-dcf1-4f12-900b-57433ea5c0f5 to disappear
May 7 14:36:19.480: INFO: Pod pod-projected-configmaps-535246c4-dcf1-4f12-900b-57433ea5c0f5 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 14:36:19.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8079" for this suite.
May 7 14:36:25.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:36:25.619: INFO: namespace projected-8079 deletion completed in 6.135455919s

• [SLOW TEST:10.309 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
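The projected variant wraps the same key-to-path mapping in a projected volume source, which can merge ConfigMaps, Secrets, and downward API items into a single mount. A brief sketch with hypothetical names, mirroring the plain ConfigMap example earlier.

package sketch

import corev1 "k8s.io/api/core/v1"

// projectedConfigMapVolume is the projected counterpart of the plain
// ConfigMap volume: one source in the Sources list, same Items semantics.
func projectedConfigMapVolume() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
						Items:                []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
					},
				}},
			},
		},
	}
}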
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 7 14:36:25.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 7 14:36:29.769: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 7 14:36:29.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4556" for this suite.
May 7 14:36:35.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 7 14:36:35.888: INFO: namespace container-runtime-4556 deletion completed in 6.092048489s

• [SLOW TEST:10.268 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 7 14:36:35.889: INFO: Running AfterSuite actions on all nodes
May 7 14:36:35.889: INFO: Running AfterSuite actions on node 1
May 7 14:36:35.889: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 6051.141 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS