I0411 12:55:44.503091 6 e2e.go:243] Starting e2e run "6e02c623-31f7-407b-978e-03bd21157b98" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1586609743 - Will randomize all specs
Will run 215 of 4412 specs

Apr 11 12:55:44.682: INFO: >>> kubeConfig: /root/.kube/config
Apr 11 12:55:44.686: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 11 12:55:44.707: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 11 12:55:44.746: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 11 12:55:44.746: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 11 12:55:44.746: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 11 12:55:44.755: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 11 12:55:44.755: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 11 12:55:44.755: INFO: e2e test version: v1.15.11
Apr 11 12:55:44.756: INFO: kube-apiserver version: v1.15.7
SSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 12:55:44.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
Apr 11 12:55:44.837: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 11 12:55:44.839: INFO: Creating deployment "nginx-deployment"
Apr 11 12:55:44.842: INFO: Waiting for observed generation 1
Apr 11 12:55:46.925: INFO: Waiting for all required pods to come up
Apr 11 12:55:46.929: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Apr 11 12:55:54.973: INFO: Waiting for deployment "nginx-deployment" to complete
Apr 11 12:55:54.978: INFO: Updating deployment "nginx-deployment" with a non-existent image
Apr 11 12:55:54.984: INFO: Updating deployment nginx-deployment
Apr 11 12:55:54.984: INFO: Waiting for observed generation 2
Apr 11 12:55:56.999: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Apr 11 12:55:57.002: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Apr 11 12:55:57.003: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Apr 11 12:55:57.009: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Apr 11 12:55:57.009: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Apr 11 12:55:57.029: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Apr 11 12:55:57.033: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Apr 11 12:55:57.033: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Apr 11 12:55:57.039: INFO: Updating deployment nginx-deployment
Apr 11 12:55:57.039: INFO: Waiting for the replicasets of deployment
"nginx-deployment" to have desired number of replicas
Apr 11 12:55:57.051: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Apr 11 12:55:57.072: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 11 12:55:57.202: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-7457,SelfLink:/apis/apps/v1/namespaces/deployment-7457/deployments/nginx-deployment,UID:15ee9889-aea3-44e2-b7db-261c5d361fb3,ResourceVersion:4837854,Generation:3,CreationTimestamp:2020-04-11 12:55:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-04-11 12:55:55 +0000 UTC 2020-04-11 12:55:44 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-04-11 12:55:57 +0000 UTC 2020-04-11 12:55:57 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Apr 11 12:55:57.230: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-7457,SelfLink:/apis/apps/v1/namespaces/deployment-7457/replicasets/nginx-deployment-55fb7cb77f,UID:15c3a3ab-74f3-4629-9501-8d5c2d9b89af,ResourceVersion:4837885,Generation:3,CreationTimestamp:2020-04-11 12:55:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 15ee9889-aea3-44e2-b7db-261c5d361fb3 0xc002d0d867 0xc002d0d868}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 11 12:55:57.230: INFO: All old ReplicaSets of Deployment "nginx-deployment": Apr 11 12:55:57.230: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-7457,SelfLink:/apis/apps/v1/namespaces/deployment-7457/replicasets/nginx-deployment-7b8c6f4498,UID:19f12386-093d-4d35-959f-16daecd485fe,ResourceVersion:4837883,Generation:3,CreationTimestamp:2020-04-11 12:55:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 15ee9889-aea3-44e2-b7db-261c5d361fb3 0xc002d0d937 0xc002d0d938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Apr 11 12:55:57.368: INFO: Pod "nginx-deployment-55fb7cb77f-2jz4z" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2jz4z,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-55fb7cb77f-2jz4z,UID:fedc6281-5274-4cfb-a81d-246f4887ab90,ResourceVersion:4837866,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 15c3a3ab-74f3-4629-9501-8d5c2d9b89af 0xc002b06767 0xc002b06768}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002b067e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b06800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.369: INFO: Pod "nginx-deployment-55fb7cb77f-575fw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-575fw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-55fb7cb77f-575fw,UID:bae15bc7-a635-4172-962a-a462f77b8c93,ResourceVersion:4837833,Generation:0,CreationTimestamp:2020-04-11 12:55:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 15c3a3ab-74f3-4629-9501-8d5c2d9b89af 0xc002b06887 0xc002b06888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b06900} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b06920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-11 12:55:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.369: INFO: Pod "nginx-deployment-55fb7cb77f-7dzxz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7dzxz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-55fb7cb77f-7dzxz,UID:92420afe-b398-4f77-82ab-bcff5dfa66eb,ResourceVersion:4837828,Generation:0,CreationTimestamp:2020-04-11 12:55:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 15c3a3ab-74f3-4629-9501-8d5c2d9b89af 0xc002b069f0 0xc002b069f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002b06a70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b06a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-11 12:55:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.369: INFO: Pod "nginx-deployment-55fb7cb77f-8bmtg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8bmtg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-55fb7cb77f-8bmtg,UID:f73ebce4-9b62-4dce-98b7-384429c964ce,ResourceVersion:4837814,Generation:0,CreationTimestamp:2020-04-11 12:55:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 15c3a3ab-74f3-4629-9501-8d5c2d9b89af 0xc002b06b60 0xc002b06b61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b06be0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b06c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-11 12:55:55 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.369: INFO: Pod "nginx-deployment-55fb7cb77f-8qjm9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8qjm9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-55fb7cb77f-8qjm9,UID:5623426d-d4d1-4036-b3a8-d7b5d67b8670,ResourceVersion:4837882,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 15c3a3ab-74f3-4629-9501-8d5c2d9b89af 0xc002b06cd0 0xc002b06cd1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b06d50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b06d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.370: INFO: Pod "nginx-deployment-55fb7cb77f-98hpj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-98hpj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-55fb7cb77f-98hpj,UID:1040a0fc-b942-4e16-a470-3ee3d4f8dffd,ResourceVersion:4837808,Generation:0,CreationTimestamp:2020-04-11 12:55:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 15c3a3ab-74f3-4629-9501-8d5c2d9b89af 0xc002b06df7 0xc002b06df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b06e70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b06e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-11 12:55:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.370: INFO: Pod "nginx-deployment-55fb7cb77f-mrzn9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mrzn9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-55fb7cb77f-mrzn9,UID:bad57006-4368-4fa4-b2ee-0aa0bb77af02,ResourceVersion:4837826,Generation:0,CreationTimestamp:2020-04-11 12:55:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 15c3a3ab-74f3-4629-9501-8d5c2d9b89af 0xc002b06f60 0xc002b06f61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b06fe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b07000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-11 12:55:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.370: INFO: Pod "nginx-deployment-55fb7cb77f-nv5xv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nv5xv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-55fb7cb77f-nv5xv,UID:c1c52614-193d-4a58-8c1a-c4e9e51803d1,ResourceVersion:4837888,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 15c3a3ab-74f3-4629-9501-8d5c2d9b89af 0xc002b070d0 0xc002b070d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002b07150} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b07170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.370: INFO: Pod "nginx-deployment-55fb7cb77f-nxf5q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nxf5q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-55fb7cb77f-nxf5q,UID:b39c7f8a-d685-4ad8-92cd-c6651e0246a2,ResourceVersion:4837865,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 15c3a3ab-74f3-4629-9501-8d5c2d9b89af 0xc002b071f7 0xc002b071f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b07270} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b07290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.370: INFO: Pod "nginx-deployment-55fb7cb77f-qc9xd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qc9xd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-55fb7cb77f-qc9xd,UID:70700e78-0fe8-4165-aeca-2900374358ae,ResourceVersion:4837893,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 15c3a3ab-74f3-4629-9501-8d5c2d9b89af 0xc002b07317 0xc002b07318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b07390} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b073b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.370: INFO: Pod "nginx-deployment-55fb7cb77f-t5zv6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t5zv6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-55fb7cb77f-t5zv6,UID:94c0ab31-a8e9-4fa3-9fe0-809d7793d4e6,ResourceVersion:4837857,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 15c3a3ab-74f3-4629-9501-8d5c2d9b89af 0xc002b07437 0xc002b07438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002b074b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b074d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.370: INFO: Pod "nginx-deployment-55fb7cb77f-wcz84" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wcz84,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-55fb7cb77f-wcz84,UID:270f7211-b99c-4c71-a446-6ddfa5b48bbf,ResourceVersion:4837890,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 15c3a3ab-74f3-4629-9501-8d5c2d9b89af 0xc002b07557 0xc002b07558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b075d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b075f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.370: INFO: Pod "nginx-deployment-55fb7cb77f-xw8g9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xw8g9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-55fb7cb77f-xw8g9,UID:dbfe4599-aba0-4c31-a26a-3d3ffc004da7,ResourceVersion:4837894,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 15c3a3ab-74f3-4629-9501-8d5c2d9b89af 0xc002b07677 0xc002b07678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b076f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b07710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.371: INFO: Pod "nginx-deployment-7b8c6f4498-4kcpm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4kcpm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-4kcpm,UID:da2e7041-c8f2-445a-9e37-cb6957f88243,ResourceVersion:4837899,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc002b07797 0xc002b07798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b07810} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b07830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-11 12:55:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.371: INFO: Pod "nginx-deployment-7b8c6f4498-5c6jw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5c6jw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-5c6jw,UID:7fb8e2d2-6b1c-41a8-874e-ca559a00f142,ResourceVersion:4837867,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc002b078f7 0xc002b078f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b07970} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b07990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.371: INFO: Pod "nginx-deployment-7b8c6f4498-65cf2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-65cf2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-65cf2,UID:0a93d066-b824-4030-8f76-f485721e651a,ResourceVersion:4837750,Generation:0,CreationTimestamp:2020-04-11 12:55:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc002b07a17 0xc002b07a18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b07a90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b07ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.232,StartTime:2020-04-11 12:55:45 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-11 12:55:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6aa9487ad14bbbb51a1bc77ae5347a2c844a56a7e9228f1ee2a9bef4b32b39d1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.371: INFO: Pod "nginx-deployment-7b8c6f4498-6vzms" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6vzms,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-6vzms,UID:635cecff-91c8-4459-b3a9-ed2e2c97db31,ResourceVersion:4837889,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc002b07b87 0xc002b07b88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b07c00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b07c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.371: INFO: Pod "nginx-deployment-7b8c6f4498-7sftp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7sftp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-7sftp,UID:0b7f5a7c-8058-4c24-b457-d054bb35f016,ResourceVersion:4837884,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc002b07ca7 0xc002b07ca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b07d20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b07d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-11 12:55:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.371: INFO: Pod "nginx-deployment-7b8c6f4498-8hqxb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8hqxb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-8hqxb,UID:145e05b8-f76b-4f6f-a303-72e33e3d33fd,ResourceVersion:4837892,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc002b07e07 0xc002b07e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b07e80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b07ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.371: INFO: Pod "nginx-deployment-7b8c6f4498-958ql" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-958ql,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-958ql,UID:d40f95a5-837b-40d4-9b50-a6dcf2e37360,ResourceVersion:4837743,Generation:0,CreationTimestamp:2020-04-11 12:55:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc002b07f27 0xc002b07f28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b07fa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b07fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.22,StartTime:2020-04-11 12:55:45 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-11 12:55:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fafa5a82a9c1f5c1f653b0a4164f97757cfa26a153e07756e06e5643a783b136}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.372: INFO: Pod "nginx-deployment-7b8c6f4498-9dcp6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9dcp6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-9dcp6,UID:f36df22f-46a5-435f-a9fb-104673213305,ResourceVersion:4837895,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc00290c097 0xc00290c098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290c110} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290c130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.372: INFO: Pod "nginx-deployment-7b8c6f4498-9kj69" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9kj69,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-9kj69,UID:228a242a-113d-4b19-86f0-d5b2519f7889,ResourceVersion:4837756,Generation:0,CreationTimestamp:2020-04-11 12:55:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc00290c1b7 0xc00290c1b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290c230} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290c250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.21,StartTime:2020-04-11 12:55:45 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-11 12:55:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4f3af1cd599767d0a2db3e42f44f00849ecb5fe7b8cd591bf09b57bd40a12405}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.372: INFO: Pod "nginx-deployment-7b8c6f4498-cp4g9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cp4g9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-cp4g9,UID:0792789b-ee61-4e0c-82a2-ad31bb2daa7e,ResourceVersion:4837744,Generation:0,CreationTimestamp:2020-04-11 12:55:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc00290c327 0xc00290c328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290c3a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290c3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.233,StartTime:2020-04-11 12:55:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-11 12:55:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://eeb8c14d02be7610a5dbf0aa1f722fa39a374947e1c8e10fc3b1d34adf94ff3e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.372: INFO: Pod "nginx-deployment-7b8c6f4498-dd7jt" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dd7jt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-dd7jt,UID:b75ed658-bad5-4314-b035-0af24cfc7a4e,ResourceVersion:4837728,Generation:0,CreationTimestamp:2020-04-11 12:55:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc00290c497 0xc00290c498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290c510} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290c530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.20,StartTime:2020-04-11 12:55:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-11 12:55:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a65f942fde59921e02f60f433fdbb3c1856b8365960554d7da155398e591b022}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.372: INFO: Pod "nginx-deployment-7b8c6f4498-dqdj2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dqdj2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-dqdj2,UID:503a617e-bf15-44d3-9402-06d2e86ce142,ResourceVersion:4837877,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc00290c607 0xc00290c608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290c680} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290c6a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.372: INFO: Pod "nginx-deployment-7b8c6f4498-fd69p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fd69p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-fd69p,UID:7b278ebc-ce3e-4fc0-8d8d-21e9bd62d189,ResourceVersion:4837887,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc00290c727 0xc00290c728}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290c7a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290c7c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.372: INFO: Pod "nginx-deployment-7b8c6f4498-gbv7d" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gbv7d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-gbv7d,UID:f17b27bf-eddb-47d3-ae76-8fd6a8563eab,ResourceVersion:4837772,Generation:0,CreationTimestamp:2020-04-11 12:55:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc00290c847 0xc00290c848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290c8c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290c8e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.24,StartTime:2020-04-11 12:55:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-11 12:55:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b3930df20ccaf1d90dd3eedd81986ed85c8ea9d588c2b086da95fa413e6c3f7c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.372: INFO: Pod "nginx-deployment-7b8c6f4498-l457b" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l457b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-l457b,UID:84871f81-7363-4af7-8497-ee6233248048,ResourceVersion:4837891,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc00290c9d7 0xc00290c9d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290ca50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290ca70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.372: INFO: Pod "nginx-deployment-7b8c6f4498-qdzpd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qdzpd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-qdzpd,UID:77ef5fd4-30e4-47ed-9338-c854f0c3a6c7,ResourceVersion:4837853,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc00290caf7 0xc00290caf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290cb70} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290cb90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.373: INFO: Pod "nginx-deployment-7b8c6f4498-rkxvg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rkxvg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-rkxvg,UID:14cc5951-bfd7-4019-8384-1b6c4c2aa0f7,ResourceVersion:4837754,Generation:0,CreationTimestamp:2020-04-11 12:55:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc00290cc17 0xc00290cc18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290cc90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290ccb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.234,StartTime:2020-04-11 12:55:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-11 12:55:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ca6a030999a59a4f40764d147af236d98f64d82e547644e1f13526876818602c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.373: INFO: Pod "nginx-deployment-7b8c6f4498-v2jmc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v2jmc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-v2jmc,UID:e663b013-aac4-4562-8f32-bbf442f737f5,ResourceVersion:4837875,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc00290cd87 0xc00290cd88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290ce00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290ce20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.373: INFO: Pod "nginx-deployment-7b8c6f4498-xlgpf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xlgpf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-xlgpf,UID:97cd4260-677b-4508-af94-f2ffa73a568e,ResourceVersion:4837869,Generation:0,CreationTimestamp:2020-04-11 12:55:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc00290cea7 0xc00290cea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290cf20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290cf40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 12:55:57.373: INFO: Pod "nginx-deployment-7b8c6f4498-xndpn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xndpn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7457,SelfLink:/api/v1/namespaces/deployment-7457/pods/nginx-deployment-7b8c6f4498-xndpn,UID:776506cd-75be-4aab-b856-edf2eff7618c,ResourceVersion:4837769,Generation:0,CreationTimestamp:2020-04-11 12:55:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 19f12386-093d-4d35-959f-16daecd485fe 0xc00290cfc7 0xc00290cfc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-76k4n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-76k4n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-76k4n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00290d040} {node.kubernetes.io/unreachable Exists NoExecute 0xc00290d060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 12:55:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.23,StartTime:2020-04-11 12:55:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-11 12:55:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://dd22468215698370cc7d7c5dd028c5ff68e6d932e8172441a894bf979b53527c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 12:55:57.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"deployment-7457" for this suite.
Apr 11 12:56:15.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 12:56:15.720: INFO: namespace deployment-7457 deletion completed in 18.222521818s

• [SLOW TEST:30.964 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 12:56:15.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-ae3d534b-67bb-4a1f-8bab-bd4f553fc369
STEP: Creating a pod to test consume configMaps
Apr 11 12:56:15.818: INFO: Waiting up to 5m0s for pod "pod-configmaps-b77524a1-f626-432e-b76a-e43e0742f21f" in namespace "configmap-5945" to be "success or failure"
Apr 11 12:56:15.827: INFO: Pod "pod-configmaps-b77524a1-f626-432e-b76a-e43e0742f21f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.967036ms
Apr 11 12:56:17.831: INFO: Pod "pod-configmaps-b77524a1-f626-432e-b76a-e43e0742f21f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013369477s
Apr 11 12:56:19.835: INFO: Pod "pod-configmaps-b77524a1-f626-432e-b76a-e43e0742f21f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017292961s
STEP: Saw pod success
Apr 11 12:56:19.835: INFO: Pod "pod-configmaps-b77524a1-f626-432e-b76a-e43e0742f21f" satisfied condition "success or failure"
Apr 11 12:56:19.839: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b77524a1-f626-432e-b76a-e43e0742f21f container configmap-volume-test:
STEP: delete the pod
Apr 11 12:56:19.899: INFO: Waiting for pod pod-configmaps-b77524a1-f626-432e-b76a-e43e0742f21f to disappear
Apr 11 12:56:19.914: INFO: Pod pod-configmaps-b77524a1-f626-432e-b76a-e43e0742f21f no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 12:56:19.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5945" for this suite.
Apr 11 12:56:25.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 12:56:26.026: INFO: namespace configmap-5945 deletion completed in 6.107570142s

• [SLOW TEST:10.305 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 12:56:26.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 11 12:56:26.115: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b23416d4-50a9-4c50-b4fd-54128131de20" in namespace "projected-3665" to be "success or failure"
Apr 11 12:56:26.118: INFO: Pod "downwardapi-volume-b23416d4-50a9-4c50-b4fd-54128131de20": Phase="Pending", Reason="", readiness=false. Elapsed: 3.886856ms
Apr 11 12:56:28.122: INFO: Pod "downwardapi-volume-b23416d4-50a9-4c50-b4fd-54128131de20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007130316s
Apr 11 12:56:30.126: INFO: Pod "downwardapi-volume-b23416d4-50a9-4c50-b4fd-54128131de20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011637548s
STEP: Saw pod success
Apr 11 12:56:30.126: INFO: Pod "downwardapi-volume-b23416d4-50a9-4c50-b4fd-54128131de20" satisfied condition "success or failure"
Apr 11 12:56:30.130: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b23416d4-50a9-4c50-b4fd-54128131de20 container client-container:
STEP: delete the pod
Apr 11 12:56:30.150: INFO: Waiting for pod downwardapi-volume-b23416d4-50a9-4c50-b4fd-54128131de20 to disappear
Apr 11 12:56:30.154: INFO: Pod downwardapi-volume-b23416d4-50a9-4c50-b4fd-54128131de20 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 12:56:30.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3665" for this suite.
Apr 11 12:56:36.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 12:56:36.241: INFO: namespace projected-3665 deletion completed in 6.083927738s

• [SLOW TEST:10.215 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 12:56:36.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 11 12:56:36.348: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85da6fde-18f0-4c03-80af-e0b6501106dc" in namespace "downward-api-9677" to be "success or failure"
Apr 11 12:56:36.358: INFO: Pod "downwardapi-volume-85da6fde-18f0-4c03-80af-e0b6501106dc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.529023ms
Apr 11 12:56:38.362: INFO: Pod "downwardapi-volume-85da6fde-18f0-4c03-80af-e0b6501106dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013563207s
Apr 11 12:56:40.367: INFO: Pod "downwardapi-volume-85da6fde-18f0-4c03-80af-e0b6501106dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018164971s
STEP: Saw pod success
Apr 11 12:56:40.367: INFO: Pod "downwardapi-volume-85da6fde-18f0-4c03-80af-e0b6501106dc" satisfied condition "success or failure"
Apr 11 12:56:40.370: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-85da6fde-18f0-4c03-80af-e0b6501106dc container client-container:
STEP: delete the pod
Apr 11 12:56:40.432: INFO: Waiting for pod downwardapi-volume-85da6fde-18f0-4c03-80af-e0b6501106dc to disappear
Apr 11 12:56:40.444: INFO: Pod downwardapi-volume-85da6fde-18f0-4c03-80af-e0b6501106dc no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 12:56:40.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9677" for this suite.
Apr 11 12:56:46.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 12:56:46.534: INFO: namespace downward-api-9677 deletion completed in 6.085718039s

• [SLOW TEST:10.293 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 12:56:46.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 11 12:56:54.666: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 11 12:56:54.672: INFO: Pod pod-with-prestop-http-hook still exists
Apr 11 12:56:56.673: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 11 12:56:56.676: INFO: Pod pod-with-prestop-http-hook still exists
Apr 11 12:56:58.673: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 11 12:56:58.677: INFO: Pod pod-with-prestop-http-hook still exists
Apr 11 12:57:00.673: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 11 12:57:00.676: INFO: Pod pod-with-prestop-http-hook still exists
Apr 11 12:57:02.673: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 11 12:57:02.676: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 12:57:02.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3965" for this suite.
Apr 11 12:57:24.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 12:57:24.798: INFO: namespace container-lifecycle-hook-3965 deletion completed in 22.102867013s

• [SLOW TEST:38.264 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 12:57:24.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 11 12:57:24.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2474597e-b393-436e-b747-fcc56bc17fcf" in namespace "projected-6168" to be "success or failure"
Apr 11 12:57:24.893: INFO: Pod "downwardapi-volume-2474597e-b393-436e-b747-fcc56bc17fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.141957ms
Apr 11 12:57:26.897: INFO: Pod "downwardapi-volume-2474597e-b393-436e-b747-fcc56bc17fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014330653s
Apr 11 12:57:28.902: INFO: Pod "downwardapi-volume-2474597e-b393-436e-b747-fcc56bc17fcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018886712s
STEP: Saw pod success
Apr 11 12:57:28.902: INFO: Pod "downwardapi-volume-2474597e-b393-436e-b747-fcc56bc17fcf" satisfied condition "success or failure"
Apr 11 12:57:28.905: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2474597e-b393-436e-b747-fcc56bc17fcf container client-container:
STEP: delete the pod
Apr 11 12:57:28.936: INFO: Waiting for pod downwardapi-volume-2474597e-b393-436e-b747-fcc56bc17fcf to disappear
Apr 11 12:57:28.943: INFO: Pod downwardapi-volume-2474597e-b393-436e-b747-fcc56bc17fcf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 12:57:28.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6168" for this suite.
Apr 11 12:57:34.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 12:57:35.050: INFO: namespace projected-6168 deletion completed in 6.10420513s

• [SLOW TEST:10.251 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Services
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 12:57:35.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 12:57:35.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2748" for this suite.
Apr 11 12:57:41.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 12:57:41.228: INFO: namespace services-2748 deletion completed in 6.103924684s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.178 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 12:57:41.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Apr 11 12:57:41.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4056'
Apr 11 12:57:43.708: INFO: stderr: ""
Apr 11 12:57:43.708: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 11 12:57:43.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4056'
Apr 11 12:57:43.820: INFO: stderr: ""
Apr 11 12:57:43.821: INFO: stdout: "update-demo-nautilus-bnstd update-demo-nautilus-bt4lz "
Apr 11 12:57:43.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bnstd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4056'
Apr 11 12:57:43.918: INFO: stderr: ""
Apr 11 12:57:43.918: INFO: stdout: ""
Apr 11 12:57:43.918: INFO: update-demo-nautilus-bnstd is created but not running
Apr 11 12:57:48.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4056'
Apr 11 12:57:49.014: INFO: stderr: ""
Apr 11 12:57:49.014: INFO: stdout: "update-demo-nautilus-bnstd update-demo-nautilus-bt4lz "
Apr 11 12:57:49.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bnstd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4056'
Apr 11 12:57:49.106: INFO: stderr: ""
Apr 11 12:57:49.106: INFO: stdout: "true"
Apr 11 12:57:49.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bnstd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4056'
Apr 11 12:57:49.213: INFO: stderr: ""
Apr 11 12:57:49.213: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 11 12:57:49.213: INFO: validating pod update-demo-nautilus-bnstd
Apr 11 12:57:49.216: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 11 12:57:49.217: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 11 12:57:49.217: INFO: update-demo-nautilus-bnstd is verified up and running
Apr 11 12:57:49.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bt4lz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4056'
Apr 11 12:57:49.308: INFO: stderr: ""
Apr 11 12:57:49.308: INFO: stdout: "true"
Apr 11 12:57:49.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bt4lz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4056'
Apr 11 12:57:49.406: INFO: stderr: ""
Apr 11 12:57:49.406: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 11 12:57:49.406: INFO: validating pod update-demo-nautilus-bt4lz
Apr 11 12:57:49.410: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 11 12:57:49.410: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 11 12:57:49.410: INFO: update-demo-nautilus-bt4lz is verified up and running
STEP: scaling down the replication controller
Apr 11 12:57:49.414: INFO: scanned /root for discovery docs:
Apr 11 12:57:49.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4056'
Apr 11 12:57:50.526: INFO: stderr: ""
Apr 11 12:57:50.526: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 11 12:57:50.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4056'
Apr 11 12:57:50.614: INFO: stderr: ""
Apr 11 12:57:50.614: INFO: stdout: "update-demo-nautilus-bnstd update-demo-nautilus-bt4lz "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 11 12:57:55.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4056'
Apr 11 12:57:55.708: INFO: stderr: ""
Apr 11 12:57:55.708: INFO: stdout: "update-demo-nautilus-bnstd update-demo-nautilus-bt4lz "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 11 12:58:00.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4056'
Apr 11 12:58:00.825: INFO: stderr: ""
Apr 11 12:58:00.825: INFO: stdout: "update-demo-nautilus-bnstd update-demo-nautilus-bt4lz "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 11 12:58:05.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4056'
Apr 11 12:58:05.916: INFO: stderr: ""
Apr 11 12:58:05.916: INFO: stdout: "update-demo-nautilus-bnstd "
Apr 11 12:58:05.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bnstd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4056'
Apr 11 12:58:06.015: INFO: stderr: ""
Apr 11 12:58:06.015: INFO: stdout: "true"
Apr 11 12:58:06.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bnstd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4056'
Apr 11 12:58:06.118: INFO: stderr: ""
Apr 11 12:58:06.118: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 11 12:58:06.118: INFO: validating pod update-demo-nautilus-bnstd
Apr 11 12:58:06.121: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 11 12:58:06.121: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 11 12:58:06.121: INFO: update-demo-nautilus-bnstd is verified up and running
STEP: scaling up the replication controller
Apr 11 12:58:06.123: INFO: scanned /root for discovery docs:
Apr 11 12:58:06.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4056'
Apr 11 12:58:07.248: INFO: stderr: ""
Apr 11 12:58:07.248: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 11 12:58:07.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4056' Apr 11 12:58:07.331: INFO: stderr: "" Apr 11 12:58:07.331: INFO: stdout: "update-demo-nautilus-26bjs update-demo-nautilus-bnstd " Apr 11 12:58:07.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-26bjs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4056' Apr 11 12:58:07.409: INFO: stderr: "" Apr 11 12:58:07.409: INFO: stdout: "" Apr 11 12:58:07.409: INFO: update-demo-nautilus-26bjs is created but not running Apr 11 12:58:12.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4056' Apr 11 12:58:12.505: INFO: stderr: "" Apr 11 12:58:12.505: INFO: stdout: "update-demo-nautilus-26bjs update-demo-nautilus-bnstd " Apr 11 12:58:12.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-26bjs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4056' Apr 11 12:58:12.604: INFO: stderr: "" Apr 11 12:58:12.604: INFO: stdout: "true" Apr 11 12:58:12.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-26bjs -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4056' Apr 11 12:58:12.711: INFO: stderr: "" Apr 11 12:58:12.711: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 11 12:58:12.711: INFO: validating pod update-demo-nautilus-26bjs Apr 11 12:58:12.715: INFO: got data: { "image": "nautilus.jpg" } Apr 11 12:58:12.715: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 11 12:58:12.715: INFO: update-demo-nautilus-26bjs is verified up and running Apr 11 12:58:12.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bnstd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4056' Apr 11 12:58:12.811: INFO: stderr: "" Apr 11 12:58:12.811: INFO: stdout: "true" Apr 11 12:58:12.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bnstd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4056' Apr 11 12:58:12.899: INFO: stderr: "" Apr 11 12:58:12.899: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 11 12:58:12.899: INFO: validating pod update-demo-nautilus-bnstd Apr 11 12:58:12.902: INFO: got data: { "image": "nautilus.jpg" } Apr 11 12:58:12.902: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 11 12:58:12.902: INFO: update-demo-nautilus-bnstd is verified up and running STEP: using delete to clean up resources Apr 11 12:58:12.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4056' Apr 11 12:58:12.999: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 11 12:58:12.999: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 11 12:58:12.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4056' Apr 11 12:58:13.105: INFO: stderr: "No resources found.\n" Apr 11 12:58:13.105: INFO: stdout: "" Apr 11 12:58:13.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4056 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 11 12:58:13.211: INFO: stderr: "" Apr 11 12:58:13.211: INFO: stdout: "update-demo-nautilus-26bjs\nupdate-demo-nautilus-bnstd\n" Apr 11 12:58:13.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4056' Apr 11 12:58:13.812: INFO: stderr: "No resources found.\n" Apr 11 12:58:13.812: INFO: stdout: "" Apr 11 12:58:13.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4056 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 11 12:58:13.908: INFO: stderr: "" Apr 11 12:58:13.908: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 12:58:13.908: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4056" for this suite. Apr 11 12:58:35.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 12:58:36.042: INFO: namespace kubectl-4056 deletion completed in 22.13127209s • [SLOW TEST:54.813 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 12:58:36.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-83ea3336-23c2-4a13-82df-cf5f8bf91f97 STEP: Creating a pod to test consume configMaps Apr 11 12:58:36.104: INFO: Waiting up to 5m0s for pod "pod-configmaps-d878ef35-8d5c-4714-8ddc-f1cdc15e6c7b" in namespace "configmap-1484" to be "success or failure" Apr 11 12:58:36.116: INFO: Pod 
"pod-configmaps-d878ef35-8d5c-4714-8ddc-f1cdc15e6c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.861851ms Apr 11 12:58:38.120: INFO: Pod "pod-configmaps-d878ef35-8d5c-4714-8ddc-f1cdc15e6c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016666871s Apr 11 12:58:40.125: INFO: Pod "pod-configmaps-d878ef35-8d5c-4714-8ddc-f1cdc15e6c7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021516841s STEP: Saw pod success Apr 11 12:58:40.125: INFO: Pod "pod-configmaps-d878ef35-8d5c-4714-8ddc-f1cdc15e6c7b" satisfied condition "success or failure" Apr 11 12:58:40.128: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-d878ef35-8d5c-4714-8ddc-f1cdc15e6c7b container configmap-volume-test: STEP: delete the pod Apr 11 12:58:40.148: INFO: Waiting for pod pod-configmaps-d878ef35-8d5c-4714-8ddc-f1cdc15e6c7b to disappear Apr 11 12:58:40.151: INFO: Pod pod-configmaps-d878ef35-8d5c-4714-8ddc-f1cdc15e6c7b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 12:58:40.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1484" for this suite. 
Apr 11 12:58:46.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 12:58:46.290: INFO: namespace configmap-1484 deletion completed in 6.135390241s • [SLOW TEST:10.248 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 12:58:46.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4314 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4314 STEP: Waiting until all stateful set ss replicas 
will be running in namespace statefulset-4314 Apr 11 12:58:46.368: INFO: Found 0 stateful pods, waiting for 1 Apr 11 12:58:56.373: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 11 12:58:56.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4314 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 11 12:58:56.699: INFO: stderr: "I0411 12:58:56.510579 614 log.go:172] (0xc000924420) (0xc0007ac6e0) Create stream\nI0411 12:58:56.510637 614 log.go:172] (0xc000924420) (0xc0007ac6e0) Stream added, broadcasting: 1\nI0411 12:58:56.513016 614 log.go:172] (0xc000924420) Reply frame received for 1\nI0411 12:58:56.513069 614 log.go:172] (0xc000924420) (0xc00060e140) Create stream\nI0411 12:58:56.513089 614 log.go:172] (0xc000924420) (0xc00060e140) Stream added, broadcasting: 3\nI0411 12:58:56.514227 614 log.go:172] (0xc000924420) Reply frame received for 3\nI0411 12:58:56.514260 614 log.go:172] (0xc000924420) (0xc00060e1e0) Create stream\nI0411 12:58:56.514269 614 log.go:172] (0xc000924420) (0xc00060e1e0) Stream added, broadcasting: 5\nI0411 12:58:56.515112 614 log.go:172] (0xc000924420) Reply frame received for 5\nI0411 12:58:56.595534 614 log.go:172] (0xc000924420) Data frame received for 5\nI0411 12:58:56.595563 614 log.go:172] (0xc00060e1e0) (5) Data frame handling\nI0411 12:58:56.595583 614 log.go:172] (0xc00060e1e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0411 12:58:56.692474 614 log.go:172] (0xc000924420) Data frame received for 3\nI0411 12:58:56.692509 614 log.go:172] (0xc00060e140) (3) Data frame handling\nI0411 12:58:56.692540 614 log.go:172] (0xc000924420) Data frame received for 5\nI0411 12:58:56.692564 614 log.go:172] (0xc00060e1e0) (5) Data frame handling\nI0411 12:58:56.692592 614 log.go:172] (0xc00060e140) (3) Data frame 
sent\nI0411 12:58:56.692600 614 log.go:172] (0xc000924420) Data frame received for 3\nI0411 12:58:56.692604 614 log.go:172] (0xc00060e140) (3) Data frame handling\nI0411 12:58:56.694744 614 log.go:172] (0xc000924420) Data frame received for 1\nI0411 12:58:56.694758 614 log.go:172] (0xc0007ac6e0) (1) Data frame handling\nI0411 12:58:56.694768 614 log.go:172] (0xc0007ac6e0) (1) Data frame sent\nI0411 12:58:56.694787 614 log.go:172] (0xc000924420) (0xc0007ac6e0) Stream removed, broadcasting: 1\nI0411 12:58:56.694858 614 log.go:172] (0xc000924420) Go away received\nI0411 12:58:56.695102 614 log.go:172] (0xc000924420) (0xc0007ac6e0) Stream removed, broadcasting: 1\nI0411 12:58:56.695113 614 log.go:172] (0xc000924420) (0xc00060e140) Stream removed, broadcasting: 3\nI0411 12:58:56.695118 614 log.go:172] (0xc000924420) (0xc00060e1e0) Stream removed, broadcasting: 5\n" Apr 11 12:58:56.699: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 11 12:58:56.699: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 11 12:58:56.708: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 11 12:59:06.723: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 11 12:59:06.723: INFO: Waiting for statefulset status.replicas updated to 0 Apr 11 12:59:06.741: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999757s Apr 11 12:59:07.745: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992125761s Apr 11 12:59:08.751: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98740142s Apr 11 12:59:09.756: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.982102643s Apr 11 12:59:10.761: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.976783742s Apr 11 12:59:11.766: INFO: Verifying statefulset ss doesn't scale past 1 for another 
4.971969422s Apr 11 12:59:12.770: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.96693863s Apr 11 12:59:13.775: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.962332039s Apr 11 12:59:14.780: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.957510626s Apr 11 12:59:15.785: INFO: Verifying statefulset ss doesn't scale past 1 for another 952.438094ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4314 Apr 11 12:59:16.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4314 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 11 12:59:17.023: INFO: stderr: "I0411 12:59:16.913667 635 log.go:172] (0xc00099e370) (0xc0009306e0) Create stream\nI0411 12:59:16.913714 635 log.go:172] (0xc00099e370) (0xc0009306e0) Stream added, broadcasting: 1\nI0411 12:59:16.915694 635 log.go:172] (0xc00099e370) Reply frame received for 1\nI0411 12:59:16.915723 635 log.go:172] (0xc00099e370) (0xc0006b60a0) Create stream\nI0411 12:59:16.915738 635 log.go:172] (0xc00099e370) (0xc0006b60a0) Stream added, broadcasting: 3\nI0411 12:59:16.916524 635 log.go:172] (0xc00099e370) Reply frame received for 3\nI0411 12:59:16.916563 635 log.go:172] (0xc00099e370) (0xc000930780) Create stream\nI0411 12:59:16.916575 635 log.go:172] (0xc00099e370) (0xc000930780) Stream added, broadcasting: 5\nI0411 12:59:16.917537 635 log.go:172] (0xc00099e370) Reply frame received for 5\nI0411 12:59:17.015779 635 log.go:172] (0xc00099e370) Data frame received for 3\nI0411 12:59:17.015809 635 log.go:172] (0xc0006b60a0) (3) Data frame handling\nI0411 12:59:17.015825 635 log.go:172] (0xc0006b60a0) (3) Data frame sent\nI0411 12:59:17.015833 635 log.go:172] (0xc00099e370) Data frame received for 3\nI0411 12:59:17.015840 635 log.go:172] (0xc0006b60a0) (3) Data frame handling\nI0411 12:59:17.016724 635 log.go:172] (0xc00099e370) 
Data frame received for 5\nI0411 12:59:17.016742 635 log.go:172] (0xc000930780) (5) Data frame handling\nI0411 12:59:17.016754 635 log.go:172] (0xc000930780) (5) Data frame sent\nI0411 12:59:17.016761 635 log.go:172] (0xc00099e370) Data frame received for 5\nI0411 12:59:17.016767 635 log.go:172] (0xc000930780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0411 12:59:17.018703 635 log.go:172] (0xc00099e370) Data frame received for 1\nI0411 12:59:17.018718 635 log.go:172] (0xc0009306e0) (1) Data frame handling\nI0411 12:59:17.018725 635 log.go:172] (0xc0009306e0) (1) Data frame sent\nI0411 12:59:17.018892 635 log.go:172] (0xc00099e370) (0xc0009306e0) Stream removed, broadcasting: 1\nI0411 12:59:17.018938 635 log.go:172] (0xc00099e370) Go away received\nI0411 12:59:17.019343 635 log.go:172] (0xc00099e370) (0xc0009306e0) Stream removed, broadcasting: 1\nI0411 12:59:17.019364 635 log.go:172] (0xc00099e370) (0xc0006b60a0) Stream removed, broadcasting: 3\nI0411 12:59:17.019373 635 log.go:172] (0xc00099e370) (0xc000930780) Stream removed, broadcasting: 5\n" Apr 11 12:59:17.023: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 11 12:59:17.024: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 11 12:59:17.027: INFO: Found 1 stateful pods, waiting for 3 Apr 11 12:59:27.033: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 11 12:59:27.033: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 11 12:59:27.033: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 11 12:59:27.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4314 ss-0 -- /bin/sh -x -c 
mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 11 12:59:27.271: INFO: stderr: "I0411 12:59:27.174708 655 log.go:172] (0xc0008ae420) (0xc0002e0820) Create stream\nI0411 12:59:27.174763 655 log.go:172] (0xc0008ae420) (0xc0002e0820) Stream added, broadcasting: 1\nI0411 12:59:27.177654 655 log.go:172] (0xc0008ae420) Reply frame received for 1\nI0411 12:59:27.177713 655 log.go:172] (0xc0008ae420) (0xc00074e000) Create stream\nI0411 12:59:27.177741 655 log.go:172] (0xc0008ae420) (0xc00074e000) Stream added, broadcasting: 3\nI0411 12:59:27.178940 655 log.go:172] (0xc0008ae420) Reply frame received for 3\nI0411 12:59:27.178996 655 log.go:172] (0xc0008ae420) (0xc0002e08c0) Create stream\nI0411 12:59:27.179013 655 log.go:172] (0xc0008ae420) (0xc0002e08c0) Stream added, broadcasting: 5\nI0411 12:59:27.180148 655 log.go:172] (0xc0008ae420) Reply frame received for 5\nI0411 12:59:27.265311 655 log.go:172] (0xc0008ae420) Data frame received for 3\nI0411 12:59:27.265347 655 log.go:172] (0xc00074e000) (3) Data frame handling\nI0411 12:59:27.265359 655 log.go:172] (0xc00074e000) (3) Data frame sent\nI0411 12:59:27.265368 655 log.go:172] (0xc0008ae420) Data frame received for 3\nI0411 12:59:27.265374 655 log.go:172] (0xc00074e000) (3) Data frame handling\nI0411 12:59:27.265403 655 log.go:172] (0xc0008ae420) Data frame received for 5\nI0411 12:59:27.265427 655 log.go:172] (0xc0002e08c0) (5) Data frame handling\nI0411 12:59:27.265441 655 log.go:172] (0xc0002e08c0) (5) Data frame sent\nI0411 12:59:27.265447 655 log.go:172] (0xc0008ae420) Data frame received for 5\nI0411 12:59:27.265451 655 log.go:172] (0xc0002e08c0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0411 12:59:27.266898 655 log.go:172] (0xc0008ae420) Data frame received for 1\nI0411 12:59:27.266921 655 log.go:172] (0xc0002e0820) (1) Data frame handling\nI0411 12:59:27.266931 655 log.go:172] (0xc0002e0820) (1) Data frame sent\nI0411 12:59:27.266943 655 log.go:172] (0xc0008ae420) 
(0xc0002e0820) Stream removed, broadcasting: 1\nI0411 12:59:27.267241 655 log.go:172] (0xc0008ae420) (0xc0002e0820) Stream removed, broadcasting: 1\nI0411 12:59:27.267257 655 log.go:172] (0xc0008ae420) (0xc00074e000) Stream removed, broadcasting: 3\nI0411 12:59:27.267267 655 log.go:172] (0xc0008ae420) (0xc0002e08c0) Stream removed, broadcasting: 5\n" Apr 11 12:59:27.271: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 11 12:59:27.271: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 11 12:59:27.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4314 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 11 12:59:27.549: INFO: stderr: "I0411 12:59:27.406142 675 log.go:172] (0xc00098a370) (0xc0003dc780) Create stream\nI0411 12:59:27.406187 675 log.go:172] (0xc00098a370) (0xc0003dc780) Stream added, broadcasting: 1\nI0411 12:59:27.408763 675 log.go:172] (0xc00098a370) Reply frame received for 1\nI0411 12:59:27.408830 675 log.go:172] (0xc00098a370) (0xc0003dc820) Create stream\nI0411 12:59:27.408856 675 log.go:172] (0xc00098a370) (0xc0003dc820) Stream added, broadcasting: 3\nI0411 12:59:27.409973 675 log.go:172] (0xc00098a370) Reply frame received for 3\nI0411 12:59:27.409999 675 log.go:172] (0xc00098a370) (0xc000998000) Create stream\nI0411 12:59:27.410007 675 log.go:172] (0xc00098a370) (0xc000998000) Stream added, broadcasting: 5\nI0411 12:59:27.410785 675 log.go:172] (0xc00098a370) Reply frame received for 5\nI0411 12:59:27.474331 675 log.go:172] (0xc00098a370) Data frame received for 5\nI0411 12:59:27.474353 675 log.go:172] (0xc000998000) (5) Data frame handling\nI0411 12:59:27.474367 675 log.go:172] (0xc000998000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0411 12:59:27.542430 675 log.go:172] (0xc00098a370) Data frame received for 5\nI0411 
12:59:27.542462 675 log.go:172] (0xc000998000) (5) Data frame handling\nI0411 12:59:27.542484 675 log.go:172] (0xc00098a370) Data frame received for 3\nI0411 12:59:27.542491 675 log.go:172] (0xc0003dc820) (3) Data frame handling\nI0411 12:59:27.542499 675 log.go:172] (0xc0003dc820) (3) Data frame sent\nI0411 12:59:27.542506 675 log.go:172] (0xc00098a370) Data frame received for 3\nI0411 12:59:27.542511 675 log.go:172] (0xc0003dc820) (3) Data frame handling\nI0411 12:59:27.545084 675 log.go:172] (0xc00098a370) Data frame received for 1\nI0411 12:59:27.545108 675 log.go:172] (0xc0003dc780) (1) Data frame handling\nI0411 12:59:27.545253 675 log.go:172] (0xc0003dc780) (1) Data frame sent\nI0411 12:59:27.545267 675 log.go:172] (0xc00098a370) (0xc0003dc780) Stream removed, broadcasting: 1\nI0411 12:59:27.545280 675 log.go:172] (0xc00098a370) Go away received\nI0411 12:59:27.545693 675 log.go:172] (0xc00098a370) (0xc0003dc780) Stream removed, broadcasting: 1\nI0411 12:59:27.545710 675 log.go:172] (0xc00098a370) (0xc0003dc820) Stream removed, broadcasting: 3\nI0411 12:59:27.545716 675 log.go:172] (0xc00098a370) (0xc000998000) Stream removed, broadcasting: 5\n" Apr 11 12:59:27.549: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 11 12:59:27.549: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 11 12:59:27.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4314 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 11 12:59:27.805: INFO: stderr: "I0411 12:59:27.668774 697 log.go:172] (0xc00070e420) (0xc0002e0820) Create stream\nI0411 12:59:27.668858 697 log.go:172] (0xc00070e420) (0xc0002e0820) Stream added, broadcasting: 1\nI0411 12:59:27.672623 697 log.go:172] (0xc00070e420) Reply frame received for 1\nI0411 12:59:27.672689 697 log.go:172] (0xc00070e420) (0xc0008ba000) Create 
stream\nI0411 12:59:27.672724 697 log.go:172] (0xc00070e420) (0xc0008ba000) Stream added, broadcasting: 3\nI0411 12:59:27.674165 697 log.go:172] (0xc00070e420) Reply frame received for 3\nI0411 12:59:27.674199 697 log.go:172] (0xc00070e420) (0xc0008ba0a0) Create stream\nI0411 12:59:27.674208 697 log.go:172] (0xc00070e420) (0xc0008ba0a0) Stream added, broadcasting: 5\nI0411 12:59:27.675084 697 log.go:172] (0xc00070e420) Reply frame received for 5\nI0411 12:59:27.738167 697 log.go:172] (0xc00070e420) Data frame received for 5\nI0411 12:59:27.738188 697 log.go:172] (0xc0008ba0a0) (5) Data frame handling\nI0411 12:59:27.738200 697 log.go:172] (0xc0008ba0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0411 12:59:27.798763 697 log.go:172] (0xc00070e420) Data frame received for 5\nI0411 12:59:27.798832 697 log.go:172] (0xc0008ba0a0) (5) Data frame handling\nI0411 12:59:27.798868 697 log.go:172] (0xc00070e420) Data frame received for 3\nI0411 12:59:27.798894 697 log.go:172] (0xc0008ba000) (3) Data frame handling\nI0411 12:59:27.798929 697 log.go:172] (0xc0008ba000) (3) Data frame sent\nI0411 12:59:27.798956 697 log.go:172] (0xc00070e420) Data frame received for 3\nI0411 12:59:27.798977 697 log.go:172] (0xc0008ba000) (3) Data frame handling\nI0411 12:59:27.800885 697 log.go:172] (0xc00070e420) Data frame received for 1\nI0411 12:59:27.800905 697 log.go:172] (0xc0002e0820) (1) Data frame handling\nI0411 12:59:27.800911 697 log.go:172] (0xc0002e0820) (1) Data frame sent\nI0411 12:59:27.800925 697 log.go:172] (0xc00070e420) (0xc0002e0820) Stream removed, broadcasting: 1\nI0411 12:59:27.800956 697 log.go:172] (0xc00070e420) Go away received\nI0411 12:59:27.801309 697 log.go:172] (0xc00070e420) (0xc0002e0820) Stream removed, broadcasting: 1\nI0411 12:59:27.801322 697 log.go:172] (0xc00070e420) (0xc0008ba000) Stream removed, broadcasting: 3\nI0411 12:59:27.801327 697 log.go:172] (0xc00070e420) (0xc0008ba0a0) Stream removed, broadcasting: 5\n" Apr 11 
12:59:27.805: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 11 12:59:27.805: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 11 12:59:27.805: INFO: Waiting for statefulset status.replicas updated to 0 Apr 11 12:59:27.809: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Apr 11 12:59:37.818: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 11 12:59:37.818: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 11 12:59:37.818: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 11 12:59:37.830: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999953s Apr 11 12:59:38.838: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993870787s Apr 11 12:59:39.843: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986029995s Apr 11 12:59:40.848: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980618013s Apr 11 12:59:41.865: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975764934s Apr 11 12:59:42.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.958499589s Apr 11 12:59:43.875: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.95461204s Apr 11 12:59:44.880: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.948590206s Apr 11 12:59:45.897: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.943829299s Apr 11 12:59:46.902: INFO: Verifying statefulset ss doesn't scale past 3 for another 926.709821ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4314 Apr 11 12:59:47.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4314 ss-0 -- /bin/sh -x -c 
mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 11 12:59:48.105: INFO: stderr: "I0411 12:59:48.033417 717 log.go:172] (0xc000a420b0) (0xc000a306e0) Create stream\nI0411 12:59:48.033477 717 log.go:172] (0xc000a420b0) (0xc000a306e0) Stream added, broadcasting: 1\nI0411 12:59:48.035839 717 log.go:172] (0xc000a420b0) Reply frame received for 1\nI0411 12:59:48.035891 717 log.go:172] (0xc000a420b0) (0xc0006d01e0) Create stream\nI0411 12:59:48.035907 717 log.go:172] (0xc000a420b0) (0xc0006d01e0) Stream added, broadcasting: 3\nI0411 12:59:48.036785 717 log.go:172] (0xc000a420b0) Reply frame received for 3\nI0411 12:59:48.036819 717 log.go:172] (0xc000a420b0) (0xc000a30780) Create stream\nI0411 12:59:48.036829 717 log.go:172] (0xc000a420b0) (0xc000a30780) Stream added, broadcasting: 5\nI0411 12:59:48.038034 717 log.go:172] (0xc000a420b0) Reply frame received for 5\nI0411 12:59:48.098650 717 log.go:172] (0xc000a420b0) Data frame received for 5\nI0411 12:59:48.098695 717 log.go:172] (0xc000a30780) (5) Data frame handling\nI0411 12:59:48.098709 717 log.go:172] (0xc000a30780) (5) Data frame sent\nI0411 12:59:48.098721 717 log.go:172] (0xc000a420b0) Data frame received for 5\nI0411 12:59:48.098732 717 log.go:172] (0xc000a30780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0411 12:59:48.098781 717 log.go:172] (0xc000a420b0) Data frame received for 3\nI0411 12:59:48.098810 717 log.go:172] (0xc0006d01e0) (3) Data frame handling\nI0411 12:59:48.098830 717 log.go:172] (0xc0006d01e0) (3) Data frame sent\nI0411 12:59:48.098841 717 log.go:172] (0xc000a420b0) Data frame received for 3\nI0411 12:59:48.098854 717 log.go:172] (0xc0006d01e0) (3) Data frame handling\nI0411 12:59:48.099916 717 log.go:172] (0xc000a420b0) Data frame received for 1\nI0411 12:59:48.099947 717 log.go:172] (0xc000a306e0) (1) Data frame handling\nI0411 12:59:48.099975 717 log.go:172] (0xc000a306e0) (1) Data frame sent\nI0411 12:59:48.099999 717 log.go:172] (0xc000a420b0) 
(0xc000a306e0) Stream removed, broadcasting: 1\nI0411 12:59:48.100134 717 log.go:172] (0xc000a420b0) Go away received\nI0411 12:59:48.100454 717 log.go:172] (0xc000a420b0) (0xc000a306e0) Stream removed, broadcasting: 1\nI0411 12:59:48.100470 717 log.go:172] (0xc000a420b0) (0xc0006d01e0) Stream removed, broadcasting: 3\nI0411 12:59:48.100478 717 log.go:172] (0xc000a420b0) (0xc000a30780) Stream removed, broadcasting: 5\n" Apr 11 12:59:48.105: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 11 12:59:48.105: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 11 12:59:48.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4314 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 11 12:59:48.307: INFO: stderr: "I0411 12:59:48.243426 738 log.go:172] (0xc000a0e4d0) (0xc000366820) Create stream\nI0411 12:59:48.243481 738 log.go:172] (0xc000a0e4d0) (0xc000366820) Stream added, broadcasting: 1\nI0411 12:59:48.246653 738 log.go:172] (0xc000a0e4d0) Reply frame received for 1\nI0411 12:59:48.247417 738 log.go:172] (0xc000a0e4d0) (0xc00071e000) Create stream\nI0411 12:59:48.247457 738 log.go:172] (0xc000a0e4d0) (0xc00071e000) Stream added, broadcasting: 3\nI0411 12:59:48.249761 738 log.go:172] (0xc000a0e4d0) Reply frame received for 3\nI0411 12:59:48.249791 738 log.go:172] (0xc000a0e4d0) (0xc00071e0a0) Create stream\nI0411 12:59:48.249801 738 log.go:172] (0xc000a0e4d0) (0xc00071e0a0) Stream added, broadcasting: 5\nI0411 12:59:48.250823 738 log.go:172] (0xc000a0e4d0) Reply frame received for 5\nI0411 12:59:48.300999 738 log.go:172] (0xc000a0e4d0) Data frame received for 3\nI0411 12:59:48.301027 738 log.go:172] (0xc00071e000) (3) Data frame handling\nI0411 12:59:48.301047 738 log.go:172] (0xc00071e000) (3) Data frame sent\nI0411 12:59:48.301058 738 log.go:172] (0xc000a0e4d0) Data frame 
received for 3\nI0411 12:59:48.301068 738 log.go:172] (0xc00071e000) (3) Data frame handling\nI0411 12:59:48.301480 738 log.go:172] (0xc000a0e4d0) Data frame received for 5\nI0411 12:59:48.301503 738 log.go:172] (0xc00071e0a0) (5) Data frame handling\nI0411 12:59:48.301530 738 log.go:172] (0xc00071e0a0) (5) Data frame sent\nI0411 12:59:48.301545 738 log.go:172] (0xc000a0e4d0) Data frame received for 5\nI0411 12:59:48.301559 738 log.go:172] (0xc00071e0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0411 12:59:48.303347 738 log.go:172] (0xc000a0e4d0) Data frame received for 1\nI0411 12:59:48.303365 738 log.go:172] (0xc000366820) (1) Data frame handling\nI0411 12:59:48.303378 738 log.go:172] (0xc000366820) (1) Data frame sent\nI0411 12:59:48.303392 738 log.go:172] (0xc000a0e4d0) (0xc000366820) Stream removed, broadcasting: 1\nI0411 12:59:48.303458 738 log.go:172] (0xc000a0e4d0) Go away received\nI0411 12:59:48.303618 738 log.go:172] (0xc000a0e4d0) (0xc000366820) Stream removed, broadcasting: 1\nI0411 12:59:48.303635 738 log.go:172] (0xc000a0e4d0) (0xc00071e000) Stream removed, broadcasting: 3\nI0411 12:59:48.303646 738 log.go:172] (0xc000a0e4d0) (0xc00071e0a0) Stream removed, broadcasting: 5\n" Apr 11 12:59:48.307: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 11 12:59:48.307: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 11 12:59:48.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4314 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 11 12:59:48.502: INFO: stderr: "I0411 12:59:48.434838 758 log.go:172] (0xc000116f20) (0xc000870820) Create stream\nI0411 12:59:48.434947 758 log.go:172] (0xc000116f20) (0xc000870820) Stream added, broadcasting: 1\nI0411 12:59:48.438268 758 log.go:172] (0xc000116f20) Reply frame received for 1\nI0411 
12:59:48.438326 758 log.go:172] (0xc000116f20) (0xc00086a000) Create stream\nI0411 12:59:48.438354 758 log.go:172] (0xc000116f20) (0xc00086a000) Stream added, broadcasting: 3\nI0411 12:59:48.439403 758 log.go:172] (0xc000116f20) Reply frame received for 3\nI0411 12:59:48.439438 758 log.go:172] (0xc000116f20) (0xc0008708c0) Create stream\nI0411 12:59:48.439453 758 log.go:172] (0xc000116f20) (0xc0008708c0) Stream added, broadcasting: 5\nI0411 12:59:48.440435 758 log.go:172] (0xc000116f20) Reply frame received for 5\nI0411 12:59:48.497292 758 log.go:172] (0xc000116f20) Data frame received for 3\nI0411 12:59:48.497318 758 log.go:172] (0xc00086a000) (3) Data frame handling\nI0411 12:59:48.497332 758 log.go:172] (0xc00086a000) (3) Data frame sent\nI0411 12:59:48.497340 758 log.go:172] (0xc000116f20) Data frame received for 3\nI0411 12:59:48.497344 758 log.go:172] (0xc00086a000) (3) Data frame handling\nI0411 12:59:48.497354 758 log.go:172] (0xc000116f20) Data frame received for 5\nI0411 12:59:48.497364 758 log.go:172] (0xc0008708c0) (5) Data frame handling\nI0411 12:59:48.497374 758 log.go:172] (0xc0008708c0) (5) Data frame sent\nI0411 12:59:48.497382 758 log.go:172] (0xc000116f20) Data frame received for 5\nI0411 12:59:48.497390 758 log.go:172] (0xc0008708c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0411 12:59:48.498720 758 log.go:172] (0xc000116f20) Data frame received for 1\nI0411 12:59:48.498739 758 log.go:172] (0xc000870820) (1) Data frame handling\nI0411 12:59:48.498760 758 log.go:172] (0xc000870820) (1) Data frame sent\nI0411 12:59:48.498782 758 log.go:172] (0xc000116f20) (0xc000870820) Stream removed, broadcasting: 1\nI0411 12:59:48.498800 758 log.go:172] (0xc000116f20) Go away received\nI0411 12:59:48.499142 758 log.go:172] (0xc000116f20) (0xc000870820) Stream removed, broadcasting: 1\nI0411 12:59:48.499165 758 log.go:172] (0xc000116f20) (0xc00086a000) Stream removed, broadcasting: 3\nI0411 12:59:48.499176 758 log.go:172] 
(0xc000116f20) (0xc0008708c0) Stream removed, broadcasting: 5\n" Apr 11 12:59:48.502: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 11 12:59:48.502: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 11 12:59:48.502: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 11 13:00:28.522: INFO: Deleting all statefulset in ns statefulset-4314 Apr 11 13:00:28.525: INFO: Scaling statefulset ss to 0 Apr 11 13:00:28.534: INFO: Waiting for statefulset status.replicas updated to 0 Apr 11 13:00:28.537: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:00:28.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4314" for this suite. 
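The command the test runs in each pod, `mv -v /tmp/index.html /usr/share/nginx/html/ || true`, relies on `|| true` so that a failed move (e.g. the file was already moved) does not fail the `sh -x -c` invocation. A minimal local sketch of that idiom, with temp directories standing in for the pod's `/tmp` and `/usr/share/nginx/html` (illustrative paths, not the pod filesystem):

```shell
# Sketch of the "mv ... || true" idiom from the exec commands above.
src=$(mktemp -d)
dst=$(mktemp -d)
echo 'hello' > "$src/index.html"
mv -v "$src/index.html" "$dst/" || true   # first move succeeds; -v prints the rename
mv -v "$src/index.html" "$dst/" || true   # source is gone now; '|| true' swallows the error
echo "final status: $?"                   # exit status is 0 despite the failed second mv
```

The `-v` output of the successful move is what appears in the captured stdout: `'/tmp/index.html' -> '/usr/share/nginx/html/index.html'`.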
Apr 11 13:00:34.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:00:34.663: INFO: namespace statefulset-4314 deletion completed in 6.10848518s • [SLOW TEST:108.373 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:00:34.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-fe4d93e9-54e1-4c78-8193-f58666d19e1f STEP: Creating a pod to test consume configMaps Apr 11 13:00:34.731: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c103d0cc-588b-4cf7-a412-8aec06f7aa1e" in namespace "projected-9013" to be "success or failure" Apr 11 13:00:34.776: INFO: Pod 
"pod-projected-configmaps-c103d0cc-588b-4cf7-a412-8aec06f7aa1e": Phase="Pending", Reason="", readiness=false. Elapsed: 44.601745ms Apr 11 13:00:36.780: INFO: Pod "pod-projected-configmaps-c103d0cc-588b-4cf7-a412-8aec06f7aa1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048930271s Apr 11 13:00:38.784: INFO: Pod "pod-projected-configmaps-c103d0cc-588b-4cf7-a412-8aec06f7aa1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053452777s STEP: Saw pod success Apr 11 13:00:38.785: INFO: Pod "pod-projected-configmaps-c103d0cc-588b-4cf7-a412-8aec06f7aa1e" satisfied condition "success or failure" Apr 11 13:00:38.788: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-c103d0cc-588b-4cf7-a412-8aec06f7aa1e container projected-configmap-volume-test: STEP: delete the pod Apr 11 13:00:38.809: INFO: Waiting for pod pod-projected-configmaps-c103d0cc-588b-4cf7-a412-8aec06f7aa1e to disappear Apr 11 13:00:38.828: INFO: Pod pod-projected-configmaps-c103d0cc-588b-4cf7-a412-8aec06f7aa1e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:00:38.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9013" for this suite. 
Apr 11 13:00:44.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:00:44.924: INFO: namespace projected-9013 deletion completed in 6.092066658s • [SLOW TEST:10.260 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:00:44.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-05e7a8d0-efc1-4d04-8b9e-099b96dd0a46 STEP: Creating a pod to test consume configMaps Apr 11 13:00:44.991: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-652e4e0b-87e7-419c-bba3-250a75726ab8" in namespace "projected-1868" to be "success or failure" Apr 11 13:00:44.995: INFO: Pod "pod-projected-configmaps-652e4e0b-87e7-419c-bba3-250a75726ab8": Phase="Pending", 
Reason="", readiness=false. Elapsed: 3.787898ms Apr 11 13:00:46.998: INFO: Pod "pod-projected-configmaps-652e4e0b-87e7-419c-bba3-250a75726ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007719454s Apr 11 13:00:49.003: INFO: Pod "pod-projected-configmaps-652e4e0b-87e7-419c-bba3-250a75726ab8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012042464s STEP: Saw pod success Apr 11 13:00:49.003: INFO: Pod "pod-projected-configmaps-652e4e0b-87e7-419c-bba3-250a75726ab8" satisfied condition "success or failure" Apr 11 13:00:49.006: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-652e4e0b-87e7-419c-bba3-250a75726ab8 container projected-configmap-volume-test: STEP: delete the pod Apr 11 13:00:49.024: INFO: Waiting for pod pod-projected-configmaps-652e4e0b-87e7-419c-bba3-250a75726ab8 to disappear Apr 11 13:00:49.029: INFO: Pod pod-projected-configmaps-652e4e0b-87e7-419c-bba3-250a75726ab8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:00:49.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1868" for this suite. 
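Both projected-configMap tests above mount a configMap into a volume and have the test container verify the mounted file's content and permissions (set via `defaultMode` or a per-item `mode`). A rough local sketch of the mode check such a container performs; 0644 is an illustrative value, since the actual mode the test sets is not visible in this log:

```shell
# Set an explicit mode on a file and read it back in octal, roughly
# what the projected-configmap-volume-test container verifies.
# stat -c is GNU coreutils syntax (Linux).
f=$(mktemp)
chmod 0644 "$f"
stat -c '%a' "$f"   # prints: 644
```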
Apr 11 13:00:55.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:00:55.131: INFO: namespace projected-1868 deletion completed in 6.090336263s • [SLOW TEST:10.205 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:00:55.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 11 13:00:55.236: INFO: Waiting up to 5m0s for pod "pod-d0cbb485-8c24-401a-9adf-9a53000cde05" in namespace "emptydir-9861" to be "success or failure" Apr 11 13:00:55.244: INFO: Pod "pod-d0cbb485-8c24-401a-9adf-9a53000cde05": Phase="Pending", Reason="", readiness=false. Elapsed: 8.848956ms Apr 11 13:00:57.248: INFO: Pod "pod-d0cbb485-8c24-401a-9adf-9a53000cde05": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012794892s Apr 11 13:00:59.253: INFO: Pod "pod-d0cbb485-8c24-401a-9adf-9a53000cde05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016915007s STEP: Saw pod success Apr 11 13:00:59.253: INFO: Pod "pod-d0cbb485-8c24-401a-9adf-9a53000cde05" satisfied condition "success or failure" Apr 11 13:00:59.256: INFO: Trying to get logs from node iruya-worker pod pod-d0cbb485-8c24-401a-9adf-9a53000cde05 container test-container: STEP: delete the pod Apr 11 13:00:59.270: INFO: Waiting for pod pod-d0cbb485-8c24-401a-9adf-9a53000cde05 to disappear Apr 11 13:00:59.297: INFO: Pod pod-d0cbb485-8c24-401a-9adf-9a53000cde05 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:00:59.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9861" for this suite. Apr 11 13:01:05.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:01:05.400: INFO: namespace emptydir-9861 deletion completed in 6.099243248s • [SLOW TEST:10.268 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 
13:01:05.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Apr 11 13:01:05.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Apr 11 13:01:05.575: INFO: stderr: "" Apr 11 13:01:05.575: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:01:05.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4837" for this suite. 
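The captured `cluster-info` stdout is wrapped in ANSI color escapes: `\x1b[0;32m` (green) around service names, `\x1b[0;33m` (yellow) around URLs, and `\x1b[0m` to reset. The same sequences can be re-emitted with printf (`\033` is the octal form of `\x1b`, the ESC byte):

```shell
# Reproduce the colored line seen in the cluster-info stdout above.
printf '\033[0;32m%s\033[0m is running at \033[0;33m%s\033[0m\n' \
    'Kubernetes master' 'https://172.30.12.66:32769'
```

On a color terminal this renders "Kubernetes master" in green and the URL in yellow; the test only matches the underlying text.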
Apr 11 13:01:11.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:01:11.710: INFO: namespace kubectl-4837 deletion completed in 6.130545891s • [SLOW TEST:6.309 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:01:11.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Apr 11 13:01:11.775: INFO: Waiting up to 5m0s for pod "var-expansion-528d6b13-cd35-495e-a220-726b7e133ca2" in namespace "var-expansion-2885" to be "success or failure" Apr 11 13:01:11.778: INFO: Pod "var-expansion-528d6b13-cd35-495e-a220-726b7e133ca2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.267ms Apr 11 13:01:13.782: INFO: Pod "var-expansion-528d6b13-cd35-495e-a220-726b7e133ca2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007731444s Apr 11 13:01:15.787: INFO: Pod "var-expansion-528d6b13-cd35-495e-a220-726b7e133ca2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012168133s STEP: Saw pod success Apr 11 13:01:15.787: INFO: Pod "var-expansion-528d6b13-cd35-495e-a220-726b7e133ca2" satisfied condition "success or failure" Apr 11 13:01:15.790: INFO: Trying to get logs from node iruya-worker pod var-expansion-528d6b13-cd35-495e-a220-726b7e133ca2 container dapi-container: STEP: delete the pod Apr 11 13:01:15.828: INFO: Waiting for pod var-expansion-528d6b13-cd35-495e-a220-726b7e133ca2 to disappear Apr 11 13:01:15.844: INFO: Pod var-expansion-528d6b13-cd35-495e-a220-726b7e133ca2 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:01:15.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2885" for this suite. 
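This test checks that `$(VAR)` references in a container's `args` are substituted from the container's environment before the command runs; the dapi-container's log output is then inspected for the expanded value. A rough local analogue using POSIX shell parameter expansion (note the syntax differs: Kubernetes expands the `$(VAR)` form itself, while `${VAR}` below is expanded by the shell):

```shell
# The shell substitutes ${TEST_VAR} into the argument before echo sees
# it, much as the kubelet substitutes $(TEST_VAR) into container args.
# TEST_VAR is an illustrative name, not taken from the test spec.
TEST_VAR='test-value'
echo "test-var-value: ${TEST_VAR}"   # prints: test-var-value: test-value
```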
Apr 11 13:01:21.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:01:21.962: INFO: namespace var-expansion-2885 deletion completed in 6.115113409s • [SLOW TEST:10.253 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:01:21.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:01:22.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9209" for this suite. 
Apr 11 13:01:28.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:01:28.211: INFO: namespace kubelet-test-9209 deletion completed in 6.094822463s • [SLOW TEST:6.245 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:01:28.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 11 13:01:28.253: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b84b53eb-1022-4f00-b9be-930a0206c3ca" in namespace "downward-api-9051" to be "success or failure" Apr 11 13:01:28.269: INFO: Pod 
"downwardapi-volume-b84b53eb-1022-4f00-b9be-930a0206c3ca": Phase="Pending", Reason="", readiness=false. Elapsed: 15.782316ms Apr 11 13:01:30.274: INFO: Pod "downwardapi-volume-b84b53eb-1022-4f00-b9be-930a0206c3ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020268435s Apr 11 13:01:32.278: INFO: Pod "downwardapi-volume-b84b53eb-1022-4f00-b9be-930a0206c3ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024704883s STEP: Saw pod success Apr 11 13:01:32.278: INFO: Pod "downwardapi-volume-b84b53eb-1022-4f00-b9be-930a0206c3ca" satisfied condition "success or failure" Apr 11 13:01:32.281: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b84b53eb-1022-4f00-b9be-930a0206c3ca container client-container: STEP: delete the pod Apr 11 13:01:32.318: INFO: Waiting for pod downwardapi-volume-b84b53eb-1022-4f00-b9be-930a0206c3ca to disappear Apr 11 13:01:32.335: INFO: Pod downwardapi-volume-b84b53eb-1022-4f00-b9be-930a0206c3ca no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:01:32.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9051" for this suite. 
Apr 11 13:01:38.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:01:38.437: INFO: namespace downward-api-9051 deletion completed in 6.098059761s • [SLOW TEST:10.226 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:01:38.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 
'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:02:08.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6753" for this suite. Apr 11 13:02:14.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:02:14.943: INFO: namespace container-runtime-6753 deletion completed in 6.106945598s • [SLOW TEST:36.506 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:02:14.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 11 13:02:15.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-721214d4-dd16-4024-9213-e5d2b1be2795" in namespace "projected-4611" to be "success or failure"
Apr 11 13:02:15.018: INFO: Pod "downwardapi-volume-721214d4-dd16-4024-9213-e5d2b1be2795": Phase="Pending", Reason="", readiness=false. Elapsed: 13.744231ms
Apr 11 13:02:17.021: INFO: Pod "downwardapi-volume-721214d4-dd16-4024-9213-e5d2b1be2795": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01762698s
Apr 11 13:02:19.026: INFO: Pod "downwardapi-volume-721214d4-dd16-4024-9213-e5d2b1be2795": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022466457s
STEP: Saw pod success
Apr 11 13:02:19.026: INFO: Pod "downwardapi-volume-721214d4-dd16-4024-9213-e5d2b1be2795" satisfied condition "success or failure"
Apr 11 13:02:19.030: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-721214d4-dd16-4024-9213-e5d2b1be2795 container client-container:
STEP: delete the pod
Apr 11 13:02:19.056: INFO: Waiting for pod downwardapi-volume-721214d4-dd16-4024-9213-e5d2b1be2795 to disappear
Apr 11 13:02:19.066: INFO: Pod downwardapi-volume-721214d4-dd16-4024-9213-e5d2b1be2795 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:02:19.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4611" for this suite.
Apr 11 13:02:25.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:02:25.175: INFO: namespace projected-4611 deletion completed in 6.105897354s
• [SLOW TEST:10.232 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:02:25.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-52988c80-f89b-48ad-af44-f1b0086b4e7c
STEP: Creating a pod to test consume configMaps
Apr 11 13:02:25.286: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4428171f-db51-4849-b300-82a4a589a25a" in namespace "projected-6204" to be "success or failure"
Apr 11 13:02:25.295: INFO: Pod "pod-projected-configmaps-4428171f-db51-4849-b300-82a4a589a25a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156364ms
Apr 11 13:02:27.298: INFO: Pod "pod-projected-configmaps-4428171f-db51-4849-b300-82a4a589a25a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011742774s
Apr 11 13:02:29.303: INFO: Pod "pod-projected-configmaps-4428171f-db51-4849-b300-82a4a589a25a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016188171s
STEP: Saw pod success
Apr 11 13:02:29.303: INFO: Pod "pod-projected-configmaps-4428171f-db51-4849-b300-82a4a589a25a" satisfied condition "success or failure"
Apr 11 13:02:29.306: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-4428171f-db51-4849-b300-82a4a589a25a container projected-configmap-volume-test:
STEP: delete the pod
Apr 11 13:02:29.359: INFO: Waiting for pod pod-projected-configmaps-4428171f-db51-4849-b300-82a4a589a25a to disappear
Apr 11 13:02:29.378: INFO: Pod pod-projected-configmaps-4428171f-db51-4849-b300-82a4a589a25a no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:02:29.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6204" for this suite.
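The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Phase="Pending" ... Elapsed: ...` entries above come from a poll loop in the e2e framework. A minimal Python sketch of that polling pattern (the function name, parameters, and the simulated phase sequence are illustrative, not the framework's actual API):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll a pod's phase until it reaches a terminal state:
    Succeeded passes, Failed fails, anything else keeps polling
    until the timeout (5m0s in the log above) expires."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase == "Succeeded":
            return True
        if phase == "Failed":
            return False
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase")

# Simulated phase sequence matching the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), timeout=1.0, interval=0.0)
```

In the real framework the `Elapsed:` values are simply the time since polling began, printed on each iteration.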
Apr 11 13:02:35.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:02:35.471: INFO: namespace projected-6204 deletion completed in 6.090134168s
• [SLOW TEST:10.295 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:02:35.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 11 13:02:35.536: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f47f4f13-d034-4746-b444-694a20d54453" in namespace "projected-2691" to be "success or failure"
Apr 11 13:02:35.540: INFO: Pod "downwardapi-volume-f47f4f13-d034-4746-b444-694a20d54453": Phase="Pending", Reason="", readiness=false. Elapsed: 3.661754ms
Apr 11 13:02:37.544: INFO: Pod "downwardapi-volume-f47f4f13-d034-4746-b444-694a20d54453": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007711297s
Apr 11 13:02:39.548: INFO: Pod "downwardapi-volume-f47f4f13-d034-4746-b444-694a20d54453": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011679384s
STEP: Saw pod success
Apr 11 13:02:39.548: INFO: Pod "downwardapi-volume-f47f4f13-d034-4746-b444-694a20d54453" satisfied condition "success or failure"
Apr 11 13:02:39.551: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-f47f4f13-d034-4746-b444-694a20d54453 container client-container:
STEP: delete the pod
Apr 11 13:02:39.571: INFO: Waiting for pod downwardapi-volume-f47f4f13-d034-4746-b444-694a20d54453 to disappear
Apr 11 13:02:39.593: INFO: Pod downwardapi-volume-f47f4f13-d034-4746-b444-694a20d54453 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:02:39.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2691" for this suite.
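The test above checks that when a container sets no CPU limit, a downward-API `resourceFieldRef` for `limits.cpu` falls back to the node's allocatable CPU, scaled by the divisor. A rough Python model of that divisor scaling (function name and the two divisor values are illustrative assumptions; the real API uses `resource.Quantity` arithmetic):

```python
import math

def quantity_in_divisor_units(cores, divisor):
    """Model of a downward-API resourceFieldRef divisor:
    divisor "1" reports whole cores, "1m" reports millicores.
    Fractional results are rounded up, as quantity division does."""
    scale = {"1": 1, "1m": 1000}[divisor]
    return math.ceil(cores * scale)

# A node with 2 allocatable CPUs, read through a "1m" divisor:
millicores = quantity_in_divisor_units(2, "1m")
```

So the container in this test would read the node's allocatable CPU (not a pod-level limit) through whatever divisor the volume item declares.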
Apr 11 13:02:45.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:02:45.729: INFO: namespace projected-2691 deletion completed in 6.131977477s
• [SLOW TEST:10.258 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:02:45.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 11 13:02:45.770: INFO: Waiting up to 5m0s for pod "downwardapi-volume-06cde48e-6dd1-455c-a752-a9cb8e9bf9da" in namespace "downward-api-3342" to be "success or failure"
Apr 11 13:02:45.787: INFO: Pod "downwardapi-volume-06cde48e-6dd1-455c-a752-a9cb8e9bf9da": Phase="Pending", Reason="", readiness=false. Elapsed: 17.083027ms
Apr 11 13:02:47.791: INFO: Pod "downwardapi-volume-06cde48e-6dd1-455c-a752-a9cb8e9bf9da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021048988s
Apr 11 13:02:49.796: INFO: Pod "downwardapi-volume-06cde48e-6dd1-455c-a752-a9cb8e9bf9da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025864818s
STEP: Saw pod success
Apr 11 13:02:49.796: INFO: Pod "downwardapi-volume-06cde48e-6dd1-455c-a752-a9cb8e9bf9da" satisfied condition "success or failure"
Apr 11 13:02:49.799: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-06cde48e-6dd1-455c-a752-a9cb8e9bf9da container client-container:
STEP: delete the pod
Apr 11 13:02:49.831: INFO: Waiting for pod downwardapi-volume-06cde48e-6dd1-455c-a752-a9cb8e9bf9da to disappear
Apr 11 13:02:49.846: INFO: Pod downwardapi-volume-06cde48e-6dd1-455c-a752-a9cb8e9bf9da no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:02:49.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3342" for this suite.
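The "should set mode on item file" tests above mount a downward-API volume item with an explicit `mode` (e.g. 0400) and have the container print the resulting file's permission string, which the test then compares. The rendering step can be sketched with Python's standard library (the helper name is mine; the stdlib call is real):

```python
import stat

def as_ls_mode(item_mode):
    """Render a volume item's octal `mode` the way `ls -l` shows it
    for a regular file, which is what the test container prints."""
    return stat.filemode(stat.S_IFREG | item_mode)

# e.g. as_ls_mode(0o644) -> "-rw-r--r--"
```

The comparison in the real test is against exactly this kind of string, which is why the feature is tagged [LinuxOnly]: Windows nodes cannot honor POSIX mode bits.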
Apr 11 13:02:55.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:02:55.938: INFO: namespace downward-api-3342 deletion completed in 6.08915606s
• [SLOW TEST:10.209 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:02:55.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-2956d559-8992-4c2d-bfc8-23c859b77fd9
STEP: Creating a pod to test consume secrets
Apr 11 13:02:56.009: INFO: Waiting up to 5m0s for pod "pod-secrets-88462868-f474-4e39-905c-b76f18cb1e81" in namespace "secrets-9275" to be "success or failure"
Apr 11 13:02:56.014: INFO: Pod "pod-secrets-88462868-f474-4e39-905c-b76f18cb1e81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.392539ms
Apr 11 13:02:58.018: INFO: Pod "pod-secrets-88462868-f474-4e39-905c-b76f18cb1e81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008151877s
Apr 11 13:03:00.022: INFO: Pod "pod-secrets-88462868-f474-4e39-905c-b76f18cb1e81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012816437s
STEP: Saw pod success
Apr 11 13:03:00.022: INFO: Pod "pod-secrets-88462868-f474-4e39-905c-b76f18cb1e81" satisfied condition "success or failure"
Apr 11 13:03:00.026: INFO: Trying to get logs from node iruya-worker pod pod-secrets-88462868-f474-4e39-905c-b76f18cb1e81 container secret-env-test:
STEP: delete the pod
Apr 11 13:03:00.046: INFO: Waiting for pod pod-secrets-88462868-f474-4e39-905c-b76f18cb1e81 to disappear
Apr 11 13:03:00.050: INFO: Pod pod-secrets-88462868-f474-4e39-905c-b76f18cb1e81 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:03:00.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9275" for this suite.
Apr 11 13:03:06.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:03:06.165: INFO: namespace secrets-9275 deletion completed in 6.11264227s
• [SLOW TEST:10.227 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:03:06.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:03:10.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1911" for this suite.
Apr 11 13:03:56.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:03:56.364: INFO: namespace kubelet-test-1911 deletion completed in 46.09789172s
• [SLOW TEST:50.198 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox Pod with hostAliases
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:03:56.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0411 13:04:08.247423       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 11 13:04:08.247: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:04:08.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9085" for this suite.
Apr 11 13:04:16.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:04:16.346: INFO: namespace gc-9085 deletion completed in 8.094409289s
• [SLOW TEST:19.982 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:04:16.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-lncd
STEP: Creating a pod to test atomic-volume-subpath
Apr 11 13:04:16.431: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lncd" in namespace "subpath-2231" to be "success or failure"
Apr 11 13:04:16.434: INFO: Pod "pod-subpath-test-downwardapi-lncd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.094938ms
Apr 11 13:04:18.510: INFO: Pod "pod-subpath-test-downwardapi-lncd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078555475s
Apr 11 13:04:20.513: INFO: Pod "pod-subpath-test-downwardapi-lncd": Phase="Running", Reason="", readiness=true. Elapsed: 4.082232905s
Apr 11 13:04:22.517: INFO: Pod "pod-subpath-test-downwardapi-lncd": Phase="Running", Reason="", readiness=true. Elapsed: 6.086441791s
Apr 11 13:04:24.521: INFO: Pod "pod-subpath-test-downwardapi-lncd": Phase="Running", Reason="", readiness=true. Elapsed: 8.090060592s
Apr 11 13:04:26.525: INFO: Pod "pod-subpath-test-downwardapi-lncd": Phase="Running", Reason="", readiness=true. Elapsed: 10.09452087s
Apr 11 13:04:28.530: INFO: Pod "pod-subpath-test-downwardapi-lncd": Phase="Running", Reason="", readiness=true. Elapsed: 12.098789362s
Apr 11 13:04:30.534: INFO: Pod "pod-subpath-test-downwardapi-lncd": Phase="Running", Reason="", readiness=true. Elapsed: 14.102960622s
Apr 11 13:04:32.538: INFO: Pod "pod-subpath-test-downwardapi-lncd": Phase="Running", Reason="", readiness=true. Elapsed: 16.107272161s
Apr 11 13:04:34.542: INFO: Pod "pod-subpath-test-downwardapi-lncd": Phase="Running", Reason="", readiness=true. Elapsed: 18.111446782s
Apr 11 13:04:36.546: INFO: Pod "pod-subpath-test-downwardapi-lncd": Phase="Running", Reason="", readiness=true. Elapsed: 20.115371952s
Apr 11 13:04:38.551: INFO: Pod "pod-subpath-test-downwardapi-lncd": Phase="Running", Reason="", readiness=true. Elapsed: 22.119776398s
Apr 11 13:04:40.555: INFO: Pod "pod-subpath-test-downwardapi-lncd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.124010638s
STEP: Saw pod success
Apr 11 13:04:40.555: INFO: Pod "pod-subpath-test-downwardapi-lncd" satisfied condition "success or failure"
Apr 11 13:04:40.558: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-lncd container test-container-subpath-downwardapi-lncd:
STEP: delete the pod
Apr 11 13:04:40.764: INFO: Waiting for pod pod-subpath-test-downwardapi-lncd to disappear
Apr 11 13:04:40.923: INFO: Pod pod-subpath-test-downwardapi-lncd no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-lncd
Apr 11 13:04:40.923: INFO: Deleting pod "pod-subpath-test-downwardapi-lncd" in namespace "subpath-2231"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:04:40.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2231" for this suite.
Apr 11 13:04:46.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:04:47.046: INFO: namespace subpath-2231 deletion completed in 6.113139291s
• [SLOW TEST:30.700 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:04:47.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-d9d9e066-8a2d-4f7d-86e1-3c457f2d7670
STEP: Creating a pod to test consume configMaps
Apr 11 13:04:47.150: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e8267ebc-06c3-48da-97f1-e5e2f5a64af3" in namespace "projected-3869" to be "success or failure"
Apr 11 13:04:47.163: INFO: Pod "pod-projected-configmaps-e8267ebc-06c3-48da-97f1-e5e2f5a64af3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.374335ms
Apr 11 13:04:49.167: INFO: Pod "pod-projected-configmaps-e8267ebc-06c3-48da-97f1-e5e2f5a64af3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016588097s
Apr 11 13:04:51.171: INFO: Pod "pod-projected-configmaps-e8267ebc-06c3-48da-97f1-e5e2f5a64af3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020854749s
STEP: Saw pod success
Apr 11 13:04:51.171: INFO: Pod "pod-projected-configmaps-e8267ebc-06c3-48da-97f1-e5e2f5a64af3" satisfied condition "success or failure"
Apr 11 13:04:51.174: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-e8267ebc-06c3-48da-97f1-e5e2f5a64af3 container projected-configmap-volume-test:
STEP: delete the pod
Apr 11 13:04:51.191: INFO: Waiting for pod pod-projected-configmaps-e8267ebc-06c3-48da-97f1-e5e2f5a64af3 to disappear
Apr 11 13:04:51.232: INFO: Pod pod-projected-configmaps-e8267ebc-06c3-48da-97f1-e5e2f5a64af3 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:04:51.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3869" for this suite.
Apr 11 13:04:57.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:04:57.326: INFO: namespace projected-3869 deletion completed in 6.091200381s
• [SLOW TEST:10.280 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:04:57.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 11 13:04:57.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:05:01.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2208" for this suite.
Apr 11 13:05:43.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:05:43.531: INFO: namespace pods-2208 deletion completed in 42.098103287s
• [SLOW TEST:46.204 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:05:43.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-5ff075a3-2b6c-4bdd-9386-331d3e402491
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-5ff075a3-2b6c-4bdd-9386-331d3e402491
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:07:16.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8979" for this suite.
Apr 11 13:07:38.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:07:38.159: INFO: namespace projected-8979 deletion completed in 22.089723956s
• [SLOW TEST:114.628 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:07:38.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 11 13:07:42.767: INFO: Successfully updated pod "labelsupdatef82d2705-9d48-4939-948c-161dfc97dde1"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:07:44.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3496" for this suite.
Apr 11 13:08:06.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:08:06.888: INFO: namespace projected-3496 deletion completed in 22.099859821s
• [SLOW TEST:28.728 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:08:06.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Apr 11 13:08:06.949: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7999" to be "success or failure"
Apr 11 13:08:06.954: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.12009ms
Apr 11 13:08:08.966: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017440926s
Apr 11 13:08:10.973: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.023750058s
Apr 11 13:08:12.976: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027091885s
STEP: Saw pod success
Apr 11 13:08:12.976: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Apr 11 13:08:12.979: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Apr 11 13:08:13.002: INFO: Waiting for pod pod-host-path-test to disappear
Apr 11 13:08:13.014: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:08:13.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7999" for this suite.
Apr 11 13:08:19.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:08:19.124: INFO: namespace hostpath-7999 deletion completed in 6.106261645s
• [SLOW TEST:12.236 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:08:19.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 11 13:08:19.164: INFO: Waiting up to 5m0s for pod "pod-6b8f5350-b4d0-4aa7-810d-997e6d9c318b" in namespace "emptydir-9706" to be "success or failure"
Apr 11 13:08:19.213: INFO: Pod "pod-6b8f5350-b4d0-4aa7-810d-997e6d9c318b": Phase="Pending", Reason="", readiness=false. Elapsed: 49.231053ms
Apr 11 13:08:21.230: INFO: Pod "pod-6b8f5350-b4d0-4aa7-810d-997e6d9c318b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066670896s
Apr 11 13:08:23.234: INFO: Pod "pod-6b8f5350-b4d0-4aa7-810d-997e6d9c318b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070308434s
STEP: Saw pod success
Apr 11 13:08:23.234: INFO: Pod "pod-6b8f5350-b4d0-4aa7-810d-997e6d9c318b" satisfied condition "success or failure"
Apr 11 13:08:23.237: INFO: Trying to get logs from node iruya-worker pod pod-6b8f5350-b4d0-4aa7-810d-997e6d9c318b container test-container:
STEP: delete the pod
Apr 11 13:08:23.267: INFO: Waiting for pod pod-6b8f5350-b4d0-4aa7-810d-997e6d9c318b to disappear
Apr 11 13:08:23.282: INFO: Pod pod-6b8f5350-b4d0-4aa7-810d-997e6d9c318b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:08:23.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9706" for this suite.
Apr 11 13:08:29.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:08:29.394: INFO: namespace emptydir-9706 deletion completed in 6.108140399s
• [SLOW TEST:10.269 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr
11 13:08:29.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 11 13:08:29.491: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76020ae6-e547-4a77-8ec0-e4bbb9823293" in namespace "projected-8863" to be "success or failure" Apr 11 13:08:29.498: INFO: Pod "downwardapi-volume-76020ae6-e547-4a77-8ec0-e4bbb9823293": Phase="Pending", Reason="", readiness=false. Elapsed: 6.841733ms Apr 11 13:08:31.501: INFO: Pod "downwardapi-volume-76020ae6-e547-4a77-8ec0-e4bbb9823293": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009931307s Apr 11 13:08:33.505: INFO: Pod "downwardapi-volume-76020ae6-e547-4a77-8ec0-e4bbb9823293": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013737575s STEP: Saw pod success Apr 11 13:08:33.505: INFO: Pod "downwardapi-volume-76020ae6-e547-4a77-8ec0-e4bbb9823293" satisfied condition "success or failure" Apr 11 13:08:33.508: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-76020ae6-e547-4a77-8ec0-e4bbb9823293 container client-container: STEP: delete the pod Apr 11 13:08:33.552: INFO: Waiting for pod downwardapi-volume-76020ae6-e547-4a77-8ec0-e4bbb9823293 to disappear Apr 11 13:08:33.563: INFO: Pod downwardapi-volume-76020ae6-e547-4a77-8ec0-e4bbb9823293 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:08:33.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8863" for this suite. Apr 11 13:08:39.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:08:39.684: INFO: namespace projected-8863 deletion completed in 6.117273852s • [SLOW TEST:10.290 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:08:39.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Apr 11 13:08:39.861: INFO: Waiting up to 5m0s for pod "var-expansion-05fed9ab-9125-45d8-9755-a716acf75486" in namespace "var-expansion-2655" to be "success or failure" Apr 11 13:08:39.865: INFO: Pod "var-expansion-05fed9ab-9125-45d8-9755-a716acf75486": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011776ms Apr 11 13:08:41.869: INFO: Pod "var-expansion-05fed9ab-9125-45d8-9755-a716acf75486": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007833151s Apr 11 13:08:43.873: INFO: Pod "var-expansion-05fed9ab-9125-45d8-9755-a716acf75486": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01237867s STEP: Saw pod success Apr 11 13:08:43.873: INFO: Pod "var-expansion-05fed9ab-9125-45d8-9755-a716acf75486" satisfied condition "success or failure" Apr 11 13:08:43.876: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-05fed9ab-9125-45d8-9755-a716acf75486 container dapi-container: STEP: delete the pod Apr 11 13:08:43.912: INFO: Waiting for pod var-expansion-05fed9ab-9125-45d8-9755-a716acf75486 to disappear Apr 11 13:08:43.919: INFO: Pod var-expansion-05fed9ab-9125-45d8-9755-a716acf75486 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:08:43.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2655" for this suite. 
Apr 11 13:08:49.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:08:50.010: INFO: namespace var-expansion-2655 deletion completed in 6.08810191s • [SLOW TEST:10.326 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:08:50.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Apr 11 13:08:50.095: INFO: Waiting up to 5m0s for pod "client-containers-7f507480-60dc-4e93-bbc5-b2a9838a4bfe" in namespace "containers-7908" to be "success or failure" Apr 11 13:08:50.099: INFO: Pod "client-containers-7f507480-60dc-4e93-bbc5-b2a9838a4bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.324023ms Apr 11 13:08:52.103: INFO: Pod "client-containers-7f507480-60dc-4e93-bbc5-b2a9838a4bfe": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007529963s Apr 11 13:08:54.108: INFO: Pod "client-containers-7f507480-60dc-4e93-bbc5-b2a9838a4bfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012475094s STEP: Saw pod success Apr 11 13:08:54.108: INFO: Pod "client-containers-7f507480-60dc-4e93-bbc5-b2a9838a4bfe" satisfied condition "success or failure" Apr 11 13:08:54.111: INFO: Trying to get logs from node iruya-worker2 pod client-containers-7f507480-60dc-4e93-bbc5-b2a9838a4bfe container test-container: STEP: delete the pod Apr 11 13:08:54.160: INFO: Waiting for pod client-containers-7f507480-60dc-4e93-bbc5-b2a9838a4bfe to disappear Apr 11 13:08:54.164: INFO: Pod client-containers-7f507480-60dc-4e93-bbc5-b2a9838a4bfe no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:08:54.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7908" for this suite. Apr 11 13:09:00.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:09:00.250: INFO: namespace containers-7908 deletion completed in 6.082704625s • [SLOW TEST:10.239 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client
Apr 11 13:09:00.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-8gscq in namespace proxy-6181
I0411 13:09:00.382875 6 runners.go:180] Created replication controller with name: proxy-service-8gscq, namespace: proxy-6181, replica count: 1
I0411 13:09:01.433411 6 runners.go:180] proxy-service-8gscq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0411 13:09:02.433639 6 runners.go:180] proxy-service-8gscq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0411 13:09:03.433860 6 runners.go:180] proxy-service-8gscq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0411 13:09:04.434109 6 runners.go:180] proxy-service-8gscq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0411 13:09:05.434364 6 runners.go:180] proxy-service-8gscq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0411 13:09:06.434610 6 runners.go:180] proxy-service-8gscq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0411 13:09:07.434814 6 runners.go:180] proxy-service-8gscq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0411 13:09:08.435059 6 runners.go:180] proxy-service-8gscq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0411 13:09:09.435331 6 runners.go:180] proxy-service-8gscq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0411 13:09:10.435576 6 runners.go:180] proxy-service-8gscq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0411 13:09:11.435798 6 runners.go:180] proxy-service-8gscq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0411 13:09:12.436006 6 runners.go:180] proxy-service-8gscq Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 11 13:09:12.439: INFO: setup took 12.144793907s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Apr 11 13:09:12.445: INFO: (0) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 5.685526ms)
Apr 11 13:09:12.445: INFO: (0) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname2/proxy/: bar (200; 6.069342ms)
Apr 11 13:09:12.447: INFO: (0) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 8.623629ms)
Apr 11 13:09:12.447: INFO: (0) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname1/proxy/: foo (200; 8.560093ms)
Apr 11 13:09:12.447: INFO: (0) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... (200; 8.646676ms)
Apr 11 13:09:12.447: INFO: (0) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname2/proxy/: bar (200; 8.631834ms)
Apr 11 13:09:12.448: INFO: (0) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 8.657134ms)
Apr 11 13:09:12.448: INFO: (0) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 8.79118ms)
Apr 11 13:09:12.448: INFO: (0) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm/proxy/: test (200; 8.710524ms)
Apr 11 13:09:12.448: INFO: (0) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:1080/proxy/: ... (200; 8.732816ms)
Apr 11 13:09:12.448: INFO: (0) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname1/proxy/: foo (200; 8.703239ms)
Apr 11 13:09:12.452: INFO: (0) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname2/proxy/: tls qux (200; 13.335237ms)
Apr 11 13:09:12.452: INFO: (0) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: ... (200; 13.084596ms)
Apr 11 13:09:12.466: INFO: (1) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname1/proxy/: tls baz (200; 13.186906ms)
Apr 11 13:09:12.466: INFO: (1) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname1/proxy/: foo (200; 13.175597ms)
Apr 11 13:09:12.467: INFO: (1) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname1/proxy/: foo (200; 13.54338ms)
Apr 11 13:09:12.467: INFO: (1) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname2/proxy/: tls qux (200; 13.592694ms)
Apr 11 13:09:12.467: INFO: (1) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:460/proxy/: tls baz (200; 13.668053ms)
Apr 11 13:09:12.467: INFO: (1) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 13.696154ms)
Apr 11 13:09:12.467: INFO: (1) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 13.76518ms)
Apr 11 13:09:12.467: INFO: (1) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm/proxy/: test (200; 13.887819ms)
Apr 11 13:09:12.467: INFO: (1) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... (200; 13.868944ms)
Apr 11 13:09:12.467: INFO: (1) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 14.052019ms)
Apr 11 13:09:12.472: INFO: (2) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 4.323441ms)
Apr 11 13:09:12.472: INFO: (2) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:462/proxy/: tls qux (200; 4.815945ms)
Apr 11 13:09:12.472: INFO: (2) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 4.801825ms)
Apr 11 13:09:12.472: INFO: (2) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:460/proxy/: tls baz (200; 4.796665ms)
Apr 11 13:09:12.472: INFO: (2) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... (200; 4.784679ms)
Apr 11 13:09:12.472: INFO: (2) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 4.871307ms)
Apr 11 13:09:12.472: INFO: (2) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm/proxy/: test (200; 4.862694ms)
Apr 11 13:09:12.472: INFO: (2) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:1080/proxy/: ... (200; 4.792331ms)
Apr 11 13:09:12.473: INFO: (2) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: ... (200; 4.522392ms)
Apr 11 13:09:12.479: INFO: (3) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... (200; 4.681645ms)
Apr 11 13:09:12.479: INFO: (3) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 4.767846ms)
Apr 11 13:09:12.479: INFO: (3) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 4.804113ms)
Apr 11 13:09:12.479: INFO: (3) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:462/proxy/: tls qux (200; 4.767162ms)
Apr 11 13:09:12.480: INFO: (3) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname1/proxy/: foo (200; 4.963613ms)
Apr 11 13:09:12.480: INFO: (3) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: test (200; 5.423109ms)
Apr 11 13:09:12.483: INFO: (4) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 2.490032ms)
Apr 11 13:09:12.483: INFO: (4) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 2.948087ms)
Apr 11 13:09:12.483: INFO: (4) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:462/proxy/: tls qux (200; 2.98142ms)
Apr 11 13:09:12.483: INFO: (4) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 3.010179ms)
Apr 11 13:09:12.483: INFO: (4) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname1/proxy/: tls baz (200; 3.131045ms)
Apr 11 13:09:12.483: INFO: (4) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: test (200; 4.781263ms)
Apr 11 13:09:12.485: INFO: (4) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:1080/proxy/: ... (200; 5.156127ms)
Apr 11 13:09:12.485: INFO: (4) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... (200; 5.109294ms)
Apr 11 13:09:12.486: INFO: (4) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname2/proxy/: bar (200; 6.168963ms)
Apr 11 13:09:12.486: INFO: (4) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname1/proxy/: foo (200; 6.185883ms)
Apr 11 13:09:12.486: INFO: (4) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname2/proxy/: bar (200; 6.157423ms)
Apr 11 13:09:12.486: INFO: (4) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname1/proxy/: foo (200; 6.216623ms)
Apr 11 13:09:12.490: INFO: (5) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 3.863853ms)
Apr 11 13:09:12.490: INFO: (5) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: ... (200; 4.566034ms)
Apr 11 13:09:12.491: INFO: (5) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname1/proxy/: foo (200; 4.60335ms)
Apr 11 13:09:12.491: INFO: (5) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 4.638685ms)
Apr 11 13:09:12.491: INFO: (5) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname2/proxy/: bar (200; 4.856684ms)
Apr 11 13:09:12.491: INFO: (5) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:462/proxy/: tls qux (200; 4.954785ms)
Apr 11 13:09:12.492: INFO: (5) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname2/proxy/: bar (200; 5.043003ms)
Apr 11 13:09:12.492: INFO: (5) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:460/proxy/: tls baz (200; 5.061731ms)
Apr 11 13:09:12.492: INFO: (5) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm/proxy/: test (200; 5.420174ms)
Apr 11 13:09:12.492: INFO: (5) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname1/proxy/: tls baz (200; 5.410273ms)
Apr 11 13:09:12.492: INFO: (5) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... (200; 5.473768ms)
Apr 11 13:09:12.492: INFO: (5) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname1/proxy/: foo (200; 5.534944ms)
Apr 11 13:09:12.492: INFO: (5) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname2/proxy/: tls qux (200; 5.545348ms)
Apr 11 13:09:12.496: INFO: (6) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm/proxy/: test (200; 3.408922ms)
Apr 11 13:09:12.496: INFO: (6) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:1080/proxy/: ...
(200; 3.686621ms)
Apr 11 13:09:12.496: INFO: (6) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 3.77051ms)
Apr 11 13:09:12.496: INFO: (6) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 3.771335ms)
Apr 11 13:09:12.496: INFO: (6) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 3.731371ms)
Apr 11 13:09:12.496: INFO: (6) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... (200; 3.698405ms)
Apr 11 13:09:12.497: INFO: (6) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 4.828294ms)
Apr 11 13:09:12.497: INFO: (6) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:462/proxy/: tls qux (200; 4.816243ms)
Apr 11 13:09:12.497: INFO: (6) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname2/proxy/: bar (200; 5.041305ms)
Apr 11 13:09:12.497: INFO: (6) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname1/proxy/: tls baz (200; 5.111015ms)
Apr 11 13:09:12.497: INFO: (6) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:460/proxy/: tls baz (200; 5.271346ms)
Apr 11 13:09:12.497: INFO: (6) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname2/proxy/: tls qux (200; 5.237452ms)
Apr 11 13:09:12.497: INFO: (6) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname2/proxy/: bar (200; 5.210611ms)
Apr 11 13:09:12.497: INFO: (6) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname1/proxy/: foo (200; 5.256346ms)
Apr 11 13:09:12.498: INFO: (6) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: ... (200; 4.654166ms)
Apr 11 13:09:12.502: INFO: (7) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 4.699258ms)
Apr 11 13:09:12.502: INFO: (7) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 4.710718ms)
Apr 11 13:09:12.502: INFO: (7) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:460/proxy/: tls baz (200; 4.721444ms)
Apr 11 13:09:12.502: INFO: (7) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm/proxy/: test (200; 4.821422ms)
Apr 11 13:09:12.502: INFO: (7) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:462/proxy/: tls qux (200; 4.827683ms)
Apr 11 13:09:12.502: INFO: (7) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: test<... (200; 5.142185ms)
Apr 11 13:09:12.503: INFO: (7) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname1/proxy/: foo (200; 5.484817ms)
Apr 11 13:09:12.503: INFO: (7) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname1/proxy/: foo (200; 5.708072ms)
Apr 11 13:09:12.504: INFO: (7) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname2/proxy/: bar (200; 6.475132ms)
Apr 11 13:09:12.504: INFO: (7) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname1/proxy/: tls baz (200; 6.594872ms)
Apr 11 13:09:12.504: INFO: (7) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname2/proxy/: tls qux (200; 6.742061ms)
Apr 11 13:09:12.505: INFO: (7) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname2/proxy/: bar (200; 7.649981ms)
Apr 11 13:09:12.508: INFO: (8) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm/proxy/: test (200; 2.213257ms)
Apr 11 13:09:12.508: INFO: (8) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:460/proxy/: tls baz (200; 2.388004ms)
Apr 11 13:09:12.508: INFO: (8) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:462/proxy/: tls qux (200; 2.947843ms)
Apr 11 13:09:12.508: INFO: (8) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 3.018153ms)
Apr 11 13:09:12.509: INFO: (8) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 3.52198ms)
Apr 11 13:09:12.509: INFO: (8) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 3.72931ms)
Apr 11 13:09:12.509: INFO: (8) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: ... (200; 3.7838ms)
Apr 11 13:09:12.509: INFO: (8) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 3.77235ms)
Apr 11 13:09:12.509: INFO: (8) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... (200; 3.836451ms)
Apr 11 13:09:12.510: INFO: (8) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname1/proxy/: foo (200; 4.992845ms)
Apr 11 13:09:12.510: INFO: (8) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname2/proxy/: tls qux (200; 5.108182ms)
Apr 11 13:09:12.510: INFO: (8) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname1/proxy/: tls baz (200; 5.091929ms)
Apr 11 13:09:12.511: INFO: (8) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname1/proxy/: foo (200; 5.136896ms)
Apr 11 13:09:12.511: INFO: (8) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname2/proxy/: bar (200; 5.121745ms)
Apr 11 13:09:12.511: INFO: (8) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname2/proxy/: bar (200; 5.181182ms)
Apr 11 13:09:12.514: INFO: (9) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... (200; 3.780447ms)
Apr 11 13:09:12.515: INFO: (9) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:462/proxy/: tls qux (200; 3.945693ms)
Apr 11 13:09:12.515: INFO: (9) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 4.003399ms)
Apr 11 13:09:12.515: INFO: (9) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: test (200; 4.104248ms)
Apr 11 13:09:12.515: INFO: (9) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 4.183374ms)
Apr 11 13:09:12.515: INFO: (9) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:1080/proxy/: ... (200; 4.272018ms)
Apr 11 13:09:12.515: INFO: (9) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 4.2374ms)
Apr 11 13:09:12.515: INFO: (9) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 4.257795ms)
Apr 11 13:09:12.516: INFO: (9) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname2/proxy/: bar (200; 5.511061ms)
Apr 11 13:09:12.516: INFO: (9) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname1/proxy/: foo (200; 5.463793ms)
Apr 11 13:09:12.516: INFO: (9) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname1/proxy/: foo (200; 5.482522ms)
Apr 11 13:09:12.516: INFO: (9) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname2/proxy/: bar (200; 5.670076ms)
Apr 11 13:09:12.516: INFO: (9) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname1/proxy/: tls baz (200; 5.610733ms)
Apr 11 13:09:12.516: INFO: (9) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname2/proxy/: tls qux (200; 5.67012ms)
Apr 11 13:09:12.520: INFO: (10) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... (200; 3.946206ms)
Apr 11 13:09:12.520: INFO: (10) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 4.048183ms)
Apr 11 13:09:12.520: INFO: (10) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:462/proxy/: tls qux (200; 4.043563ms)
Apr 11 13:09:12.521: INFO: (10) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 4.076998ms)
Apr 11 13:09:12.521: INFO: (10) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:460/proxy/: tls baz (200; 4.062789ms)
Apr 11 13:09:12.521: INFO: (10) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 4.403198ms)
Apr 11 13:09:12.521: INFO: (10) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm/proxy/: test (200; 4.529191ms)
Apr 11 13:09:12.521: INFO: (10) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: ... (200; 4.614095ms)
Apr 11 13:09:12.521: INFO: (10) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 4.587031ms)
Apr 11 13:09:12.521: INFO: (10) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname2/proxy/: bar (200; 4.844398ms)
Apr 11 13:09:12.521: INFO: (10) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname1/proxy/: foo (200; 4.962241ms)
Apr 11 13:09:12.522: INFO: (10) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname1/proxy/: tls baz (200; 5.011546ms)
Apr 11 13:09:12.522: INFO: (10) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname1/proxy/: foo (200; 5.057459ms)
Apr 11 13:09:12.522: INFO: (10) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname2/proxy/: bar (200; 5.206546ms)
Apr 11 13:09:12.522: INFO: (10) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname2/proxy/: tls qux (200; 5.426689ms)
Apr 11 13:09:12.526: INFO: (11) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 3.078325ms)
Apr 11 13:09:12.526: INFO: (11) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 3.426282ms)
Apr 11 13:09:12.526: INFO: (11) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:460/proxy/: tls baz (200; 3.424436ms)
Apr 11 13:09:12.526: INFO: (11) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname1/proxy/: foo (200; 3.931642ms)
Apr 11 13:09:12.526: INFO: (11) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: ... (200; 5.137219ms)
Apr 11 13:09:12.527: INFO: (11) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm/proxy/: test (200; 4.772783ms)
Apr 11 13:09:12.527: INFO: (11) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname2/proxy/: bar (200; 4.646656ms)
Apr 11 13:09:12.527: INFO: (11) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 5.316585ms)
Apr 11 13:09:12.527: INFO: (11) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname2/proxy/: bar (200; 4.681552ms)
Apr 11 13:09:12.527: INFO: (11) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... (200; 4.829162ms)
Apr 11 13:09:12.530: INFO: (12) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:1080/proxy/: ... (200; 2.353085ms)
Apr 11 13:09:12.530: INFO: (12) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm/proxy/: test (200; 2.732161ms)
Apr 11 13:09:12.532: INFO: (12) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 3.810359ms)
Apr 11 13:09:12.532: INFO: (12) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 3.619662ms)
Apr 11 13:09:12.532: INFO: (12) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:460/proxy/: tls baz (200; 3.493372ms)
Apr 11 13:09:12.532: INFO: (12) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 2.969408ms)
Apr 11 13:09:12.532: INFO: (12) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: test<... (200; 3.055578ms)
Apr 11 13:09:12.532: INFO: (12) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 4.500202ms)
Apr 11 13:09:12.533: INFO: (12) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname1/proxy/: foo (200; 4.557868ms)
Apr 11 13:09:12.533: INFO: (12) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname2/proxy/: bar (200; 4.932822ms)
Apr 11 13:09:12.533: INFO: (12) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname2/proxy/: tls qux (200; 4.565645ms)
Apr 11 13:09:12.533: INFO: (12) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname2/proxy/: bar (200; 5.075713ms)
Apr 11 13:09:12.533: INFO: (12) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname1/proxy/: tls baz (200; 4.583144ms)
Apr 11 13:09:12.536: INFO: (13) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 2.375403ms)
Apr 11 13:09:12.537: INFO: (13) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 3.503178ms)
Apr 11 13:09:12.537: INFO: (13) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: test
(200; 3.484763ms) Apr 11 13:09:12.537: INFO: (13) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 3.553186ms) Apr 11 13:09:12.537: INFO: (13) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:460/proxy/: tls baz (200; 3.515731ms) Apr 11 13:09:12.537: INFO: (13) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 3.647348ms) Apr 11 13:09:12.537: INFO: (13) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... (200; 3.591557ms) Apr 11 13:09:12.537: INFO: (13) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:462/proxy/: tls qux (200; 3.631924ms) Apr 11 13:09:12.537: INFO: (13) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:1080/proxy/: ... (200; 3.557445ms) Apr 11 13:09:12.538: INFO: (13) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname2/proxy/: tls qux (200; 4.111356ms) Apr 11 13:09:12.538: INFO: (13) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname2/proxy/: bar (200; 4.127294ms) Apr 11 13:09:12.538: INFO: (13) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname1/proxy/: foo (200; 4.322575ms) Apr 11 13:09:12.538: INFO: (13) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname2/proxy/: bar (200; 4.36398ms) Apr 11 13:09:12.538: INFO: (13) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname1/proxy/: foo (200; 4.346095ms) Apr 11 13:09:12.538: INFO: (13) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname1/proxy/: tls baz (200; 4.389252ms) Apr 11 13:09:12.540: INFO: (14) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... 
(200; 2.073304ms) Apr 11 13:09:12.540: INFO: (14) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm/proxy/: test (200; 2.248903ms) Apr 11 13:09:12.540: INFO: (14) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:1080/proxy/: ... (200; 2.263304ms) Apr 11 13:09:12.542: INFO: (14) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: test<... (200; 3.007292ms) Apr 11 13:09:12.547: INFO: (15) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 3.052299ms) Apr 11 13:09:12.547: INFO: (15) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:462/proxy/: tls qux (200; 3.102562ms) Apr 11 13:09:12.547: INFO: (15) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 3.037939ms) Apr 11 13:09:12.547: INFO: (15) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:1080/proxy/: ... (200; 3.18662ms) Apr 11 13:09:12.547: INFO: (15) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 3.316252ms) Apr 11 13:09:12.547: INFO: (15) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: test (200; 4.723206ms) Apr 11 13:09:12.549: INFO: (15) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname2/proxy/: tls qux (200; 4.917478ms) Apr 11 13:09:12.549: INFO: (15) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname1/proxy/: tls baz (200; 5.020901ms) Apr 11 13:09:12.549: INFO: (15) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 5.093637ms) Apr 11 13:09:12.549: INFO: (15) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname2/proxy/: bar (200; 5.114641ms) Apr 11 13:09:12.551: INFO: (16) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 2.519139ms) Apr 11 13:09:12.551: INFO: (16) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 
2.668012ms) Apr 11 13:09:12.552: INFO: (16) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm/proxy/: test (200; 2.686012ms) Apr 11 13:09:12.552: INFO: (16) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:462/proxy/: tls qux (200; 2.812412ms) Apr 11 13:09:12.553: INFO: (16) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 4.299373ms) Apr 11 13:09:12.553: INFO: (16) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname1/proxy/: foo (200; 4.515089ms) Apr 11 13:09:12.553: INFO: (16) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:1080/proxy/: ... (200; 4.473853ms) Apr 11 13:09:12.554: INFO: (16) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 4.618641ms) Apr 11 13:09:12.554: INFO: (16) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: test<... (200; 4.749471ms) Apr 11 13:09:12.555: INFO: (16) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname2/proxy/: bar (200; 6.655479ms) Apr 11 13:09:12.556: INFO: (16) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname1/proxy/: foo (200; 6.930257ms) Apr 11 13:09:12.556: INFO: (16) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname2/proxy/: tls qux (200; 7.031917ms) Apr 11 13:09:12.556: INFO: (16) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname1/proxy/: tls baz (200; 7.096043ms) Apr 11 13:09:12.556: INFO: (16) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname2/proxy/: bar (200; 7.310881ms) Apr 11 13:09:12.564: INFO: (17) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:462/proxy/: tls qux (200; 8.124911ms) Apr 11 13:09:12.564: INFO: (17) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname2/proxy/: bar (200; 8.064993ms) Apr 11 13:09:12.564: INFO: (17) 
/api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname1/proxy/: tls baz (200; 8.15062ms) Apr 11 13:09:12.564: INFO: (17) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname1/proxy/: foo (200; 8.180793ms) Apr 11 13:09:12.565: INFO: (17) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 8.58891ms) Apr 11 13:09:12.565: INFO: (17) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... (200; 8.532415ms) Apr 11 13:09:12.565: INFO: (17) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname2/proxy/: bar (200; 8.659698ms) Apr 11 13:09:12.565: INFO: (17) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname1/proxy/: foo (200; 8.84899ms) Apr 11 13:09:12.565: INFO: (17) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 8.888977ms) Apr 11 13:09:12.565: INFO: (17) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: test (200; 8.991387ms) Apr 11 13:09:12.565: INFO: (17) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname2/proxy/: tls qux (200; 9.125207ms) Apr 11 13:09:12.565: INFO: (17) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 8.97196ms) Apr 11 13:09:12.565: INFO: (17) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 8.992638ms) Apr 11 13:09:12.566: INFO: (17) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:1080/proxy/: ... 
(200; 9.924922ms) Apr 11 13:09:12.569: INFO: (18) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 2.616371ms) Apr 11 13:09:12.571: INFO: (18) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: test (200; 4.942933ms) Apr 11 13:09:12.571: INFO: (18) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 5.000506ms) Apr 11 13:09:12.571: INFO: (18) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname1/proxy/: foo (200; 5.147033ms) Apr 11 13:09:12.571: INFO: (18) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 5.164076ms) Apr 11 13:09:12.572: INFO: (18) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname2/proxy/: tls qux (200; 5.3183ms) Apr 11 13:09:12.572: INFO: (18) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname1/proxy/: foo (200; 5.175476ms) Apr 11 13:09:12.572: INFO: (18) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 5.200057ms) Apr 11 13:09:12.572: INFO: (18) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:1080/proxy/: ... (200; 5.252999ms) Apr 11 13:09:12.572: INFO: (18) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname1/proxy/: tls baz (200; 5.489172ms) Apr 11 13:09:12.572: INFO: (18) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... 
(200; 5.549877ms) Apr 11 13:09:12.572: INFO: (18) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:460/proxy/: tls baz (200; 5.882785ms) Apr 11 13:09:12.573: INFO: (18) /api/v1/namespaces/proxy-6181/services/http:proxy-service-8gscq:portname2/proxy/: bar (200; 6.463909ms) Apr 11 13:09:12.573: INFO: (18) /api/v1/namespaces/proxy-6181/services/proxy-service-8gscq:portname2/proxy/: bar (200; 6.55061ms) Apr 11 13:09:12.577: INFO: (19) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 4.487976ms) Apr 11 13:09:12.577: INFO: (19) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:460/proxy/: tls baz (200; 4.388521ms) Apr 11 13:09:12.577: INFO: (19) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:462/proxy/: tls qux (200; 4.406794ms) Apr 11 13:09:12.577: INFO: (19) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:1080/proxy/: ... (200; 4.478693ms) Apr 11 13:09:12.578: INFO: (19) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm/proxy/: test (200; 4.57922ms) Apr 11 13:09:12.579: INFO: (19) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname1/proxy/: tls baz (200; 5.577034ms) Apr 11 13:09:12.579: INFO: (19) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:162/proxy/: bar (200; 5.459283ms) Apr 11 13:09:12.579: INFO: (19) /api/v1/namespaces/proxy-6181/pods/proxy-service-8gscq-b9fsm:1080/proxy/: test<... 
(200; 5.494553ms)
Apr 11 13:09:12.579: INFO: (19) /api/v1/namespaces/proxy-6181/pods/http:proxy-service-8gscq-b9fsm:160/proxy/: foo (200; 5.642464ms)
Apr 11 13:09:12.579: INFO: (19) /api/v1/namespaces/proxy-6181/services/https:proxy-service-8gscq:tlsportname2/proxy/: tls qux (200; 5.801253ms)
Apr 11 13:09:12.579: INFO: (19) /api/v1/namespaces/proxy-6181/pods/https:proxy-service-8gscq-b9fsm:443/proxy/: 
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-87388582-85e1-49bb-9159-8e1089b0ea24
STEP: Creating a pod to test consume configMaps
Apr 11 13:09:28.444: INFO: Waiting up to 5m0s for pod "pod-configmaps-f499e910-6a0d-4c21-a94c-06097254adef" in namespace "configmap-6059" to be "success or failure"
Apr 11 13:09:28.448: INFO: Pod "pod-configmaps-f499e910-6a0d-4c21-a94c-06097254adef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100291ms
Apr 11 13:09:30.452: INFO: Pod "pod-configmaps-f499e910-6a0d-4c21-a94c-06097254adef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008345537s
Apr 11 13:09:32.456: INFO: Pod "pod-configmaps-f499e910-6a0d-4c21-a94c-06097254adef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012822533s
STEP: Saw pod success
Apr 11 13:09:32.456: INFO: Pod "pod-configmaps-f499e910-6a0d-4c21-a94c-06097254adef" satisfied condition "success or failure"
Apr 11 13:09:32.459: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-f499e910-6a0d-4c21-a94c-06097254adef container configmap-volume-test: 
STEP: delete the pod
Apr 11 13:09:32.479: INFO: Waiting for pod pod-configmaps-f499e910-6a0d-4c21-a94c-06097254adef to disappear
Apr 11 13:09:32.484: INFO: Pod pod-configmaps-f499e910-6a0d-4c21-a94c-06097254adef no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:09:32.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6059" for this suite.
Apr 11 13:09:38.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:09:38.585: INFO: namespace configmap-6059 deletion completed in 6.097145251s
• [SLOW TEST:10.222 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:09:38.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-nrt4
STEP: Creating a pod to test atomic-volume-subpath
Apr 11 13:09:38.684: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-nrt4" in namespace "subpath-4069" to be "success or failure"
Apr 11 13:09:38.687: INFO: Pod "pod-subpath-test-projected-nrt4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.075932ms
Apr 11 13:09:40.691: INFO: Pod "pod-subpath-test-projected-nrt4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006478898s
Apr 11 13:09:42.695: INFO: Pod "pod-subpath-test-projected-nrt4": Phase="Running", Reason="", readiness=true. Elapsed: 4.010679847s
Apr 11 13:09:44.700: INFO: Pod "pod-subpath-test-projected-nrt4": Phase="Running", Reason="", readiness=true. Elapsed: 6.015488236s
Apr 11 13:09:46.704: INFO: Pod "pod-subpath-test-projected-nrt4": Phase="Running", Reason="", readiness=true. Elapsed: 8.020000868s
Apr 11 13:09:48.708: INFO: Pod "pod-subpath-test-projected-nrt4": Phase="Running", Reason="", readiness=true. Elapsed: 10.023864183s
Apr 11 13:09:50.712: INFO: Pod "pod-subpath-test-projected-nrt4": Phase="Running", Reason="", readiness=true. Elapsed: 12.02767745s
Apr 11 13:09:52.717: INFO: Pod "pod-subpath-test-projected-nrt4": Phase="Running", Reason="", readiness=true. Elapsed: 14.032416043s
Apr 11 13:09:54.721: INFO: Pod "pod-subpath-test-projected-nrt4": Phase="Running", Reason="", readiness=true. Elapsed: 16.036926062s
Apr 11 13:09:56.726: INFO: Pod "pod-subpath-test-projected-nrt4": Phase="Running", Reason="", readiness=true. Elapsed: 18.041748931s
Apr 11 13:09:58.730: INFO: Pod "pod-subpath-test-projected-nrt4": Phase="Running", Reason="", readiness=true. Elapsed: 20.046142443s
Apr 11 13:10:00.735: INFO: Pod "pod-subpath-test-projected-nrt4": Phase="Running", Reason="", readiness=true. Elapsed: 22.050656786s
Apr 11 13:10:02.739: INFO: Pod "pod-subpath-test-projected-nrt4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.055211068s
STEP: Saw pod success
Apr 11 13:10:02.739: INFO: Pod "pod-subpath-test-projected-nrt4" satisfied condition "success or failure"
Apr 11 13:10:02.742: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-nrt4 container test-container-subpath-projected-nrt4: 
STEP: delete the pod
Apr 11 13:10:02.780: INFO: Waiting for pod pod-subpath-test-projected-nrt4 to disappear
Apr 11 13:10:02.816: INFO: Pod pod-subpath-test-projected-nrt4 no longer exists
STEP: Deleting pod pod-subpath-test-projected-nrt4
Apr 11 13:10:02.816: INFO: Deleting pod "pod-subpath-test-projected-nrt4" in namespace "subpath-4069"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:10:02.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4069" for this suite.
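The pod in the subpath run above is generated by the e2e framework, so its manifest never appears in the log. A hand-written sketch of a pod of the same general shape, with an illustrative volume name, ConfigMap name, key, and mount path (none of these are taken from the log or the framework source), would look roughly like:

```yaml
# Hypothetical sketch -- names, keys, and paths are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected-example
spec:
  restartPolicy: Never
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: example-configmap   # assumed to already exist in the namespace
  containers:
  - name: test-container
    image: busybox
    # Read the single file exposed through the subPath mount.
    command: ["sh", "-c", "cat /probe/data"]
    volumeMounts:
    - name: projected-vol
      mountPath: /probe/data
      subPath: data-key             # mount one key of the volume, not the whole dir
```

The "Atomic writer volumes" suite exists because projected and configMap volumes are refreshed via an atomic symlink swap, and subPath mounts interact with that update mechanism differently from whole-volume mounts.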
Apr 11 13:10:08.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:10:08.982: INFO: namespace subpath-4069 deletion completed in 6.159882848s
• [SLOW TEST:30.396 seconds]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:10:08.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-d89479f8-697c-4c50-9aee-d8bc58f8abd2
STEP: Creating a pod to test consume configMaps
Apr 11 13:10:09.067: INFO: Waiting up to 5m0s for pod "pod-configmaps-991a409d-4144-4f4e-a7e2-0091ab86f90f" in namespace "configmap-3114" to be "success or failure"
Apr 11 13:10:09.071: INFO: Pod "pod-configmaps-991a409d-4144-4f4e-a7e2-0091ab86f90f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.969861ms
Apr 11 13:10:11.075: INFO: Pod "pod-configmaps-991a409d-4144-4f4e-a7e2-0091ab86f90f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007530392s
Apr 11 13:10:13.079: INFO: Pod "pod-configmaps-991a409d-4144-4f4e-a7e2-0091ab86f90f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011633745s
STEP: Saw pod success
Apr 11 13:10:13.079: INFO: Pod "pod-configmaps-991a409d-4144-4f4e-a7e2-0091ab86f90f" satisfied condition "success or failure"
Apr 11 13:10:13.081: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-991a409d-4144-4f4e-a7e2-0091ab86f90f container configmap-volume-test: 
STEP: delete the pod
Apr 11 13:10:13.234: INFO: Waiting for pod pod-configmaps-991a409d-4144-4f4e-a7e2-0091ab86f90f to disappear
Apr 11 13:10:13.239: INFO: Pod pod-configmaps-991a409d-4144-4f4e-a7e2-0091ab86f90f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:10:13.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3114" for this suite.
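The two ConfigMap volume tests above ("with mappings and Item mode set" and "with mappings as non-root") both consume a ConfigMap through a volume with an `items` remapping; the log only records the generated pod and ConfigMap names, not the manifests. A sketch of the kind of pod the non-root variant creates, with made-up names, UID, and key/path mapping:

```yaml
# Hypothetical sketch -- the real pod is generated by the e2e framework.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # the "as non-root" part of the test
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-example
      items:
      - key: data-1                 # remap this key...
        path: path/to/data-2        # ...to a different file path in the volume
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
```

The "Item mode set" variant additionally sets a `mode` on the item and checks the resulting file permissions inside the container.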
Apr 11 13:10:19.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:10:19.334: INFO: namespace configmap-3114 deletion completed in 6.091566024s
• [SLOW TEST:10.352 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:10:19.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Apr 11 13:10:19.427: INFO: Waiting up to 5m0s for pod "client-containers-52ca17a1-a3b4-4e48-a856-f868df8e5095" in namespace "containers-6629" to be "success or failure"
Apr 11 13:10:19.448: INFO: Pod "client-containers-52ca17a1-a3b4-4e48-a856-f868df8e5095": Phase="Pending", Reason="", readiness=false. Elapsed: 21.05656ms
Apr 11 13:10:21.453: INFO: Pod "client-containers-52ca17a1-a3b4-4e48-a856-f868df8e5095": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026438813s
Apr 11 13:10:23.457: INFO: Pod "client-containers-52ca17a1-a3b4-4e48-a856-f868df8e5095": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030236172s
STEP: Saw pod success
Apr 11 13:10:23.457: INFO: Pod "client-containers-52ca17a1-a3b4-4e48-a856-f868df8e5095" satisfied condition "success or failure"
Apr 11 13:10:23.460: INFO: Trying to get logs from node iruya-worker2 pod client-containers-52ca17a1-a3b4-4e48-a856-f868df8e5095 container test-container: 
STEP: delete the pod
Apr 11 13:10:23.474: INFO: Waiting for pod client-containers-52ca17a1-a3b4-4e48-a856-f868df8e5095 to disappear
Apr 11 13:10:23.478: INFO: Pod client-containers-52ca17a1-a3b4-4e48-a856-f868df8e5095 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:10:23.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6629" for this suite.
Apr 11 13:10:29.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:10:29.562: INFO: namespace containers-6629 deletion completed in 6.080943663s
• [SLOW TEST:10.228 seconds]
[k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:10:29.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Apr 11 13:10:29.600: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix440816088/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:10:29.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3263" for this suite.
Apr 11 13:10:35.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:10:35.768: INFO: namespace kubectl-3263 deletion completed in 6.092808876s
• [SLOW TEST:6.205 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:10:35.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 11 13:10:35.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-8907'
Apr 11 13:10:38.426: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 11 13:10:38.426: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Apr 11 13:10:40.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8907'
Apr 11 13:10:40.593: INFO: stderr: ""
Apr 11 13:10:40.593: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:10:40.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8907" for this suite.
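The deprecation warning captured above is worth acting on: `kubectl run --generator=deployment/apps.v1` was removed in later kubectl releases. The replacements the warning points at look like this (image and namespace are the ones from the log; exact flag behavior varies by kubectl version):

```shell
# Deprecated form exercised by this test (kubectl ~1.15):
kubectl run e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --generator=deployment/apps.v1 --namespace=kubectl-8907

# Modern replacement for creating a Deployment:
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8907

# Modern kubectl run creates only a bare Pod:
kubectl run e2e-test-nginx-pod \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8907
```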
Apr 11 13:12:42.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:12:42.700: INFO: namespace kubectl-8907 deletion completed in 2m2.103960524s
• [SLOW TEST:126.931 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:12:42.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 11 13:12:42.750: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:12:50.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7097" for this suite.
Apr 11 13:12:56.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:12:56.252: INFO: namespace init-container-7097 deletion completed in 6.11879524s
• [SLOW TEST:13.553 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:12:56.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Apr 11 13:13:00.355: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-4e7fd553-43eb-4554-bbd4-1e6bfcb2b0ee,GenerateName:,Namespace:events-9864,SelfLink:/api/v1/namespaces/events-9864/pods/send-events-4e7fd553-43eb-4554-bbd4-1e6bfcb2b0ee,UID:0011b3da-d554-45c9-bfc8-616a3e99aadc,ResourceVersion:4841595,Generation:0,CreationTimestamp:2020-04-11 13:12:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 311345494,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtfzw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtfzw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-xtfzw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002bac890} {node.kubernetes.io/unreachable Exists NoExecute 0xc002bac8b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 13:12:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 13:12:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 13:12:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 13:12:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.60,StartTime:2020-04-11 13:12:56 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-04-11 13:12:58 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://c73c6302e3f0d500bda1fd68854c5bd43294e692a2e2d77f7758dca20aab353b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Apr 11 13:13:02.360: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Apr 11 13:13:04.366: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:13:04.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9864" for this suite.
Apr 11 13:13:42.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:13:42.487: INFO: namespace events-9864 deletion completed in 38.094667119s
• [SLOW TEST:46.234 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:13:42.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-33e50753-0ca1-4757-903b-e2d4f5e0a5b7
STEP: Creating a pod to test consume configMaps
Apr 11 13:13:42.659: INFO: Waiting up to 5m0s for pod "pod-configmaps-87c612bd-4838-41c6-a8e2-7e19a009ffd8" in namespace "configmap-2852" to be "success or failure"
Apr 11 13:13:42.708: INFO: Pod "pod-configmaps-87c612bd-4838-41c6-a8e2-7e19a009ffd8": Phase="Pending", Reason="", readiness=false. Elapsed: 49.754579ms
Apr 11 13:13:44.714: INFO: Pod "pod-configmaps-87c612bd-4838-41c6-a8e2-7e19a009ffd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05541662s
Apr 11 13:13:46.719: INFO: Pod "pod-configmaps-87c612bd-4838-41c6-a8e2-7e19a009ffd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059889475s
STEP: Saw pod success
Apr 11 13:13:46.719: INFO: Pod "pod-configmaps-87c612bd-4838-41c6-a8e2-7e19a009ffd8" satisfied condition "success or failure"
Apr 11 13:13:46.722: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-87c612bd-4838-41c6-a8e2-7e19a009ffd8 container configmap-volume-test:
STEP: delete the pod
Apr 11 13:13:46.757: INFO: Waiting for pod pod-configmaps-87c612bd-4838-41c6-a8e2-7e19a009ffd8 to disappear
Apr 11 13:13:46.771: INFO: Pod pod-configmaps-87c612bd-4838-41c6-a8e2-7e19a009ffd8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:13:46.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2852" for this suite.
Apr 11 13:13:52.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:13:52.864: INFO: namespace configmap-2852 deletion completed in 6.090528932s
• [SLOW TEST:10.377 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:13:52.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 11 13:13:52.939: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 11 13:13:57.944: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:13:58.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2489" for this suite.
Apr 11 13:14:04.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:14:05.111: INFO: namespace replication-controller-2489 deletion completed in 6.131408887s
• [SLOW TEST:12.246 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:14:05.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 11 13:14:12.547: INFO: 0 pods remaining
Apr 11 13:14:12.547: INFO: 0 pods has nil DeletionTimestamp
Apr 11 13:14:12.547: INFO:
STEP: Gathering metrics
W0411 13:14:13.671564 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 11 13:14:13.671: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:14:13.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9445" for this suite.
Apr 11 13:14:20.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:14:20.120: INFO: namespace gc-9445 deletion completed in 6.445776675s
• [SLOW TEST:15.008 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:14:20.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 11 13:14:20.199: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d14a982f-ea7f-4a3d-bac6-1bfa19802311" in namespace "projected-4088" to be "success or failure"
Apr 11 13:14:20.202: INFO: Pod "downwardapi-volume-d14a982f-ea7f-4a3d-bac6-1bfa19802311": Phase="Pending", Reason="", readiness=false. Elapsed: 3.149014ms
Apr 11 13:14:22.206: INFO: Pod "downwardapi-volume-d14a982f-ea7f-4a3d-bac6-1bfa19802311": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006745112s
Apr 11 13:14:24.210: INFO: Pod "downwardapi-volume-d14a982f-ea7f-4a3d-bac6-1bfa19802311": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010545205s
STEP: Saw pod success
Apr 11 13:14:24.210: INFO: Pod "downwardapi-volume-d14a982f-ea7f-4a3d-bac6-1bfa19802311" satisfied condition "success or failure"
Apr 11 13:14:24.212: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d14a982f-ea7f-4a3d-bac6-1bfa19802311 container client-container:
STEP: delete the pod
Apr 11 13:14:24.234: INFO: Waiting for pod downwardapi-volume-d14a982f-ea7f-4a3d-bac6-1bfa19802311 to disappear
Apr 11 13:14:24.238: INFO: Pod downwardapi-volume-d14a982f-ea7f-4a3d-bac6-1bfa19802311 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:14:24.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4088" for this suite.
Apr 11 13:14:30.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:14:30.331: INFO: namespace projected-4088 deletion completed in 6.090086581s
• [SLOW TEST:10.211 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:14:30.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0411 13:14:31.446982 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 11 13:14:31.447: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:14:31.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3166" for this suite.
Apr 11 13:14:37.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:14:37.539: INFO: namespace gc-3166 deletion completed in 6.089303162s
• [SLOW TEST:7.207 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:14:37.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-e08e9ee1-4784-45a9-b9d0-609b4eab44dc
STEP: Creating a pod to test consume secrets
Apr 11 13:14:37.624: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f3c81bee-6da2-4db6-a981-51c03d7ef9d9" in namespace "projected-9041" to be "success or failure"
Apr 11 13:14:37.647: INFO: Pod "pod-projected-secrets-f3c81bee-6da2-4db6-a981-51c03d7ef9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 22.772179ms
Apr 11 13:14:39.651: INFO: Pod "pod-projected-secrets-f3c81bee-6da2-4db6-a981-51c03d7ef9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02708111s
Apr 11 13:14:41.656: INFO: Pod "pod-projected-secrets-f3c81bee-6da2-4db6-a981-51c03d7ef9d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03153421s
STEP: Saw pod success
Apr 11 13:14:41.656: INFO: Pod "pod-projected-secrets-f3c81bee-6da2-4db6-a981-51c03d7ef9d9" satisfied condition "success or failure"
Apr 11 13:14:41.658: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-f3c81bee-6da2-4db6-a981-51c03d7ef9d9 container projected-secret-volume-test:
STEP: delete the pod
Apr 11 13:14:41.837: INFO: Waiting for pod pod-projected-secrets-f3c81bee-6da2-4db6-a981-51c03d7ef9d9 to disappear
Apr 11 13:14:41.874: INFO: Pod pod-projected-secrets-f3c81bee-6da2-4db6-a981-51c03d7ef9d9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:14:41.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9041" for this suite.
Apr 11 13:14:47.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:14:47.994: INFO: namespace projected-9041 deletion completed in 6.115732579s
• [SLOW TEST:10.454 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:14:47.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 11 13:14:48.052: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7e98980-b9c7-4568-9dfe-22698de4fa5f" in namespace "projected-7439" to be "success or failure"
Apr 11 13:14:48.055: INFO: Pod "downwardapi-volume-e7e98980-b9c7-4568-9dfe-22698de4fa5f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.251739ms
Apr 11 13:14:50.058: INFO: Pod "downwardapi-volume-e7e98980-b9c7-4568-9dfe-22698de4fa5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006911188s
Apr 11 13:14:52.062: INFO: Pod "downwardapi-volume-e7e98980-b9c7-4568-9dfe-22698de4fa5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010434551s
STEP: Saw pod success
Apr 11 13:14:52.062: INFO: Pod "downwardapi-volume-e7e98980-b9c7-4568-9dfe-22698de4fa5f" satisfied condition "success or failure"
Apr 11 13:14:52.065: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e7e98980-b9c7-4568-9dfe-22698de4fa5f container client-container:
STEP: delete the pod
Apr 11 13:14:52.101: INFO: Waiting for pod downwardapi-volume-e7e98980-b9c7-4568-9dfe-22698de4fa5f to disappear
Apr 11 13:14:52.134: INFO: Pod downwardapi-volume-e7e98980-b9c7-4568-9dfe-22698de4fa5f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:14:52.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7439" for this suite.
Apr 11 13:14:58.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:14:58.249: INFO: namespace projected-7439 deletion completed in 6.111607457s • [SLOW TEST:10.255 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:14:58.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-566bb52d-b9d4-4fbe-b4da-371f4addf544 in namespace container-probe-6156 Apr 11 13:15:02.385: INFO: Started pod liveness-566bb52d-b9d4-4fbe-b4da-371f4addf544 in namespace container-probe-6156 STEP: checking the pod's current state and verifying that restartCount is present Apr 11 13:15:02.387: INFO: Initial restart count of pod liveness-566bb52d-b9d4-4fbe-b4da-371f4addf544 is 0 Apr 11 13:15:20.477: INFO: Restart count of pod 
container-probe-6156/liveness-566bb52d-b9d4-4fbe-b4da-371f4addf544 is now 1 (18.089673696s elapsed) Apr 11 13:15:40.569: INFO: Restart count of pod container-probe-6156/liveness-566bb52d-b9d4-4fbe-b4da-371f4addf544 is now 2 (38.182062481s elapsed) Apr 11 13:16:00.689: INFO: Restart count of pod container-probe-6156/liveness-566bb52d-b9d4-4fbe-b4da-371f4addf544 is now 3 (58.301812976s elapsed) Apr 11 13:16:20.752: INFO: Restart count of pod container-probe-6156/liveness-566bb52d-b9d4-4fbe-b4da-371f4addf544 is now 4 (1m18.364978875s elapsed) Apr 11 13:17:22.894: INFO: Restart count of pod container-probe-6156/liveness-566bb52d-b9d4-4fbe-b4da-371f4addf544 is now 5 (2m20.50664962s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:17:22.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6156" for this suite. Apr 11 13:17:28.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:17:29.006: INFO: namespace container-probe-6156 deletion completed in 6.087226175s • [SLOW TEST:150.757 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 
13:17:29.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 11 13:17:29.111: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 11 13:17:29.122: INFO: Number of nodes with available pods: 0
Apr 11 13:17:29.122: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 11 13:17:29.199: INFO: Number of nodes with available pods: 0
Apr 11 13:17:29.199: INFO: Node iruya-worker is running more than one daemon pod
Apr 11 13:17:30.206: INFO: Number of nodes with available pods: 0
Apr 11 13:17:30.206: INFO: Node iruya-worker is running more than one daemon pod
Apr 11 13:17:31.202: INFO: Number of nodes with available pods: 0
Apr 11 13:17:31.202: INFO: Node iruya-worker is running more than one daemon pod
Apr 11 13:17:32.206: INFO: Number of nodes with available pods: 1
Apr 11 13:17:32.206: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 11 13:17:32.264: INFO: Number of nodes with available pods: 1
Apr 11 13:17:32.264: INFO: Number of running nodes: 0, number of available pods: 1
Apr 11 13:17:33.269: INFO: Number of nodes with available pods: 0
Apr 11 13:17:33.269: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 11 13:17:33.322: INFO: Number of nodes with available pods: 0
Apr 11 13:17:33.322: INFO: Node iruya-worker is running more than one daemon pod
Apr 11 13:17:34.327: INFO: Number of nodes with available pods: 0
Apr 11 13:17:34.327: INFO: Node iruya-worker is running more than one daemon pod
Apr 11 13:17:35.327: INFO: Number of nodes with available pods: 0
Apr 11 13:17:35.327: INFO: Node iruya-worker is running more than one daemon pod
Apr 11 13:17:36.327: INFO: Number of nodes with available pods: 0
Apr 11 13:17:36.327: INFO: Node iruya-worker is running more than one daemon pod
Apr 11 13:17:37.327: INFO: Number of nodes with available pods: 0
Apr 11 13:17:37.327: INFO: Node iruya-worker is running more than one daemon pod
Apr 11 13:17:38.327: INFO: Number of nodes with available pods: 0
Apr 11 13:17:38.327: INFO: Node iruya-worker is running more than one daemon pod
Apr 11 13:17:39.327: INFO: Number of nodes with available pods: 0
Apr 11 13:17:39.327: INFO: Node iruya-worker is running more than one daemon pod
Apr 11 13:17:40.327: INFO: Number of nodes with available pods: 0
Apr 11 13:17:40.328: INFO: Node iruya-worker is running more than one daemon pod
Apr 11 13:17:41.327: INFO: Number of nodes with available pods: 0
Apr 11 13:17:41.327: INFO: Node iruya-worker is running more than one daemon pod
Apr 11 13:17:42.326: INFO: Number of nodes with available pods: 0
Apr 11 13:17:42.326: INFO: Node iruya-worker is running more than one daemon pod
Apr 11 13:17:43.327: INFO: Number of nodes with available pods: 0
Apr 11 13:17:43.327: INFO: Node iruya-worker is running more than one daemon pod
Apr 11 13:17:44.327: INFO: Number of nodes with available pods: 0
Apr 11 13:17:44.327: INFO: Node iruya-worker is running more than one daemon pod
Apr 11 13:17:45.327: INFO: Number of nodes with available pods: 1
Apr 11 13:17:45.327: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5361, will wait for the garbage collector to delete the pods
Apr 11 13:17:45.392: INFO: Deleting DaemonSet.extensions daemon-set took: 6.68261ms
Apr 11 13:17:45.692: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.289036ms
Apr 11 13:17:52.196: INFO: Number of nodes with available pods: 0
Apr 11 13:17:52.196: INFO: Number of running nodes: 0, number of available pods: 0
Apr 11 13:17:52.201: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5361/daemonsets","resourceVersion":"4842611"},"items":null}
Apr 11 13:17:52.204: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5361/pods","resourceVersion":"4842611"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:17:52.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5361" for this suite.
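Aside: the node-selector behavior the test above polls for can be sketched in plain Go. This is illustrative only — the node names are taken from the log, but the label key, values, and the helper functions are invented, not the controller's real implementation (which lives in the daemonset controller):

```go
package main

import (
	"fmt"
	"sort"
)

// matches reports whether a node's labels satisfy every key/value pair
// in a DaemonSet's nodeSelector — the rule behind "daemon pods should
// not be running on any nodes" until a node is relabeled.
func matches(nodeLabels, selector map[string]string) bool {
	for k, v := range selector {
		if nodeLabels[k] != v {
			return false
		}
	}
	return true
}

// eligibleNodes returns the (sorted) names of nodes that should run a
// daemon pod for the given selector.
func eligibleNodes(nodes map[string]map[string]string, selector map[string]string) []string {
	var out []string
	for name, labels := range nodes {
		if matches(labels, selector) {
			out = append(out, name)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	// Hypothetical cluster state; "color" is an invented label key.
	nodes := map[string]map[string]string{
		"iruya-worker":  {"color": "blue"},
		"iruya-worker2": {},
	}
	// Selector "color=green" matches nothing until a node is relabeled.
	fmt.Println(eligibleNodes(nodes, map[string]string{"color": "green"})) // []
	nodes["iruya-worker"]["color"] = "green"
	fmt.Println(eligibleNodes(nodes, map[string]string{"color": "green"})) // [iruya-worker]
}
```

This mirrors the log's progression: available pods stay at 0 while no node carries the selected label, then jump to 1 once iruya-worker is relabeled.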
Apr 11 13:17:58.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:17:58.337: INFO: namespace daemonsets-5361 deletion completed in 6.10123251s
• [SLOW TEST:29.330 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:17:58.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:18:03.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6183" for this suite.
Apr 11 13:18:10.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:18:10.171: INFO: namespace watch-6183 deletion completed in 6.176484221s
• [SLOW TEST:11.833 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:18:10.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Apr 11 13:18:10.246: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7372,SelfLink:/api/v1/namespaces/watch-7372/configmaps/e2e-watch-test-configmap-a,UID:b0a4f565-122b-494e-8af3-21e5cbfd6232,ResourceVersion:4842785,Generation:0,CreationTimestamp:2020-04-11 13:18:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 11 13:18:10.246: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7372,SelfLink:/api/v1/namespaces/watch-7372/configmaps/e2e-watch-test-configmap-a,UID:b0a4f565-122b-494e-8af3-21e5cbfd6232,ResourceVersion:4842785,Generation:0,CreationTimestamp:2020-04-11 13:18:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 11 13:18:20.254: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7372,SelfLink:/api/v1/namespaces/watch-7372/configmaps/e2e-watch-test-configmap-a,UID:b0a4f565-122b-494e-8af3-21e5cbfd6232,ResourceVersion:4842805,Generation:0,CreationTimestamp:2020-04-11 13:18:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 11 13:18:20.255: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7372,SelfLink:/api/v1/namespaces/watch-7372/configmaps/e2e-watch-test-configmap-a,UID:b0a4f565-122b-494e-8af3-21e5cbfd6232,ResourceVersion:4842805,Generation:0,CreationTimestamp:2020-04-11 13:18:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 11 13:18:30.263: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7372,SelfLink:/api/v1/namespaces/watch-7372/configmaps/e2e-watch-test-configmap-a,UID:b0a4f565-122b-494e-8af3-21e5cbfd6232,ResourceVersion:4842825,Generation:0,CreationTimestamp:2020-04-11 13:18:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 11 13:18:30.263: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7372,SelfLink:/api/v1/namespaces/watch-7372/configmaps/e2e-watch-test-configmap-a,UID:b0a4f565-122b-494e-8af3-21e5cbfd6232,ResourceVersion:4842825,Generation:0,CreationTimestamp:2020-04-11 13:18:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 11 13:18:40.270: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7372,SelfLink:/api/v1/namespaces/watch-7372/configmaps/e2e-watch-test-configmap-a,UID:b0a4f565-122b-494e-8af3-21e5cbfd6232,ResourceVersion:4842846,Generation:0,CreationTimestamp:2020-04-11 13:18:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 11 13:18:40.270: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7372,SelfLink:/api/v1/namespaces/watch-7372/configmaps/e2e-watch-test-configmap-a,UID:b0a4f565-122b-494e-8af3-21e5cbfd6232,ResourceVersion:4842846,Generation:0,CreationTimestamp:2020-04-11 13:18:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 11 13:18:50.279: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7372,SelfLink:/api/v1/namespaces/watch-7372/configmaps/e2e-watch-test-configmap-b,UID:6764db6b-c8b4-4adb-95ba-2461089b9e27,ResourceVersion:4842866,Generation:0,CreationTimestamp:2020-04-11 13:18:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 11 13:18:50.279: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7372,SelfLink:/api/v1/namespaces/watch-7372/configmaps/e2e-watch-test-configmap-b,UID:6764db6b-c8b4-4adb-95ba-2461089b9e27,ResourceVersion:4842866,Generation:0,CreationTimestamp:2020-04-11 13:18:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 11 13:19:00.286: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7372,SelfLink:/api/v1/namespaces/watch-7372/configmaps/e2e-watch-test-configmap-b,UID:6764db6b-c8b4-4adb-95ba-2461089b9e27,ResourceVersion:4842887,Generation:0,CreationTimestamp:2020-04-11 13:18:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 11 13:19:00.287: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7372,SelfLink:/api/v1/namespaces/watch-7372/configmaps/e2e-watch-test-configmap-b,UID:6764db6b-c8b4-4adb-95ba-2461089b9e27,ResourceVersion:4842887,Generation:0,CreationTimestamp:2020-04-11 13:18:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:19:10.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7372" for this suite.
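Aside: each event above is logged twice because two of the three watches (the exact-label watch and the A-or-B watch) match every configmap. A minimal sketch of that dispatch, with the watch names invented for illustration:

```go
package main

import "fmt"

// watchersFor returns which of three hypothetical watches (label A,
// label B, and A-or-B) should observe an event for a ConfigMap
// carrying the given "watch-this-configmap" label value.
func watchersFor(label string) []string {
	var out []string
	if label == "multiple-watchers-A" {
		out = append(out, "watch-A")
	}
	if label == "multiple-watchers-B" {
		out = append(out, "watch-B")
	}
	if label == "multiple-watchers-A" || label == "multiple-watchers-B" {
		out = append(out, "watch-AB")
	}
	return out
}

func main() {
	// Exactly two watchers fire per event, matching the doubled log lines.
	fmt.Println(watchersFor("multiple-watchers-A")) // [watch-A watch-AB]
	fmt.Println(watchersFor("multiple-watchers-B")) // [watch-B watch-AB]
}
```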
Apr 11 13:19:16.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:19:16.379: INFO: namespace watch-7372 deletion completed in 6.087466249s
• [SLOW TEST:66.208 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:19:16.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 11 13:19:16.457: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 11 13:19:16.465: INFO: Waiting for terminating namespaces to be deleted...
Apr 11 13:19:16.467: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 11 13:19:16.475: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 11 13:19:16.475: INFO: Container kube-proxy ready: true, restart count 0
Apr 11 13:19:16.475: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 11 13:19:16.475: INFO: Container kindnet-cni ready: true, restart count 0
Apr 11 13:19:16.475: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 11 13:19:16.481: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 11 13:19:16.481: INFO: Container kube-proxy ready: true, restart count 0
Apr 11 13:19:16.481: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 11 13:19:16.481: INFO: Container kindnet-cni ready: true, restart count 0
Apr 11 13:19:16.481: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 11 13:19:16.481: INFO: Container coredns ready: true, restart count 0
Apr 11 13:19:16.481: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 11 13:19:16.481: INFO: Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Apr 11 13:19:16.541: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2
Apr 11 13:19:16.541: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2
Apr 11 13:19:16.541: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker
Apr 11 13:19:16.541: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2
Apr 11 13:19:16.541: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker
Apr 11 13:19:16.541: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-7d3163b0-6db4-407d-8941-bf4ee097486d.1604c670d2845a02], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7548/filler-pod-7d3163b0-6db4-407d-8941-bf4ee097486d to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-7d3163b0-6db4-407d-8941-bf4ee097486d.1604c671545c07c2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-7d3163b0-6db4-407d-8941-bf4ee097486d.1604c671843a99fd], Reason = [Created], Message = [Created container filler-pod-7d3163b0-6db4-407d-8941-bf4ee097486d]
STEP: Considering event: Type = [Normal], Name = [filler-pod-7d3163b0-6db4-407d-8941-bf4ee097486d.1604c671939c068e], Reason = [Started], Message = [Started container filler-pod-7d3163b0-6db4-407d-8941-bf4ee097486d]
STEP: Considering event: Type = [Normal], Name = [filler-pod-86ca8174-2f30-48ec-b865-ae830d6e70a1.1604c670cf29e096], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7548/filler-pod-86ca8174-2f30-48ec-b865-ae830d6e70a1 to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-86ca8174-2f30-48ec-b865-ae830d6e70a1.1604c6711c79ca3e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-86ca8174-2f30-48ec-b865-ae830d6e70a1.1604c6716d4cc64d], Reason = [Created], Message = [Created container filler-pod-86ca8174-2f30-48ec-b865-ae830d6e70a1]
STEP: Considering event: Type = [Normal], Name = [filler-pod-86ca8174-2f30-48ec-b865-ae830d6e70a1.1604c671843c2dd4], Reason = [Started], Message = [Started container filler-pod-86ca8174-2f30-48ec-b865-ae830d6e70a1]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1604c671c1f74f8c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:19:21.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7548" for this suite.
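Aside: the "Insufficient cpu" rejection above is simple arithmetic — existing requests plus the new pod's request must not exceed the node's allocatable CPU. A sketch with hypothetical millicore numbers (the log only shows the per-pod requests, not allocatable, so these values are invented):

```go
package main

import "fmt"

// fits reports whether a pod requesting reqMilli CPU can schedule on a
// node, given the node's allocatable CPU and the sum of existing
// requests — the check behind the FailedScheduling event above.
func fits(allocMilli, usedMilli, reqMilli int64) bool {
	return usedMilli+reqMilli <= allocMilli
}

func main() {
	// Hypothetical worker: 2000m allocatable, 100m already requested
	// by system pods. The filler pod is sized to consume the rest.
	alloc, system := int64(2000), int64(100)
	filler := alloc - system
	fmt.Println(fits(alloc, system, filler))     // true: filler pod schedules
	fmt.Println(fits(alloc, system+filler, 600)) // false: Insufficient cpu
}
```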
Apr 11 13:19:27.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:19:27.850: INFO: namespace sched-pred-7548 deletion completed in 6.072477352s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:11.471 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:19:27.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 11 13:19:27.966: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:19:35.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7373" for this suite.
Apr 11 13:20:03.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:20:03.403: INFO: namespace init-container-7373 deletion completed in 28.096015997s
• [SLOW TEST:35.552 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:20:03.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 11 13:20:08.535: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:20:09.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-39" for this suite.
Apr 11 13:20:31.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:20:31.733: INFO: namespace replicaset-39 deletion completed in 22.152854773s
• [SLOW TEST:28.330 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:20:31.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-9689/secret-test-08510851-60f2-4a9c-91a8-5d23e9d2499a
STEP: Creating a pod to test consume secrets
Apr 11 13:20:31.822: INFO: Waiting up to 5m0s for pod "pod-configmaps-3f80176d-3fcf-4e87-8c02-781e9f16e3e7" in namespace "secrets-9689" to be "success or failure"
Apr 11 13:20:31.835: INFO: Pod "pod-configmaps-3f80176d-3fcf-4e87-8c02-781e9f16e3e7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.078199ms
Apr 11 13:20:33.857: INFO: Pod "pod-configmaps-3f80176d-3fcf-4e87-8c02-781e9f16e3e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034787953s
Apr 11 13:20:35.860: INFO: Pod "pod-configmaps-3f80176d-3fcf-4e87-8c02-781e9f16e3e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037639012s
STEP: Saw pod success
Apr 11 13:20:35.860: INFO: Pod "pod-configmaps-3f80176d-3fcf-4e87-8c02-781e9f16e3e7" satisfied condition "success or failure"
Apr 11 13:20:35.862: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-3f80176d-3fcf-4e87-8c02-781e9f16e3e7 container env-test:
STEP: delete the pod
Apr 11 13:20:35.881: INFO: Waiting for pod pod-configmaps-3f80176d-3fcf-4e87-8c02-781e9f16e3e7 to disappear
Apr 11 13:20:35.911: INFO: Pod pod-configmaps-3f80176d-3fcf-4e87-8c02-781e9f16e3e7 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:20:35.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9689" for this suite.
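Aside: what the env-test container above consumes is a decoded Secret value — Secret `.data` fields are stored base64-encoded in the API object and decoded before being exposed as environment variables. A round-trip sketch with an invented value (the test's real secret contents are not shown in the log):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodeSecretValue reverses the base64 encoding used by Secret .data
// fields in the API — the decoding step that happens before the value
// is injected into the container's environment.
func decodeSecretValue(encoded string) (string, error) {
	b, err := base64.StdEncoding.DecodeString(encoded)
	return string(b), err
}

func main() {
	// Hypothetical secret value round-tripped through its stored form.
	stored := base64.StdEncoding.EncodeToString([]byte("value-1")) // "dmFsdWUtMQ=="
	v, err := decodeSecretValue(stored)
	if err != nil {
		panic(err)
	}
	fmt.Printf("SECRET_DATA=%s\n", v) // SECRET_DATA=value-1
}
```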
Apr 11 13:20:41.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:20:42.003: INFO: namespace secrets-9689 deletion completed in 6.089051051s
• [SLOW TEST:10.270 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:20:42.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-b622c7b2-538c-43cf-8a69-5bad14aedcda
STEP: Creating secret with name secret-projected-all-test-volume-e2e4feef-1435-4d39-ba31-ee872e71c016
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 11 13:20:42.097: INFO: Waiting up to 5m0s for pod "projected-volume-19d241cd-fd3d-4391-b4c4-257a242af31f" in namespace "projected-2368" to be "success or failure"
Apr 11 13:20:42.101: INFO: Pod "projected-volume-19d241cd-fd3d-4391-b4c4-257a242af31f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.654798ms
Apr 11 13:20:44.106: INFO: Pod "projected-volume-19d241cd-fd3d-4391-b4c4-257a242af31f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008113624s
Apr 11 13:20:46.110: INFO: Pod "projected-volume-19d241cd-fd3d-4391-b4c4-257a242af31f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012299072s
STEP: Saw pod success
Apr 11 13:20:46.110: INFO: Pod "projected-volume-19d241cd-fd3d-4391-b4c4-257a242af31f" satisfied condition "success or failure"
Apr 11 13:20:46.112: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-19d241cd-fd3d-4391-b4c4-257a242af31f container projected-all-volume-test:
STEP: delete the pod
Apr 11 13:20:46.149: INFO: Waiting for pod projected-volume-19d241cd-fd3d-4391-b4c4-257a242af31f to disappear
Apr 11 13:20:46.151: INFO: Pod projected-volume-19d241cd-fd3d-4391-b4c4-257a242af31f no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:20:46.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2368" for this suite.
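Aside: a projected volume, as exercised above, merges several sources (configMap, secret, downward API) into a single mounted directory. A sketch of that merge as a path-to-content map — the file names below are invented for illustration, not the test's actual keys:

```go
package main

import "fmt"

// project merges several volume sources into one view, later sources
// overriding earlier ones on path collisions — a simplification of
// how a projected volume presents one directory to the container.
func project(sources []map[string]string) map[string]string {
	merged := map[string]string{}
	for _, s := range sources {
		for path, content := range s {
			merged[path] = content
		}
	}
	return merged
}

func main() {
	merged := project([]map[string]string{
		{"podname": "projected-volume-test"},      // downward API (hypothetical)
		{"configmap-data-1": "configmap-value-1"}, // configMap (hypothetical)
		{"secret-data-1": "secret-value-1"},       // secret (hypothetical)
	})
	fmt.Println(len(merged)) // 3 entries visible in one volume
}
```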
Apr 11 13:20:52.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:20:52.251: INFO: namespace projected-2368 deletion completed in 6.095692417s
• [SLOW TEST:10.247 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:20:52.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 11 13:20:52.319: INFO: Waiting up to 5m0s for pod "downwardapi-volume-578ced0f-d207-4395-a7f4-c7879812b3e2" in namespace "downward-api-8421" to be "success or failure"
Apr 11 13:20:52.323: INFO: Pod "downwardapi-volume-578ced0f-d207-4395-a7f4-c7879812b3e2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.48301ms
Apr 11 13:20:54.327: INFO: Pod "downwardapi-volume-578ced0f-d207-4395-a7f4-c7879812b3e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007789379s
Apr 11 13:20:56.332: INFO: Pod "downwardapi-volume-578ced0f-d207-4395-a7f4-c7879812b3e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012298021s
STEP: Saw pod success
Apr 11 13:20:56.332: INFO: Pod "downwardapi-volume-578ced0f-d207-4395-a7f4-c7879812b3e2" satisfied condition "success or failure"
Apr 11 13:20:56.336: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-578ced0f-d207-4395-a7f4-c7879812b3e2 container client-container:
STEP: delete the pod
Apr 11 13:20:56.356: INFO: Waiting for pod downwardapi-volume-578ced0f-d207-4395-a7f4-c7879812b3e2 to disappear
Apr 11 13:20:56.360: INFO: Pod downwardapi-volume-578ced0f-d207-4395-a7f4-c7879812b3e2 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:20:56.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8421" for this suite.
Apr 11 13:21:02.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:21:02.478: INFO: namespace downward-api-8421 deletion completed in 6.113991781s • [SLOW TEST:10.227 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:21:02.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Apr 11 13:21:06.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-42845743-ee2b-4b94-a987-2a1bf678ae5c -c busybox-main-container --namespace=emptydir-4561 -- cat /usr/share/volumeshare/shareddata.txt' Apr 11 13:21:09.274: INFO: stderr: "I0411 13:21:09.173067 874 log.go:172] (0xc000bac4d0) (0xc000b82960) Create stream\nI0411 13:21:09.173099 874 log.go:172] (0xc000bac4d0) (0xc000b82960) Stream 
added, broadcasting: 1\nI0411 13:21:09.176692 874 log.go:172] (0xc000bac4d0) Reply frame received for 1\nI0411 13:21:09.176745 874 log.go:172] (0xc000bac4d0) (0xc000b82000) Create stream\nI0411 13:21:09.176761 874 log.go:172] (0xc000bac4d0) (0xc000b82000) Stream added, broadcasting: 3\nI0411 13:21:09.177790 874 log.go:172] (0xc000bac4d0) Reply frame received for 3\nI0411 13:21:09.177817 874 log.go:172] (0xc000bac4d0) (0xc000b820a0) Create stream\nI0411 13:21:09.177828 874 log.go:172] (0xc000bac4d0) (0xc000b820a0) Stream added, broadcasting: 5\nI0411 13:21:09.178510 874 log.go:172] (0xc000bac4d0) Reply frame received for 5\nI0411 13:21:09.266766 874 log.go:172] (0xc000bac4d0) Data frame received for 5\nI0411 13:21:09.266807 874 log.go:172] (0xc000b820a0) (5) Data frame handling\nI0411 13:21:09.266849 874 log.go:172] (0xc000bac4d0) Data frame received for 3\nI0411 13:21:09.266886 874 log.go:172] (0xc000b82000) (3) Data frame handling\nI0411 13:21:09.266923 874 log.go:172] (0xc000b82000) (3) Data frame sent\nI0411 13:21:09.266947 874 log.go:172] (0xc000bac4d0) Data frame received for 3\nI0411 13:21:09.266962 874 log.go:172] (0xc000b82000) (3) Data frame handling\nI0411 13:21:09.268824 874 log.go:172] (0xc000bac4d0) Data frame received for 1\nI0411 13:21:09.268849 874 log.go:172] (0xc000b82960) (1) Data frame handling\nI0411 13:21:09.268876 874 log.go:172] (0xc000b82960) (1) Data frame sent\nI0411 13:21:09.268893 874 log.go:172] (0xc000bac4d0) (0xc000b82960) Stream removed, broadcasting: 1\nI0411 13:21:09.268905 874 log.go:172] (0xc000bac4d0) Go away received\nI0411 13:21:09.269434 874 log.go:172] (0xc000bac4d0) (0xc000b82960) Stream removed, broadcasting: 1\nI0411 13:21:09.269456 874 log.go:172] (0xc000bac4d0) (0xc000b82000) Stream removed, broadcasting: 3\nI0411 13:21:09.269464 874 log.go:172] (0xc000bac4d0) (0xc000b820a0) Stream removed, broadcasting: 5\n" Apr 11 13:21:09.274: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir 
volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:21:09.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4561" for this suite. Apr 11 13:21:15.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:21:15.367: INFO: namespace emptydir-4561 deletion completed in 6.089058376s • [SLOW TEST:12.889 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:21:15.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 11 13:21:15.425: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6958' Apr 11 13:21:15.536: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 11 13:21:15.536: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Apr 11 13:21:15.542: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 11 13:21:15.616: INFO: scanned /root for discovery docs: Apr 11 13:21:15.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6958' Apr 11 13:21:31.457: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 11 13:21:31.457: INFO: stdout: "Created e2e-test-nginx-rc-4d2d853a2b018918c68f52ee029bafc8\nScaling up e2e-test-nginx-rc-4d2d853a2b018918c68f52ee029bafc8 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-4d2d853a2b018918c68f52ee029bafc8 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-4d2d853a2b018918c68f52ee029bafc8 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Apr 11 13:21:31.457: INFO: stdout: "Created e2e-test-nginx-rc-4d2d853a2b018918c68f52ee029bafc8\nScaling up e2e-test-nginx-rc-4d2d853a2b018918c68f52ee029bafc8 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-4d2d853a2b018918c68f52ee029bafc8 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-4d2d853a2b018918c68f52ee029bafc8 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Apr 11 13:21:31.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6958' Apr 11 13:21:31.553: INFO: stderr: "" Apr 11 13:21:31.553: INFO: stdout: "e2e-test-nginx-rc-4d2d853a2b018918c68f52ee029bafc8-cz9lp " Apr 11 13:21:31.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-4d2d853a2b018918c68f52ee029bafc8-cz9lp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6958' Apr 11 13:21:31.652: INFO: stderr: "" Apr 11 13:21:31.652: INFO: stdout: "true" Apr 11 13:21:31.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-4d2d853a2b018918c68f52ee029bafc8-cz9lp -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6958' Apr 11 13:21:31.745: INFO: stderr: "" Apr 11 13:21:31.745: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Apr 11 13:21:31.745: INFO: e2e-test-nginx-rc-4d2d853a2b018918c68f52ee029bafc8-cz9lp is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Apr 11 13:21:31.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6958' Apr 11 13:21:31.851: INFO: stderr: "" Apr 11 13:21:31.851: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:21:31.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6958" for this suite. 
Apr 11 13:21:37.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:21:37.997: INFO: namespace kubectl-6958 deletion completed in 6.138270535s • [SLOW TEST:22.629 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:21:37.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-70a53d19-a2d2-46f7-9f3f-c1ca6560af4b STEP: Creating a pod to test consume configMaps Apr 11 13:21:38.071: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e2784602-0891-4b3b-b81b-224427727cc8" in namespace "projected-4875" to be "success or failure" Apr 11 13:21:38.124: INFO: Pod "pod-projected-configmaps-e2784602-0891-4b3b-b81b-224427727cc8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 52.593065ms Apr 11 13:21:40.128: INFO: Pod "pod-projected-configmaps-e2784602-0891-4b3b-b81b-224427727cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057083283s Apr 11 13:21:42.132: INFO: Pod "pod-projected-configmaps-e2784602-0891-4b3b-b81b-224427727cc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060794163s STEP: Saw pod success Apr 11 13:21:42.132: INFO: Pod "pod-projected-configmaps-e2784602-0891-4b3b-b81b-224427727cc8" satisfied condition "success or failure" Apr 11 13:21:42.135: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-e2784602-0891-4b3b-b81b-224427727cc8 container projected-configmap-volume-test: STEP: delete the pod Apr 11 13:21:42.165: INFO: Waiting for pod pod-projected-configmaps-e2784602-0891-4b3b-b81b-224427727cc8 to disappear Apr 11 13:21:42.174: INFO: Pod pod-projected-configmaps-e2784602-0891-4b3b-b81b-224427727cc8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:21:42.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4875" for this suite. 
Apr 11 13:21:48.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:21:48.274: INFO: namespace projected-4875 deletion completed in 6.095966862s • [SLOW TEST:10.277 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:21:48.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-e8b6d102-8889-4a58-9962-fd54887f2dc3 STEP: Creating a pod to test consume secrets Apr 11 13:21:48.373: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-335f6598-4432-49f2-a48d-f992235da711" in namespace "projected-5954" to be "success or failure" Apr 11 13:21:48.378: INFO: Pod "pod-projected-secrets-335f6598-4432-49f2-a48d-f992235da711": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194596ms Apr 11 13:21:50.382: INFO: Pod "pod-projected-secrets-335f6598-4432-49f2-a48d-f992235da711": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008555066s Apr 11 13:21:52.387: INFO: Pod "pod-projected-secrets-335f6598-4432-49f2-a48d-f992235da711": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013292719s STEP: Saw pod success Apr 11 13:21:52.387: INFO: Pod "pod-projected-secrets-335f6598-4432-49f2-a48d-f992235da711" satisfied condition "success or failure" Apr 11 13:21:52.390: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-335f6598-4432-49f2-a48d-f992235da711 container secret-volume-test: STEP: delete the pod Apr 11 13:21:52.422: INFO: Waiting for pod pod-projected-secrets-335f6598-4432-49f2-a48d-f992235da711 to disappear Apr 11 13:21:52.432: INFO: Pod pod-projected-secrets-335f6598-4432-49f2-a48d-f992235da711 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:21:52.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5954" for this suite. 
Apr 11 13:21:58.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:21:58.537: INFO: namespace projected-5954 deletion completed in 6.10113633s • [SLOW TEST:10.262 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:21:58.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 11 13:21:58.596: INFO: Waiting up to 5m0s for pod "pod-b3940e69-4a02-49eb-8ef0-321d99f694d3" in namespace "emptydir-2248" to be "success or failure" Apr 11 13:21:58.599: INFO: Pod "pod-b3940e69-4a02-49eb-8ef0-321d99f694d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.724272ms Apr 11 13:22:00.602: INFO: Pod "pod-b3940e69-4a02-49eb-8ef0-321d99f694d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00553254s Apr 11 13:22:02.607: INFO: Pod "pod-b3940e69-4a02-49eb-8ef0-321d99f694d3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010158857s STEP: Saw pod success Apr 11 13:22:02.607: INFO: Pod "pod-b3940e69-4a02-49eb-8ef0-321d99f694d3" satisfied condition "success or failure" Apr 11 13:22:02.610: INFO: Trying to get logs from node iruya-worker2 pod pod-b3940e69-4a02-49eb-8ef0-321d99f694d3 container test-container: STEP: delete the pod Apr 11 13:22:02.627: INFO: Waiting for pod pod-b3940e69-4a02-49eb-8ef0-321d99f694d3 to disappear Apr 11 13:22:02.641: INFO: Pod pod-b3940e69-4a02-49eb-8ef0-321d99f694d3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:22:02.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2248" for this suite. Apr 11 13:22:08.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:22:08.747: INFO: namespace emptydir-2248 deletion completed in 6.10193003s • [SLOW TEST:10.210 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:22:08.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should 
support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 11 13:22:08.813: INFO: Waiting up to 5m0s for pod "pod-9fe997ae-1299-48ab-a4aa-be6e0db2e780" in namespace "emptydir-4218" to be "success or failure" Apr 11 13:22:08.831: INFO: Pod "pod-9fe997ae-1299-48ab-a4aa-be6e0db2e780": Phase="Pending", Reason="", readiness=false. Elapsed: 17.743271ms Apr 11 13:22:10.848: INFO: Pod "pod-9fe997ae-1299-48ab-a4aa-be6e0db2e780": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034868326s Apr 11 13:22:12.852: INFO: Pod "pod-9fe997ae-1299-48ab-a4aa-be6e0db2e780": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039175397s STEP: Saw pod success Apr 11 13:22:12.852: INFO: Pod "pod-9fe997ae-1299-48ab-a4aa-be6e0db2e780" satisfied condition "success or failure" Apr 11 13:22:12.856: INFO: Trying to get logs from node iruya-worker2 pod pod-9fe997ae-1299-48ab-a4aa-be6e0db2e780 container test-container: STEP: delete the pod Apr 11 13:22:12.884: INFO: Waiting for pod pod-9fe997ae-1299-48ab-a4aa-be6e0db2e780 to disappear Apr 11 13:22:12.895: INFO: Pod pod-9fe997ae-1299-48ab-a4aa-be6e0db2e780 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:22:12.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4218" for this suite. 
Apr 11 13:22:18.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:22:19.047: INFO: namespace emptydir-4218 deletion completed in 6.144767454s • [SLOW TEST:10.300 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:22:19.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 11 13:22:19.144: INFO: Create a RollingUpdate DaemonSet Apr 11 13:22:19.148: INFO: Check that daemon pods launch on every node of the cluster Apr 11 13:22:19.166: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 13:22:19.177: INFO: Number of nodes with available pods: 0 Apr 11 13:22:19.177: INFO: Node iruya-worker is running more than one daemon pod Apr 11 13:22:20.255: INFO: DaemonSet pods can't tolerate node iruya-control-plane with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 13:22:20.259: INFO: Number of nodes with available pods: 0 Apr 11 13:22:20.259: INFO: Node iruya-worker is running more than one daemon pod Apr 11 13:22:21.181: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 13:22:21.184: INFO: Number of nodes with available pods: 0 Apr 11 13:22:21.184: INFO: Node iruya-worker is running more than one daemon pod Apr 11 13:22:22.182: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 13:22:22.185: INFO: Number of nodes with available pods: 0 Apr 11 13:22:22.185: INFO: Node iruya-worker is running more than one daemon pod Apr 11 13:22:23.182: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 13:22:23.185: INFO: Number of nodes with available pods: 2 Apr 11 13:22:23.186: INFO: Number of running nodes: 2, number of available pods: 2 Apr 11 13:22:23.186: INFO: Update the DaemonSet to trigger a rollout Apr 11 13:22:23.192: INFO: Updating DaemonSet daemon-set Apr 11 13:22:27.221: INFO: Roll back the DaemonSet before rollout is complete Apr 11 13:22:27.227: INFO: Updating DaemonSet daemon-set Apr 11 13:22:27.227: INFO: Make sure DaemonSet rollback is complete Apr 11 13:22:27.235: INFO: Wrong image for pod: daemon-set-cgk5t. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Apr 11 13:22:27.235: INFO: Pod daemon-set-cgk5t is not available Apr 11 13:22:27.258: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 13:22:28.380: INFO: Wrong image for pod: daemon-set-cgk5t. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 11 13:22:28.380: INFO: Pod daemon-set-cgk5t is not available Apr 11 13:22:28.385: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 13:22:29.262: INFO: Pod daemon-set-bwl72 is not available Apr 11 13:22:29.266: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7768, will wait for the garbage collector to delete the pods Apr 11 13:22:29.334: INFO: Deleting DaemonSet.extensions daemon-set took: 8.072823ms Apr 11 13:22:29.634: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.247178ms Apr 11 13:22:32.638: INFO: Number of nodes with available pods: 0 Apr 11 13:22:32.638: INFO: Number of running nodes: 0, number of available pods: 0 Apr 11 13:22:32.641: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7768/daemonsets","resourceVersion":"4843802"},"items":null} Apr 11 13:22:32.644: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7768/pods","resourceVersion":"4843802"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:22:32.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7768" for this suite. Apr 11 13:22:38.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:22:38.757: INFO: namespace daemonsets-7768 deletion completed in 6.102435729s • [SLOW TEST:19.711 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:22:38.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 11 13:22:42.907: INFO: Waiting up to 5m0s for pod "client-envvars-b30d9abb-2c3d-46c3-b30c-d0d9f07034da" in namespace "pods-992" to be "success or failure" Apr 11 13:22:42.912: INFO: Pod "client-envvars-b30d9abb-2c3d-46c3-b30c-d0d9f07034da": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.475408ms Apr 11 13:22:45.051: INFO: Pod "client-envvars-b30d9abb-2c3d-46c3-b30c-d0d9f07034da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143249004s Apr 11 13:22:47.055: INFO: Pod "client-envvars-b30d9abb-2c3d-46c3-b30c-d0d9f07034da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.147751191s STEP: Saw pod success Apr 11 13:22:47.055: INFO: Pod "client-envvars-b30d9abb-2c3d-46c3-b30c-d0d9f07034da" satisfied condition "success or failure" Apr 11 13:22:47.058: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-b30d9abb-2c3d-46c3-b30c-d0d9f07034da container env3cont: STEP: delete the pod Apr 11 13:22:47.095: INFO: Waiting for pod client-envvars-b30d9abb-2c3d-46c3-b30c-d0d9f07034da to disappear Apr 11 13:22:47.110: INFO: Pod client-envvars-b30d9abb-2c3d-46c3-b30c-d0d9f07034da no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:22:47.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-992" for this suite. 
Apr 11 13:23:37.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:23:37.266: INFO: namespace pods-992 deletion completed in 50.149302177s • [SLOW TEST:58.508 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:23:37.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0411 13:24:17.378864 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 11 13:24:17.378: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:24:17.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8614" for this suite.
Apr 11 13:24:25.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:24:25.456: INFO: namespace gc-8614 deletion completed in 8.07414405s • [SLOW TEST:48.189 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:24:25.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 11 13:24:25.745: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7432,SelfLink:/api/v1/namespaces/watch-7432/configmaps/e2e-watch-test-label-changed,UID:0e323fc9-e01f-4d73-a694-f309b25fcb6e,ResourceVersion:4844286,Generation:0,CreationTimestamp:2020-04-11 13:24:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 11 13:24:25.745: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7432,SelfLink:/api/v1/namespaces/watch-7432/configmaps/e2e-watch-test-label-changed,UID:0e323fc9-e01f-4d73-a694-f309b25fcb6e,ResourceVersion:4844287,Generation:0,CreationTimestamp:2020-04-11 13:24:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 11 13:24:25.745: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7432,SelfLink:/api/v1/namespaces/watch-7432/configmaps/e2e-watch-test-label-changed,UID:0e323fc9-e01f-4d73-a694-f309b25fcb6e,ResourceVersion:4844288,Generation:0,CreationTimestamp:2020-04-11 13:24:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 11 13:24:36.021: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7432,SelfLink:/api/v1/namespaces/watch-7432/configmaps/e2e-watch-test-label-changed,UID:0e323fc9-e01f-4d73-a694-f309b25fcb6e,ResourceVersion:4844309,Generation:0,CreationTimestamp:2020-04-11 13:24:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 11 13:24:36.021: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7432,SelfLink:/api/v1/namespaces/watch-7432/configmaps/e2e-watch-test-label-changed,UID:0e323fc9-e01f-4d73-a694-f309b25fcb6e,ResourceVersion:4844310,Generation:0,CreationTimestamp:2020-04-11 13:24:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Apr 11 13:24:36.021: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7432,SelfLink:/api/v1/namespaces/watch-7432/configmaps/e2e-watch-test-label-changed,UID:0e323fc9-e01f-4d73-a694-f309b25fcb6e,ResourceVersion:4844311,Generation:0,CreationTimestamp:2020-04-11 13:24:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:24:36.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7432" for this suite. Apr 11 13:24:42.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:24:42.170: INFO: namespace watch-7432 deletion completed in 6.139410386s • [SLOW TEST:16.714 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:24:42.171: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 11 13:24:42.259: INFO: Waiting up to 5m0s for pod "pod-4ca5c7c1-3603-4fbc-928b-a067be58ea1b" in namespace "emptydir-1764" to be "success or failure" Apr 11 13:24:42.264: INFO: Pod "pod-4ca5c7c1-3603-4fbc-928b-a067be58ea1b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.050974ms Apr 11 13:24:44.268: INFO: Pod "pod-4ca5c7c1-3603-4fbc-928b-a067be58ea1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009448208s Apr 11 13:24:46.274: INFO: Pod "pod-4ca5c7c1-3603-4fbc-928b-a067be58ea1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015660328s STEP: Saw pod success Apr 11 13:24:46.274: INFO: Pod "pod-4ca5c7c1-3603-4fbc-928b-a067be58ea1b" satisfied condition "success or failure" Apr 11 13:24:46.277: INFO: Trying to get logs from node iruya-worker pod pod-4ca5c7c1-3603-4fbc-928b-a067be58ea1b container test-container: STEP: delete the pod Apr 11 13:24:46.316: INFO: Waiting for pod pod-4ca5c7c1-3603-4fbc-928b-a067be58ea1b to disappear Apr 11 13:24:46.351: INFO: Pod pod-4ca5c7c1-3603-4fbc-928b-a067be58ea1b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:24:46.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1764" for this suite. 
Apr 11 13:24:52.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:24:52.448: INFO: namespace emptydir-1764 deletion completed in 6.092520619s • [SLOW TEST:10.277 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:24:52.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-2f87de70-ec16-4dff-ba35-76ed1becba84 STEP: Creating a pod to test consume secrets Apr 11 13:24:52.533: INFO: Waiting up to 5m0s for pod "pod-secrets-d4e3fc83-3730-447e-807c-67bb7e2e109f" in namespace "secrets-5309" to be "success or failure" Apr 11 13:24:52.543: INFO: Pod "pod-secrets-d4e3fc83-3730-447e-807c-67bb7e2e109f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.861955ms Apr 11 13:24:54.547: INFO: Pod "pod-secrets-d4e3fc83-3730-447e-807c-67bb7e2e109f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014267237s Apr 11 13:24:56.551: INFO: Pod "pod-secrets-d4e3fc83-3730-447e-807c-67bb7e2e109f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018528945s STEP: Saw pod success Apr 11 13:24:56.551: INFO: Pod "pod-secrets-d4e3fc83-3730-447e-807c-67bb7e2e109f" satisfied condition "success or failure" Apr 11 13:24:56.555: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-d4e3fc83-3730-447e-807c-67bb7e2e109f container secret-volume-test: STEP: delete the pod Apr 11 13:24:56.590: INFO: Waiting for pod pod-secrets-d4e3fc83-3730-447e-807c-67bb7e2e109f to disappear Apr 11 13:24:56.602: INFO: Pod pod-secrets-d4e3fc83-3730-447e-807c-67bb7e2e109f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:24:56.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5309" for this suite. Apr 11 13:25:02.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:25:02.680: INFO: namespace secrets-5309 deletion completed in 6.074795077s • [SLOW TEST:10.232 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 
13:25:02.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 11 13:25:02.748: INFO: Waiting up to 5m0s for pod "pod-407e0603-65cb-4f1a-91f2-8adb48491280" in namespace "emptydir-2481" to be "success or failure" Apr 11 13:25:02.764: INFO: Pod "pod-407e0603-65cb-4f1a-91f2-8adb48491280": Phase="Pending", Reason="", readiness=false. Elapsed: 15.917965ms Apr 11 13:25:04.767: INFO: Pod "pod-407e0603-65cb-4f1a-91f2-8adb48491280": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019082279s Apr 11 13:25:06.771: INFO: Pod "pod-407e0603-65cb-4f1a-91f2-8adb48491280": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023258017s STEP: Saw pod success Apr 11 13:25:06.771: INFO: Pod "pod-407e0603-65cb-4f1a-91f2-8adb48491280" satisfied condition "success or failure" Apr 11 13:25:06.774: INFO: Trying to get logs from node iruya-worker pod pod-407e0603-65cb-4f1a-91f2-8adb48491280 container test-container: STEP: delete the pod Apr 11 13:25:06.864: INFO: Waiting for pod pod-407e0603-65cb-4f1a-91f2-8adb48491280 to disappear Apr 11 13:25:06.869: INFO: Pod pod-407e0603-65cb-4f1a-91f2-8adb48491280 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:25:06.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2481" for this suite. 
Apr 11 13:25:12.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:25:12.939: INFO: namespace emptydir-2481 deletion completed in 6.06715446s
• [SLOW TEST:10.258 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:25:12.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Apr 11 13:25:12.972: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Apr 11 13:25:12.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9731'
Apr 11 13:25:13.254: INFO: stderr: ""
Apr 11 13:25:13.254: INFO: stdout: "service/redis-slave created\n"
Apr 11 13:25:13.255: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Apr 11 13:25:13.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9731'
Apr 11 13:25:13.487: INFO: stderr: ""
Apr 11 13:25:13.487: INFO: stdout: "service/redis-master created\n"
Apr 11 13:25:13.487: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 11 13:25:13.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9731'
Apr 11 13:25:13.738: INFO: stderr: ""
Apr 11 13:25:13.738: INFO: stdout: "service/frontend created\n"
Apr 11 13:25:13.738: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Apr 11 13:25:13.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9731'
Apr 11 13:25:13.972: INFO: stderr: ""
Apr 11 13:25:13.972: INFO: stdout: "deployment.apps/frontend created\n"
Apr 11 13:25:13.972: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 11 13:25:13.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9731'
Apr 11 13:25:14.220: INFO: stderr: ""
Apr 11 13:25:14.220: INFO: stdout: "deployment.apps/redis-master created\n"
Apr 11 13:25:14.221: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Apr 11 13:25:14.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9731'
Apr 11 13:25:14.490: INFO: stderr: ""
Apr 11 13:25:14.490: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Apr 11 13:25:14.490: INFO: Waiting for all frontend pods to be Running.
Apr 11 13:25:24.541: INFO: Waiting for frontend to serve content.
Apr 11 13:25:24.556: INFO: Trying to add a new entry to the guestbook.
Apr 11 13:25:24.570: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources Apr 11 13:25:24.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9731' Apr 11 13:25:24.722: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 11 13:25:24.722: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Apr 11 13:25:24.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9731' Apr 11 13:25:24.882: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 11 13:25:24.882: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 11 13:25:24.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9731' Apr 11 13:25:24.996: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 11 13:25:24.996: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 11 13:25:24.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9731' Apr 11 13:25:25.117: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 11 13:25:25.118: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 11 13:25:25.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9731' Apr 11 13:25:25.321: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 11 13:25:25.321: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 11 13:25:25.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9731' Apr 11 13:25:25.561: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 11 13:25:25.562: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:25:25.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9731" for this suite. 
Apr 11 13:26:03.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:26:03.726: INFO: namespace kubectl-9731 deletion completed in 38.091064066s • [SLOW TEST:50.786 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:26:03.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 11 13:26:11.815: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 11 13:26:11.820: INFO: Pod pod-with-prestop-exec-hook still exists Apr 11 13:26:13.820: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 11 13:26:13.825: INFO: Pod pod-with-prestop-exec-hook still exists Apr 11 13:26:15.820: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 11 13:26:15.824: INFO: Pod pod-with-prestop-exec-hook still exists Apr 11 13:26:17.820: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 11 13:26:17.824: INFO: Pod pod-with-prestop-exec-hook still exists Apr 11 13:26:19.820: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 11 13:26:19.824: INFO: Pod pod-with-prestop-exec-hook still exists Apr 11 13:26:21.820: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 11 13:26:21.824: INFO: Pod pod-with-prestop-exec-hook still exists Apr 11 13:26:23.820: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 11 13:26:23.825: INFO: Pod pod-with-prestop-exec-hook still exists Apr 11 13:26:25.820: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 11 13:26:25.824: INFO: Pod pod-with-prestop-exec-hook still exists Apr 11 13:26:27.820: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 11 13:26:27.824: INFO: Pod pod-with-prestop-exec-hook still exists Apr 11 13:26:29.820: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 11 13:26:29.824: INFO: Pod pod-with-prestop-exec-hook still exists Apr 11 13:26:31.820: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 11 13:26:31.824: INFO: Pod pod-with-prestop-exec-hook still exists Apr 11 13:26:33.820: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear Apr 11 13:26:33.825: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:26:33.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3028" for this suite. Apr 11 13:26:55.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:26:55.929: INFO: namespace container-lifecycle-hook-3028 deletion completed in 22.090883549s • [SLOW TEST:52.203 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:26:55.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name 
secret-emptykey-test-b02ba09a-10d0-4f91-abca-e57e82939d58 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:26:56.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8766" for this suite. Apr 11 13:27:02.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:27:02.118: INFO: namespace secrets-8766 deletion completed in 6.095718352s • [SLOW TEST:6.189 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:27:02.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8076 [It] should perform canary updates and phased 
rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 11 13:27:02.185: INFO: Found 0 stateful pods, waiting for 3 Apr 11 13:27:12.189: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 11 13:27:12.189: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 11 13:27:12.189: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 11 13:27:12.214: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 11 13:27:22.267: INFO: Updating stateful set ss2 Apr 11 13:27:22.274: INFO: Waiting for Pod statefulset-8076/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Apr 11 13:27:32.762: INFO: Found 2 stateful pods, waiting for 3 Apr 11 13:27:42.767: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 11 13:27:42.767: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 11 13:27:42.767: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 11 13:27:42.795: INFO: Updating stateful set ss2 Apr 11 13:27:42.822: INFO: Waiting for Pod statefulset-8076/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 11 13:27:52.847: INFO: Updating stateful set ss2 Apr 11 13:27:52.869: INFO: Waiting for StatefulSet statefulset-8076/ss2 to complete update Apr 11 13:27:52.869: INFO: Waiting for Pod statefulset-8076/ss2-0 to 
have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 11 13:28:02.877: INFO: Deleting all statefulset in ns statefulset-8076 Apr 11 13:28:02.879: INFO: Scaling statefulset ss2 to 0 Apr 11 13:28:32.898: INFO: Waiting for statefulset status.replicas updated to 0 Apr 11 13:28:32.901: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:28:32.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8076" for this suite. Apr 11 13:28:38.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:28:39.012: INFO: namespace statefulset-8076 deletion completed in 6.094365389s • [SLOW TEST:96.892 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 
13:28:39.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 11 13:28:39.079: INFO: PodSpec: initContainers in spec.initContainers Apr 11 13:29:30.215: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c00e39b7-c7c8-4e6c-b06a-a5322c88684a", GenerateName:"", Namespace:"init-container-6274", SelfLink:"/api/v1/namespaces/init-container-6274/pods/pod-init-c00e39b7-c7c8-4e6c-b06a-a5322c88684a", UID:"32d01c84-6415-47af-986e-829402af7928", ResourceVersion:"4845482", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722208519, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"79501052"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-hdcj8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002ccf440), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), 
Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hdcj8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hdcj8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hdcj8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002b07e98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002cbb740), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b07f30)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b07f50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002b07f58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002b07f5c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722208519, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722208519, 
loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722208519, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722208519, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.95", StartTime:(*v1.Time)(0xc002b63820), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002b63860), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002622000)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://633fb27012cc8ae43e313ab8f75d28152265dc923651cc5c7c26bbccb48c2d89"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b63880), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, 
ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b63840), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:29:30.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6274" for this suite. Apr 11 13:29:48.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:29:48.428: INFO: namespace init-container-6274 deletion completed in 18.113173103s • [SLOW TEST:69.415 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:29:48.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 11 13:29:48.513: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28a442c5-ac69-499f-ae70-0f2492860be8" in namespace "downward-api-2581" to be "success or failure" Apr 11 13:29:48.518: INFO: Pod "downwardapi-volume-28a442c5-ac69-499f-ae70-0f2492860be8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.869182ms Apr 11 13:29:50.522: INFO: Pod "downwardapi-volume-28a442c5-ac69-499f-ae70-0f2492860be8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009126095s Apr 11 13:29:52.527: INFO: Pod "downwardapi-volume-28a442c5-ac69-499f-ae70-0f2492860be8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013523727s STEP: Saw pod success Apr 11 13:29:52.527: INFO: Pod "downwardapi-volume-28a442c5-ac69-499f-ae70-0f2492860be8" satisfied condition "success or failure" Apr 11 13:29:52.530: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-28a442c5-ac69-499f-ae70-0f2492860be8 container client-container: STEP: delete the pod Apr 11 13:29:52.631: INFO: Waiting for pod downwardapi-volume-28a442c5-ac69-499f-ae70-0f2492860be8 to disappear Apr 11 13:29:52.643: INFO: Pod downwardapi-volume-28a442c5-ac69-499f-ae70-0f2492860be8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:29:52.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2581" for this suite. 
Apr 11 13:29:58.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:29:58.746: INFO: namespace downward-api-2581 deletion completed in 6.099351242s • [SLOW TEST:10.318 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:29:58.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Apr 11 13:29:58.824: INFO: Waiting up to 5m0s for pod "pod-71906968-68fc-417b-a9ca-96cdf94d1bd6" in namespace "emptydir-7190" to be "success or failure" Apr 11 13:29:58.842: INFO: Pod "pod-71906968-68fc-417b-a9ca-96cdf94d1bd6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.057546ms Apr 11 13:30:00.846: INFO: Pod "pod-71906968-68fc-417b-a9ca-96cdf94d1bd6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022223157s Apr 11 13:30:02.851: INFO: Pod "pod-71906968-68fc-417b-a9ca-96cdf94d1bd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026772568s STEP: Saw pod success Apr 11 13:30:02.851: INFO: Pod "pod-71906968-68fc-417b-a9ca-96cdf94d1bd6" satisfied condition "success or failure" Apr 11 13:30:02.854: INFO: Trying to get logs from node iruya-worker2 pod pod-71906968-68fc-417b-a9ca-96cdf94d1bd6 container test-container: STEP: delete the pod Apr 11 13:30:02.920: INFO: Waiting for pod pod-71906968-68fc-417b-a9ca-96cdf94d1bd6 to disappear Apr 11 13:30:02.926: INFO: Pod pod-71906968-68fc-417b-a9ca-96cdf94d1bd6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:30:02.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7190" for this suite. Apr 11 13:30:08.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:30:09.033: INFO: namespace emptydir-7190 deletion completed in 6.103105691s • [SLOW TEST:10.287 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:30:09.033: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 11 13:30:09.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5766' Apr 11 13:30:09.184: INFO: stderr: "" Apr 11 13:30:09.184: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Apr 11 13:30:09.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5766' Apr 11 13:30:21.861: INFO: stderr: "" Apr 11 13:30:21.861: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:30:21.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5766" for this suite. 
Apr 11 13:30:27.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:30:27.952: INFO: namespace kubectl-5766 deletion completed in 6.08691178s • [SLOW TEST:18.919 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:30:27.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 11 13:30:28.030: INFO: Waiting up to 5m0s for pod "pod-c2e6f47f-07fb-4ed8-b2ed-4797906f4531" in namespace "emptydir-672" to be "success or failure" Apr 11 13:30:28.034: INFO: Pod "pod-c2e6f47f-07fb-4ed8-b2ed-4797906f4531": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008121ms Apr 11 13:30:30.038: INFO: Pod "pod-c2e6f47f-07fb-4ed8-b2ed-4797906f4531": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007666355s Apr 11 13:30:32.042: INFO: Pod "pod-c2e6f47f-07fb-4ed8-b2ed-4797906f4531": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011878549s STEP: Saw pod success Apr 11 13:30:32.042: INFO: Pod "pod-c2e6f47f-07fb-4ed8-b2ed-4797906f4531" satisfied condition "success or failure" Apr 11 13:30:32.045: INFO: Trying to get logs from node iruya-worker2 pod pod-c2e6f47f-07fb-4ed8-b2ed-4797906f4531 container test-container: STEP: delete the pod Apr 11 13:30:32.100: INFO: Waiting for pod pod-c2e6f47f-07fb-4ed8-b2ed-4797906f4531 to disappear Apr 11 13:30:32.106: INFO: Pod pod-c2e6f47f-07fb-4ed8-b2ed-4797906f4531 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:30:32.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-672" for this suite. Apr 11 13:30:38.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:30:38.199: INFO: namespace emptydir-672 deletion completed in 6.090123123s • [SLOW TEST:10.247 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:30:38.200: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 11 13:30:58.262: INFO: Container started at 2020-04-11 13:30:40 +0000 UTC, pod became ready at 2020-04-11 13:30:58 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:30:58.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2432" for this suite. Apr 11 13:31:20.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:31:20.371: INFO: namespace container-probe-2432 deletion completed in 22.105504119s • [SLOW TEST:42.171 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:31:20.371: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-8a45b5c6-5086-4575-baf6-f4a3ce876a0e STEP: Creating a pod to test consume secrets Apr 11 13:31:20.439: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-156b5c67-9481-40a6-b516-979ae8ea3a53" in namespace "projected-8653" to be "success or failure" Apr 11 13:31:20.442: INFO: Pod "pod-projected-secrets-156b5c67-9481-40a6-b516-979ae8ea3a53": Phase="Pending", Reason="", readiness=false. Elapsed: 3.223676ms Apr 11 13:31:22.446: INFO: Pod "pod-projected-secrets-156b5c67-9481-40a6-b516-979ae8ea3a53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007489981s Apr 11 13:31:24.450: INFO: Pod "pod-projected-secrets-156b5c67-9481-40a6-b516-979ae8ea3a53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011665673s STEP: Saw pod success Apr 11 13:31:24.450: INFO: Pod "pod-projected-secrets-156b5c67-9481-40a6-b516-979ae8ea3a53" satisfied condition "success or failure" Apr 11 13:31:24.453: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-156b5c67-9481-40a6-b516-979ae8ea3a53 container projected-secret-volume-test: STEP: delete the pod Apr 11 13:31:24.479: INFO: Waiting for pod pod-projected-secrets-156b5c67-9481-40a6-b516-979ae8ea3a53 to disappear Apr 11 13:31:24.507: INFO: Pod pod-projected-secrets-156b5c67-9481-40a6-b516-979ae8ea3a53 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:31:24.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8653" for this suite. 
Apr 11 13:31:30.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:31:30.601: INFO: namespace projected-8653 deletion completed in 6.090502022s
• [SLOW TEST:10.230 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:31:30.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 11 13:31:30.690: INFO: Waiting up to 5m0s for pod "pod-3260f8a0-adf4-44d6-8b2c-138cf14bd1b4" in namespace "emptydir-8776" to be "success or failure"
Apr 11 13:31:30.694: INFO: Pod "pod-3260f8a0-adf4-44d6-8b2c-138cf14bd1b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216054ms
Apr 11 13:31:32.697: INFO: Pod "pod-3260f8a0-adf4-44d6-8b2c-138cf14bd1b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007732657s
Apr 11 13:31:34.701: INFO: Pod "pod-3260f8a0-adf4-44d6-8b2c-138cf14bd1b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011710298s
STEP: Saw pod success
Apr 11 13:31:34.701: INFO: Pod "pod-3260f8a0-adf4-44d6-8b2c-138cf14bd1b4" satisfied condition "success or failure"
Apr 11 13:31:34.705: INFO: Trying to get logs from node iruya-worker pod pod-3260f8a0-adf4-44d6-8b2c-138cf14bd1b4 container test-container:
STEP: delete the pod
Apr 11 13:31:34.726: INFO: Waiting for pod pod-3260f8a0-adf4-44d6-8b2c-138cf14bd1b4 to disappear
Apr 11 13:31:34.730: INFO: Pod pod-3260f8a0-adf4-44d6-8b2c-138cf14bd1b4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:31:34.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8776" for this suite.
Apr 11 13:31:40.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:31:40.816: INFO: namespace emptydir-8776 deletion completed in 6.083949471s
• [SLOW TEST:10.215 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:31:40.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7619
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7619
STEP: Creating statefulset with conflicting port in namespace statefulset-7619
STEP: Waiting until pod test-pod will start running in namespace statefulset-7619
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7619
Apr 11 13:31:44.939: INFO: Observed stateful pod in namespace: statefulset-7619, name: ss-0, uid: 2783e414-e1b6-4234-b7ad-55a71d1b21d2, status phase: Pending. Waiting for statefulset controller to delete.
Apr 11 13:31:45.317: INFO: Observed stateful pod in namespace: statefulset-7619, name: ss-0, uid: 2783e414-e1b6-4234-b7ad-55a71d1b21d2, status phase: Failed. Waiting for statefulset controller to delete.
Apr 11 13:31:45.339: INFO: Observed stateful pod in namespace: statefulset-7619, name: ss-0, uid: 2783e414-e1b6-4234-b7ad-55a71d1b21d2, status phase: Failed. Waiting for statefulset controller to delete.
Apr 11 13:31:45.359: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7619
STEP: Removing pod with conflicting port in namespace statefulset-7619
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7619 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 11 13:31:49.451: INFO: Deleting all statefulset in ns statefulset-7619
Apr 11 13:31:49.454: INFO: Scaling statefulset ss to 0
Apr 11 13:32:09.471: INFO: Waiting for statefulset status.replicas updated to 0
Apr 11 13:32:09.475: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:32:09.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7619" for this suite.
Apr 11 13:32:15.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:32:15.572: INFO: namespace statefulset-7619 deletion completed in 6.080589649s
• [SLOW TEST:34.755 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:32:15.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 11 13:32:15.655: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb505577-5f57-4b37-8aae-fdcd69d71c31" in namespace "downward-api-6103" to be "success or failure"
Apr 11 13:32:15.671: INFO: Pod "downwardapi-volume-eb505577-5f57-4b37-8aae-fdcd69d71c31": Phase="Pending", Reason="", readiness=false. Elapsed: 15.874143ms
Apr 11 13:32:17.675: INFO: Pod "downwardapi-volume-eb505577-5f57-4b37-8aae-fdcd69d71c31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019934554s
Apr 11 13:32:19.680: INFO: Pod "downwardapi-volume-eb505577-5f57-4b37-8aae-fdcd69d71c31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024484643s
STEP: Saw pod success
Apr 11 13:32:19.680: INFO: Pod "downwardapi-volume-eb505577-5f57-4b37-8aae-fdcd69d71c31" satisfied condition "success or failure"
Apr 11 13:32:19.683: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-eb505577-5f57-4b37-8aae-fdcd69d71c31 container client-container:
STEP: delete the pod
Apr 11 13:32:19.744: INFO: Waiting for pod downwardapi-volume-eb505577-5f57-4b37-8aae-fdcd69d71c31 to disappear
Apr 11 13:32:19.749: INFO: Pod downwardapi-volume-eb505577-5f57-4b37-8aae-fdcd69d71c31 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:32:19.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6103" for this suite.
Apr 11 13:32:25.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:32:25.835: INFO: namespace downward-api-6103 deletion completed in 6.082808759s
• [SLOW TEST:10.264 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:32:25.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:32:29.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2625" for this suite.
Apr 11 13:32:35.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:32:36.068: INFO: namespace kubelet-test-2625 deletion completed in 6.108372961s
• [SLOW TEST:10.232 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:32:36.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 11 13:32:36.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6903'
Apr 11 13:32:38.713: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 11 13:32:38.713: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Apr 11 13:32:40.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6903'
Apr 11 13:32:40.874: INFO: stderr: ""
Apr 11 13:32:40.874: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:32:40.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6903" for this suite.
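The stderr captured in this test flags `kubectl run --generator=deployment/apps.v1` as deprecated and points at the replacements. A minimal sketch of the two non-deprecated forms the warning suggests, reusing the image from the log (a reachable cluster is assumed; the namespace is illustrative):

```shell
# Deprecated form exercised by the test:
#   kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine

# 1. Create a bare pod instead of a Deployment (generator syntax from the warning):
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6903

# 2. Create the Deployment explicitly with `kubectl create`:
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6903
```

Either form avoids the deprecation path; in later kubectl releases only the second remains, since `kubectl run` was reduced to creating pods.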
Apr 11 13:34:42.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:34:43.055: INFO: namespace kubectl-6903 deletion completed in 2m2.178549886s
• [SLOW TEST:126.987 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:34:43.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 11 13:34:43.124: INFO: Waiting up to 5m0s for pod "downward-api-14219d81-00a6-40e5-9533-77d7e35fcc30" in namespace "downward-api-1534" to be "success or failure"
Apr 11 13:34:43.128: INFO: Pod "downward-api-14219d81-00a6-40e5-9533-77d7e35fcc30": Phase="Pending", Reason="", readiness=false. Elapsed: 3.767848ms
Apr 11 13:34:45.131: INFO: Pod "downward-api-14219d81-00a6-40e5-9533-77d7e35fcc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007359634s
Apr 11 13:34:47.136: INFO: Pod "downward-api-14219d81-00a6-40e5-9533-77d7e35fcc30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011834499s
STEP: Saw pod success
Apr 11 13:34:47.136: INFO: Pod "downward-api-14219d81-00a6-40e5-9533-77d7e35fcc30" satisfied condition "success or failure"
Apr 11 13:34:47.139: INFO: Trying to get logs from node iruya-worker pod downward-api-14219d81-00a6-40e5-9533-77d7e35fcc30 container dapi-container:
STEP: delete the pod
Apr 11 13:34:47.159: INFO: Waiting for pod downward-api-14219d81-00a6-40e5-9533-77d7e35fcc30 to disappear
Apr 11 13:34:47.164: INFO: Pod downward-api-14219d81-00a6-40e5-9533-77d7e35fcc30 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:34:47.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1534" for this suite.
Apr 11 13:34:53.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:34:53.259: INFO: namespace downward-api-1534 deletion completed in 6.091925082s
• [SLOW TEST:10.203 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:34:53.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-68d6556d-88a6-4141-81ad-8d4de46c821e in namespace container-probe-9688
Apr 11 13:34:57.331: INFO: Started pod busybox-68d6556d-88a6-4141-81ad-8d4de46c821e in namespace container-probe-9688
STEP: checking the pod's current state and verifying that restartCount is present
Apr 11 13:34:57.334: INFO: Initial restart count of pod busybox-68d6556d-88a6-4141-81ad-8d4de46c821e is 0
Apr 11 13:35:49.452: INFO: Restart count of pod container-probe-9688/busybox-68d6556d-88a6-4141-81ad-8d4de46c821e is now 1 (52.118477618s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:35:49.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9688" for this suite.
Apr 11 13:35:55.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:35:55.600: INFO: namespace container-probe-9688 deletion completed in 6.110975073s
• [SLOW TEST:62.341 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:35:55.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 11 13:35:55.668: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:35:56.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8354" for this suite.
Apr 11 13:36:02.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:36:02.866: INFO: namespace custom-resource-definition-8354 deletion completed in 6.094490494s
• [SLOW TEST:7.266 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:36:02.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Apr 11 13:36:02.923: INFO: namespace kubectl-4002
Apr 11 13:36:02.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4002'
Apr 11 13:36:03.226: INFO: stderr: ""
Apr 11 13:36:03.226: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 11 13:36:04.231: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 13:36:04.231: INFO: Found 0 / 1
Apr 11 13:36:05.230: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 13:36:05.230: INFO: Found 0 / 1
Apr 11 13:36:06.230: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 13:36:06.230: INFO: Found 0 / 1
Apr 11 13:36:07.231: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 13:36:07.231: INFO: Found 1 / 1
Apr 11 13:36:07.231: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 11 13:36:07.234: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 13:36:07.234: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 11 13:36:07.234: INFO: wait on redis-master startup in kubectl-4002
Apr 11 13:36:07.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-fqgsj redis-master --namespace=kubectl-4002'
Apr 11 13:36:07.376: INFO: stderr: ""
Apr 11 13:36:07.376: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 Apr 13:36:05.535 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Apr 13:36:05.535 # Server started, Redis version 3.2.12\n1:M 11 Apr 13:36:05.536 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Apr 13:36:05.536 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Apr 11 13:36:07.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4002'
Apr 11 13:36:07.492: INFO: stderr: ""
Apr 11 13:36:07.492: INFO: stdout: "service/rm2 exposed\n"
Apr 11 13:36:07.500: INFO: Service rm2 in namespace kubectl-4002 found.
STEP: exposing service
Apr 11 13:36:09.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4002'
Apr 11 13:36:09.660: INFO: stderr: ""
Apr 11 13:36:09.661: INFO: stdout: "service/rm3 exposed\n"
Apr 11 13:36:09.679: INFO: Service rm3 in namespace kubectl-4002 found.
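The expose steps above chain an RC-backed service (rm2) and a service-backed service (rm3), both forwarding to redis on 6379. A sketch of how those results could be inspected by hand with the names from the log (a live cluster with this namespace is assumed):

```shell
# List both services created by `kubectl expose` and their port mappings
kubectl get service rm2 rm3 --namespace=kubectl-4002 -o wide

# Confirm rm3's service port still targets redis's 6379 behind the scenes
kubectl get service rm3 --namespace=kubectl-4002 \
  -o jsonpath='{.spec.ports[0].port} -> {.spec.ports[0].targetPort}'
```

Exposing a service (rather than the RC) copies the selector, which is why both rm2 and rm3 end up routing to the same redis-master pod.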
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:36:11.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4002" for this suite.
Apr 11 13:36:33.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:36:33.832: INFO: namespace kubectl-4002 deletion completed in 22.126289494s
• [SLOW TEST:30.965 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:36:33.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Apr 11 13:36:33.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Apr 11 13:36:34.039: INFO: stderr: ""
Apr 11 13:36:34.039: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:36:34.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2271" for this suite.
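The api-versions check in this test amounts to asserting that the core `v1` group/version appears in the server's list. A one-line sketch of the same check from a shell, assuming a configured kubeconfig:

```shell
# Fail (non-zero exit) unless the apiserver serves the core "v1" group/version;
# -x matches the whole line, -q suppresses output
kubectl api-versions | grep -qx 'v1' && echo "v1 is available"
```

The exact-line match matters: without `-x`, `grep v1` would also match entries such as `apps/v1` or `batch/v1`.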
Apr 11 13:36:40.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:36:40.138: INFO: namespace kubectl-2271 deletion completed in 6.09392021s
• [SLOW TEST:6.306 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl api-versions
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:36:40.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-9584, will wait for the garbage collector to delete the pods
Apr 11 13:36:44.296: INFO: Deleting Job.batch foo took: 6.607941ms
Apr 11 13:36:44.597: INFO: Terminating Job.batch foo pods took: 300.467509ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:37:22.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9584" for this suite.
Apr 11 13:37:28.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:37:28.100: INFO: namespace job-9584 deletion completed in 6.097058933s
• [SLOW TEST:47.961 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:37:28.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-a9700495-b831-4064-b6a3-474dea89acef
STEP: Creating a pod to test consume secrets
Apr 11 13:37:28.188: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-546090c8-7a71-48cf-a77f-4ae29975415c" in namespace "projected-6743" to be "success or failure"
Apr 11 13:37:28.193: INFO: Pod "pod-projected-secrets-546090c8-7a71-48cf-a77f-4ae29975415c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.769912ms
Apr 11 13:37:30.196: INFO: Pod "pod-projected-secrets-546090c8-7a71-48cf-a77f-4ae29975415c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007910277s
Apr 11 13:37:32.200: INFO: Pod "pod-projected-secrets-546090c8-7a71-48cf-a77f-4ae29975415c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012198729s
STEP: Saw pod success
Apr 11 13:37:32.200: INFO: Pod "pod-projected-secrets-546090c8-7a71-48cf-a77f-4ae29975415c" satisfied condition "success or failure"
Apr 11 13:37:32.203: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-546090c8-7a71-48cf-a77f-4ae29975415c container projected-secret-volume-test:
STEP: delete the pod
Apr 11 13:37:32.224: INFO: Waiting for pod pod-projected-secrets-546090c8-7a71-48cf-a77f-4ae29975415c to disappear
Apr 11 13:37:32.228: INFO: Pod pod-projected-secrets-546090c8-7a71-48cf-a77f-4ae29975415c no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:37:32.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6743" for this suite.
Apr 11 13:37:38.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:37:38.326: INFO: namespace projected-6743 deletion completed in 6.094621011s • [SLOW TEST:10.225 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:37:38.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:37:38.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2151" for this suite. 
Apr 11 13:38:00.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:38:00.536: INFO: namespace pods-2151 deletion completed in 22.122985012s • [SLOW TEST:22.209 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:38:00.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 11 13:38:00.607: INFO: Waiting up to 5m0s for pod "pod-e9a999ac-0a42-4c5e-a743-4286171bd4e0" in namespace "emptydir-3014" to be "success or failure" Apr 11 13:38:00.620: INFO: Pod "pod-e9a999ac-0a42-4c5e-a743-4286171bd4e0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.501768ms Apr 11 13:38:02.625: INFO: Pod "pod-e9a999ac-0a42-4c5e-a743-4286171bd4e0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018057789s Apr 11 13:38:04.630: INFO: Pod "pod-e9a999ac-0a42-4c5e-a743-4286171bd4e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022800187s STEP: Saw pod success Apr 11 13:38:04.630: INFO: Pod "pod-e9a999ac-0a42-4c5e-a743-4286171bd4e0" satisfied condition "success or failure" Apr 11 13:38:04.632: INFO: Trying to get logs from node iruya-worker2 pod pod-e9a999ac-0a42-4c5e-a743-4286171bd4e0 container test-container: STEP: delete the pod Apr 11 13:38:04.654: INFO: Waiting for pod pod-e9a999ac-0a42-4c5e-a743-4286171bd4e0 to disappear Apr 11 13:38:04.660: INFO: Pod pod-e9a999ac-0a42-4c5e-a743-4286171bd4e0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:38:04.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3014" for this suite. Apr 11 13:38:10.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:38:10.774: INFO: namespace emptydir-3014 deletion completed in 6.110315725s • [SLOW TEST:10.238 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:38:10.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 11 13:38:10.884: INFO: Creating deployment "test-recreate-deployment" Apr 11 13:38:10.889: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 11 13:38:10.919: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 11 13:38:12.926: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 11 13:38:12.928: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722209090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722209090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722209090, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722209090, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 11 13:38:14.932: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 11 13:38:14.954: INFO: Updating deployment test-recreate-deployment Apr 11 13:38:14.954: INFO: Watching deployment "test-recreate-deployment" to verify that new 
pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 11 13:38:15.397: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4519,SelfLink:/apis/apps/v1/namespaces/deployment-4519/deployments/test-recreate-deployment,UID:ba05bc16-567d-4e5c-a87e-b58aedb8e594,ResourceVersion:4847190,Generation:2,CreationTimestamp:2020-04-11 13:38:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-04-11 13:38:15 +0000 UTC 2020-04-11 13:38:15 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-04-11 13:38:15 +0000 UTC 2020-04-11 13:38:10 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Apr 11 13:38:15.435: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4519,SelfLink:/apis/apps/v1/namespaces/deployment-4519/replicasets/test-recreate-deployment-5c8c9cc69d,UID:a161659a-3d60-4dd7-942e-7eaf88fd5bb6,ResourceVersion:4847188,Generation:1,CreationTimestamp:2020-04-11 13:38:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ba05bc16-567d-4e5c-a87e-b58aedb8e594 0xc0024e9757 0xc0024e9758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 11 13:38:15.435: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 11 13:38:15.435: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4519,SelfLink:/apis/apps/v1/namespaces/deployment-4519/replicasets/test-recreate-deployment-6df85df6b9,UID:4e5ad3ed-6ab5-4ced-a891-209c2ebeb9cd,ResourceVersion:4847179,Generation:2,CreationTimestamp:2020-04-11 13:38:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ba05bc16-567d-4e5c-a87e-b58aedb8e594 0xc0024e9827 0xc0024e9828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 11 13:38:15.439: INFO: Pod "test-recreate-deployment-5c8c9cc69d-c28xm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-c28xm,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4519,SelfLink:/api/v1/namespaces/deployment-4519/pods/test-recreate-deployment-5c8c9cc69d-c28xm,UID:50d2867d-eca6-4d29-b3ac-4743894c0c5b,ResourceVersion:4847191,Generation:0,CreationTimestamp:2020-04-11 13:38:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d a161659a-3d60-4dd7-942e-7eaf88fd5bb6 0xc00257ed27 0xc00257ed28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4vgl9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4vgl9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4vgl9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00257eda0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00257edc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 13:38:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 13:38:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 13:38:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 13:38:15 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-11 13:38:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:38:15.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4519" for this suite. 
Apr 11 13:38:21.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:38:21.529: INFO: namespace deployment-4519 deletion completed in 6.086396851s • [SLOW TEST:10.755 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:38:21.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:39:21.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1414" for this suite. 
Apr 11 13:39:43.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:39:43.693: INFO: namespace container-probe-1414 deletion completed in 22.092183797s • [SLOW TEST:82.164 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:39:43.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Apr 11 13:39:47.800: INFO: Pod pod-hostip-ff45af01-4a01-4a68-8a86-1201e42519c0 has hostIP: 172.17.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:39:47.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8474" for this suite. 
Apr 11 13:40:09.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:40:09.894: INFO: namespace pods-8474 deletion completed in 22.090099639s • [SLOW TEST:26.201 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:40:09.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 11 13:40:14.502: INFO: Successfully updated pod "annotationupdate14c88626-789e-4e6c-8994-62a904eec681" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:40:16.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-410" for this suite. 
Apr 11 13:40:38.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:40:38.616: INFO: namespace projected-410 deletion completed in 22.090256043s • [SLOW TEST:28.721 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:40:38.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0411 13:40:48.686293 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 11 13:40:48.686: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:40:48.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7522" for this suite. 
Apr 11 13:40:54.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:40:54.787: INFO: namespace gc-7522 deletion completed in 6.098655313s • [SLOW TEST:16.171 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:40:54.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Apr 11 13:40:59.369: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1057 pod-service-account-e2a3672d-99b7-4e36-94e2-68bb25e1f0d5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 11 13:40:59.595: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1057 pod-service-account-e2a3672d-99b7-4e36-94e2-68bb25e1f0d5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 11 13:40:59.800: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1057 
pod-service-account-e2a3672d-99b7-4e36-94e2-68bb25e1f0d5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:41:00.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1057" for this suite. Apr 11 13:41:06.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:41:06.113: INFO: namespace svcaccounts-1057 deletion completed in 6.094830468s • [SLOW TEST:11.326 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:41:06.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 11 13:41:06.172: INFO: Creating ReplicaSet my-hostname-basic-e780837f-1651-4e4b-bf6b-98664377621a Apr 11 13:41:06.193: INFO: Pod name my-hostname-basic-e780837f-1651-4e4b-bf6b-98664377621a: Found 0 pods out of 1 Apr 11 
13:41:11.198: INFO: Pod name my-hostname-basic-e780837f-1651-4e4b-bf6b-98664377621a: Found 1 pods out of 1 Apr 11 13:41:11.198: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e780837f-1651-4e4b-bf6b-98664377621a" is running Apr 11 13:41:11.201: INFO: Pod "my-hostname-basic-e780837f-1651-4e4b-bf6b-98664377621a-fvljc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-11 13:41:06 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-11 13:41:08 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-11 13:41:08 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-11 13:41:06 +0000 UTC Reason: Message:}]) Apr 11 13:41:11.201: INFO: Trying to dial the pod Apr 11 13:41:16.212: INFO: Controller my-hostname-basic-e780837f-1651-4e4b-bf6b-98664377621a: Got expected result from replica 1 [my-hostname-basic-e780837f-1651-4e4b-bf6b-98664377621a-fvljc]: "my-hostname-basic-e780837f-1651-4e4b-bf6b-98664377621a-fvljc", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:41:16.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7377" for this suite. 
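Editor's aside: the condition dump above ({Type:Ready Status:True ...}) is what the framework inspects to decide the ReplicaSet's pod is running and ready. A hedged sketch of that check; the dict layout below is illustrative, not the real client-go PodCondition type:

```python
# Decide readiness from a pod's condition list, mirroring the Type/Status pairs
# logged above. Illustrative structure only, not the Kubernetes API object.
def is_ready(conditions):
    """True iff a condition of Type 'Ready' has Status 'True'."""
    return any(c["Type"] == "Ready" and c["Status"] == "True" for c in conditions)

# Conditions as reported for the my-hostname-basic pod in the log:
logged = [
    {"Type": "Initialized", "Status": "True"},
    {"Type": "Ready", "Status": "True"},
    {"Type": "ContainersReady", "Status": "True"},
    {"Type": "PodScheduled", "Status": "True"},
]
print(is_ready(logged))  # prints True
```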
Apr 11 13:41:22.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:41:22.304: INFO: namespace replicaset-7377 deletion completed in 6.088973591s • [SLOW TEST:16.190 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:41:22.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:41:26.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5090" for this suite. 
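Editor's aside: every "Waiting up to 3m0s ..." / "Waiting up to 30s ..." line in this log is a poll-until-timeout loop inside the e2e framework. A minimal stand-in for that pattern; the function name, intervals, and injected clock here are illustrative, not the framework's actual wait API:

```python
import time

def poll_until(condition, timeout_s, interval_s=0.01,
               clock=time.monotonic, sleep=time.sleep):
    """Call condition() every interval until it returns True or the timeout elapses."""
    deadline = clock() + timeout_s
    while clock() < deadline:
        if condition():
            return True
        sleep(interval_s)
    return False  # timed out, mirroring the framework's wait-timeout errors

# Simulate a resource that becomes ready on the third check.
checks = iter([False, False, True])
print(poll_until(lambda: next(checks), timeout_s=1.0))  # prints True
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting, which is roughly why polling helpers in test frameworks tend to take them as parameters.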
Apr 11 13:42:16.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:42:16.546: INFO: namespace kubelet-test-5090 deletion completed in 50.119964552s • [SLOW TEST:54.241 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:42:16.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-d5981df6-3b86-40ef-9f3a-dbfdebf1dcf1 in namespace container-probe-9205 Apr 11 13:42:20.612: INFO: Started pod test-webserver-d5981df6-3b86-40ef-9f3a-dbfdebf1dcf1 in namespace container-probe-9205 STEP: checking the pod's current state and verifying that restartCount is present Apr 11 13:42:20.615: INFO: Initial restart count of pod 
test-webserver-d5981df6-3b86-40ef-9f3a-dbfdebf1dcf1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:46:21.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9205" for this suite. Apr 11 13:46:27.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:46:27.697: INFO: namespace container-probe-9205 deletion completed in 6.248054948s • [SLOW TEST:251.151 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:46:27.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 11 13:46:28.186: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.105892ms)
Apr 11 13:46:28.189: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.201883ms)
Apr 11 13:46:28.192: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.607558ms)
Apr 11 13:46:28.195: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.641037ms)
Apr 11 13:46:28.197: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.573465ms)
Apr 11 13:46:28.200: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.804627ms)
Apr 11 13:46:28.203: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.964755ms)
Apr 11 13:46:28.206: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.509725ms)
Apr 11 13:46:28.210: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.584787ms)
Apr 11 13:46:28.214: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.546524ms)
Apr 11 13:46:28.217: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.248119ms)
Apr 11 13:46:28.220: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.777066ms)
Apr 11 13:46:28.222: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.596114ms)
Apr 11 13:46:28.225: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.860997ms)
Apr 11 13:46:28.228: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.74453ms)
Apr 11 13:46:28.230: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.381534ms)
Apr 11 13:46:28.233: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.628259ms)
Apr 11 13:46:28.236: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.560642ms)
Apr 11 13:46:28.239: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.941073ms)
Apr 11 13:46:28.242: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 3.066438ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:46:28.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9173" for this suite. Apr 11 13:46:34.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:46:34.553: INFO: namespace proxy-9173 deletion completed in 6.308620352s • [SLOW TEST:6.855 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:46:34.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:46:40.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "emptydir-wrapper-5271" for this suite. Apr 11 13:46:46.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:46:46.863: INFO: namespace emptydir-wrapper-5271 deletion completed in 6.119802109s • [SLOW TEST:12.310 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:46:46.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-553711b3-cdd0-4047-84ab-978ddcadcf02 in namespace container-probe-3183 Apr 11 13:46:51.041: INFO: Started pod busybox-553711b3-cdd0-4047-84ab-978ddcadcf02 in namespace container-probe-3183 STEP: checking the pod's current state and verifying that restartCount is present Apr 11 13:46:51.045: INFO: Initial restart count of pod busybox-553711b3-cdd0-4047-84ab-978ddcadcf02 is 0 STEP: 
deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:50:51.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3183" for this suite. Apr 11 13:50:57.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:50:57.730: INFO: namespace container-probe-3183 deletion completed in 6.095282251s • [SLOW TEST:250.867 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:50:57.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-phbc STEP: Creating a pod to test atomic-volume-subpath Apr 
11 13:50:57.823: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-phbc" in namespace "subpath-216" to be "success or failure" Apr 11 13:50:57.832: INFO: Pod "pod-subpath-test-configmap-phbc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.898642ms Apr 11 13:50:59.836: INFO: Pod "pod-subpath-test-configmap-phbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012710992s Apr 11 13:51:01.840: INFO: Pod "pod-subpath-test-configmap-phbc": Phase="Running", Reason="", readiness=true. Elapsed: 4.016963565s Apr 11 13:51:03.844: INFO: Pod "pod-subpath-test-configmap-phbc": Phase="Running", Reason="", readiness=true. Elapsed: 6.02152885s Apr 11 13:51:05.850: INFO: Pod "pod-subpath-test-configmap-phbc": Phase="Running", Reason="", readiness=true. Elapsed: 8.027386302s Apr 11 13:51:07.855: INFO: Pod "pod-subpath-test-configmap-phbc": Phase="Running", Reason="", readiness=true. Elapsed: 10.032044984s Apr 11 13:51:09.859: INFO: Pod "pod-subpath-test-configmap-phbc": Phase="Running", Reason="", readiness=true. Elapsed: 12.035881388s Apr 11 13:51:11.862: INFO: Pod "pod-subpath-test-configmap-phbc": Phase="Running", Reason="", readiness=true. Elapsed: 14.039560287s Apr 11 13:51:13.867: INFO: Pod "pod-subpath-test-configmap-phbc": Phase="Running", Reason="", readiness=true. Elapsed: 16.043866685s Apr 11 13:51:15.871: INFO: Pod "pod-subpath-test-configmap-phbc": Phase="Running", Reason="", readiness=true. Elapsed: 18.048401909s Apr 11 13:51:17.875: INFO: Pod "pod-subpath-test-configmap-phbc": Phase="Running", Reason="", readiness=true. Elapsed: 20.052314329s Apr 11 13:51:19.879: INFO: Pod "pod-subpath-test-configmap-phbc": Phase="Running", Reason="", readiness=true. Elapsed: 22.055844846s Apr 11 13:51:21.883: INFO: Pod "pod-subpath-test-configmap-phbc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.059992517s STEP: Saw pod success Apr 11 13:51:21.883: INFO: Pod "pod-subpath-test-configmap-phbc" satisfied condition "success or failure" Apr 11 13:51:21.886: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-phbc container test-container-subpath-configmap-phbc: STEP: delete the pod Apr 11 13:51:21.920: INFO: Waiting for pod pod-subpath-test-configmap-phbc to disappear Apr 11 13:51:21.932: INFO: Pod pod-subpath-test-configmap-phbc no longer exists STEP: Deleting pod pod-subpath-test-configmap-phbc Apr 11 13:51:21.932: INFO: Deleting pod "pod-subpath-test-configmap-phbc" in namespace "subpath-216" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:51:21.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-216" for this suite. Apr 11 13:51:27.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:51:28.044: INFO: namespace subpath-216 deletion completed in 6.105825196s • [SLOW TEST:30.313 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:51:28.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4384.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4384.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4384.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4384.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4384.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4384.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 11 13:51:34.178: INFO: DNS probes using dns-4384/dns-test-b6ec96d7-b8f3-4244-ac14-1744fedddd12 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:51:34.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4384" for this suite. Apr 11 13:51:40.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:51:40.370: INFO: namespace dns-4384 deletion completed in 6.153735648s • [SLOW TEST:12.325 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:51:40.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 11 13:51:40.447: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 13:51:40.465: INFO: Waiting for terminating namespaces to be deleted... Apr 11 13:51:40.468: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 11 13:51:40.474: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 11 13:51:40.474: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 13:51:40.474: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 11 13:51:40.474: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 13:51:40.474: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 11 13:51:40.480: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 11 13:51:40.480: INFO: Container coredns ready: true, restart count 0 Apr 11 13:51:40.480: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 11 13:51:40.480: INFO: Container coredns ready: true, restart count 0 Apr 11 13:51:40.480: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 11 13:51:40.480: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 13:51:40.480: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 11 13:51:40.480: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-79a88a16-f5fc-4a82-aa66-f2080acffd04 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-79a88a16-f5fc-4a82-aa66-f2080acffd04 off the node iruya-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-79a88a16-f5fc-4a82-aa66-f2080acffd04 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:51:48.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7750" for this suite. Apr 11 13:52:06.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:52:06.726: INFO: namespace sched-pred-7750 deletion completed in 18.093956081s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:26.356 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 13:52:06.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-3c38d873-3bd6-45a9-9732-ba4a7202a136 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:52:12.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5316" for this suite. Apr 11 13:52:34.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 13:52:34.943: INFO: namespace configmap-5316 deletion completed in 22.088386543s • [SLOW TEST:28.216 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Apr 11 13:52:34.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-15bc3a3d-fe06-496f-ad59-15e2db506313 STEP: Creating a pod to test consume secrets Apr 11 13:52:35.057: INFO: Waiting up to 5m0s for pod "pod-secrets-315e21ec-8b91-4380-ad89-1505fcdf6866" in namespace "secrets-1529" to be "success or failure" Apr 11 13:52:35.074: INFO: Pod "pod-secrets-315e21ec-8b91-4380-ad89-1505fcdf6866": Phase="Pending", Reason="", readiness=false. Elapsed: 16.882ms Apr 11 13:52:37.078: INFO: Pod "pod-secrets-315e21ec-8b91-4380-ad89-1505fcdf6866": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021117467s Apr 11 13:52:39.083: INFO: Pod "pod-secrets-315e21ec-8b91-4380-ad89-1505fcdf6866": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025659581s STEP: Saw pod success Apr 11 13:52:39.083: INFO: Pod "pod-secrets-315e21ec-8b91-4380-ad89-1505fcdf6866" satisfied condition "success or failure" Apr 11 13:52:39.086: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-315e21ec-8b91-4380-ad89-1505fcdf6866 container secret-volume-test: STEP: delete the pod Apr 11 13:52:39.108: INFO: Waiting for pod pod-secrets-315e21ec-8b91-4380-ad89-1505fcdf6866 to disappear Apr 11 13:52:39.118: INFO: Pod pod-secrets-315e21ec-8b91-4380-ad89-1505fcdf6866 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:52:39.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1529" for this suite. 
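Editor's aside: the Elapsed values above ("16.882ms", "2.021117467s", and timeouts like "5m0s") are Go duration strings. A rough parser handling only the ms/s/m units that actually appear in this log (real Go durations allow more units, e.g. ns, us, h; the helper name is illustrative):

```python
import re

_UNITS = {"ms": 0.001, "s": 1.0, "m": 60.0}

def go_duration_to_seconds(text):
    """Convert a Go duration like '5m0s' or '16.882ms' to float seconds."""
    # "ms" must be tried before "m" and "s" so "16.882ms" is not read as minutes.
    parts = re.findall(r"(\d+(?:\.\d+)?)(ms|s|m)", text)
    if not parts:
        raise ValueError(f"unrecognized duration: {text!r}")
    return sum(float(value) * _UNITS[unit] for value, unit in parts)

print(go_duration_to_seconds("5m0s"))      # prints 300.0
print(go_duration_to_seconds("16.882ms"))  # a little under 0.017
```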
Apr 11 13:52:45.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:52:45.216: INFO: namespace secrets-1529 deletion completed in 6.094655875s
• [SLOW TEST:10.273 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:52:45.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Apr 11 13:52:55.317: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4337 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 11 13:52:55.318: INFO: >>> kubeConfig: /root/.kube/config
I0411 13:52:55.355675 6 log.go:172] (0xc0025daa50) (0xc0020ed2c0)
Create stream I0411 13:52:55.355710 6 log.go:172] (0xc0025daa50) (0xc0020ed2c0) Stream added, broadcasting: 1 I0411 13:52:55.357781 6 log.go:172] (0xc0025daa50) Reply frame received for 1 I0411 13:52:55.357826 6 log.go:172] (0xc0025daa50) (0xc002ebcc80) Create stream I0411 13:52:55.357843 6 log.go:172] (0xc0025daa50) (0xc002ebcc80) Stream added, broadcasting: 3 I0411 13:52:55.358786 6 log.go:172] (0xc0025daa50) Reply frame received for 3 I0411 13:52:55.358809 6 log.go:172] (0xc0025daa50) (0xc002d78e60) Create stream I0411 13:52:55.358819 6 log.go:172] (0xc0025daa50) (0xc002d78e60) Stream added, broadcasting: 5 I0411 13:52:55.359629 6 log.go:172] (0xc0025daa50) Reply frame received for 5 I0411 13:52:55.410142 6 log.go:172] (0xc0025daa50) Data frame received for 5 I0411 13:52:55.410189 6 log.go:172] (0xc002d78e60) (5) Data frame handling I0411 13:52:55.410215 6 log.go:172] (0xc0025daa50) Data frame received for 3 I0411 13:52:55.410228 6 log.go:172] (0xc002ebcc80) (3) Data frame handling I0411 13:52:55.410244 6 log.go:172] (0xc002ebcc80) (3) Data frame sent I0411 13:52:55.410256 6 log.go:172] (0xc0025daa50) Data frame received for 3 I0411 13:52:55.410269 6 log.go:172] (0xc002ebcc80) (3) Data frame handling I0411 13:52:55.411893 6 log.go:172] (0xc0025daa50) Data frame received for 1 I0411 13:52:55.411912 6 log.go:172] (0xc0020ed2c0) (1) Data frame handling I0411 13:52:55.411929 6 log.go:172] (0xc0020ed2c0) (1) Data frame sent I0411 13:52:55.411950 6 log.go:172] (0xc0025daa50) (0xc0020ed2c0) Stream removed, broadcasting: 1 I0411 13:52:55.411993 6 log.go:172] (0xc0025daa50) Go away received I0411 13:52:55.412050 6 log.go:172] (0xc0025daa50) (0xc0020ed2c0) Stream removed, broadcasting: 1 I0411 13:52:55.412063 6 log.go:172] (0xc0025daa50) (0xc002ebcc80) Stream removed, broadcasting: 3 I0411 13:52:55.412072 6 log.go:172] (0xc0025daa50) (0xc002d78e60) Stream removed, broadcasting: 5 Apr 11 13:52:55.412: INFO: Exec stderr: "" Apr 11 13:52:55.412: INFO: ExecWithOptions 
{Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4337 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 11 13:52:55.412: INFO: >>> kubeConfig: /root/.kube/config I0411 13:52:55.442958 6 log.go:172] (0xc0024aefd0) (0xc002d79180) Create stream I0411 13:52:55.442985 6 log.go:172] (0xc0024aefd0) (0xc002d79180) Stream added, broadcasting: 1 I0411 13:52:55.451347 6 log.go:172] (0xc0024aefd0) Reply frame received for 1 I0411 13:52:55.451391 6 log.go:172] (0xc0024aefd0) (0xc0008aa000) Create stream I0411 13:52:55.451403 6 log.go:172] (0xc0024aefd0) (0xc0008aa000) Stream added, broadcasting: 3 I0411 13:52:55.455115 6 log.go:172] (0xc0024aefd0) Reply frame received for 3 I0411 13:52:55.455157 6 log.go:172] (0xc0024aefd0) (0xc0008aa140) Create stream I0411 13:52:55.455167 6 log.go:172] (0xc0024aefd0) (0xc0008aa140) Stream added, broadcasting: 5 I0411 13:52:55.456282 6 log.go:172] (0xc0024aefd0) Reply frame received for 5 I0411 13:52:55.516555 6 log.go:172] (0xc0024aefd0) Data frame received for 5 I0411 13:52:55.516593 6 log.go:172] (0xc0008aa140) (5) Data frame handling I0411 13:52:55.516616 6 log.go:172] (0xc0024aefd0) Data frame received for 3 I0411 13:52:55.516627 6 log.go:172] (0xc0008aa000) (3) Data frame handling I0411 13:52:55.516656 6 log.go:172] (0xc0008aa000) (3) Data frame sent I0411 13:52:55.516669 6 log.go:172] (0xc0024aefd0) Data frame received for 3 I0411 13:52:55.516679 6 log.go:172] (0xc0008aa000) (3) Data frame handling I0411 13:52:55.518551 6 log.go:172] (0xc0024aefd0) Data frame received for 1 I0411 13:52:55.518586 6 log.go:172] (0xc002d79180) (1) Data frame handling I0411 13:52:55.518607 6 log.go:172] (0xc002d79180) (1) Data frame sent I0411 13:52:55.518624 6 log.go:172] (0xc0024aefd0) (0xc002d79180) Stream removed, broadcasting: 1 I0411 13:52:55.518696 6 log.go:172] (0xc0024aefd0) Go away received I0411 13:52:55.518727 6 log.go:172] (0xc0024aefd0) (0xc002d79180) Stream 
removed, broadcasting: 1 I0411 13:52:55.518766 6 log.go:172] (0xc0024aefd0) (0xc0008aa000) Stream removed, broadcasting: 3 I0411 13:52:55.518797 6 log.go:172] (0xc0024aefd0) (0xc0008aa140) Stream removed, broadcasting: 5 Apr 11 13:52:55.518: INFO: Exec stderr: "" Apr 11 13:52:55.518: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4337 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 11 13:52:55.518: INFO: >>> kubeConfig: /root/.kube/config I0411 13:52:55.551494 6 log.go:172] (0xc001f68f20) (0xc002ebd040) Create stream I0411 13:52:55.551514 6 log.go:172] (0xc001f68f20) (0xc002ebd040) Stream added, broadcasting: 1 I0411 13:52:55.553915 6 log.go:172] (0xc001f68f20) Reply frame received for 1 I0411 13:52:55.553971 6 log.go:172] (0xc001f68f20) (0xc0020ed360) Create stream I0411 13:52:55.554051 6 log.go:172] (0xc001f68f20) (0xc0020ed360) Stream added, broadcasting: 3 I0411 13:52:55.555518 6 log.go:172] (0xc001f68f20) Reply frame received for 3 I0411 13:52:55.555573 6 log.go:172] (0xc001f68f20) (0xc0020ed400) Create stream I0411 13:52:55.555596 6 log.go:172] (0xc001f68f20) (0xc0020ed400) Stream added, broadcasting: 5 I0411 13:52:55.556477 6 log.go:172] (0xc001f68f20) Reply frame received for 5 I0411 13:52:55.609397 6 log.go:172] (0xc001f68f20) Data frame received for 5 I0411 13:52:55.609445 6 log.go:172] (0xc0020ed400) (5) Data frame handling I0411 13:52:55.609475 6 log.go:172] (0xc001f68f20) Data frame received for 3 I0411 13:52:55.609490 6 log.go:172] (0xc0020ed360) (3) Data frame handling I0411 13:52:55.609504 6 log.go:172] (0xc0020ed360) (3) Data frame sent I0411 13:52:55.609521 6 log.go:172] (0xc001f68f20) Data frame received for 3 I0411 13:52:55.609538 6 log.go:172] (0xc0020ed360) (3) Data frame handling I0411 13:52:55.610896 6 log.go:172] (0xc001f68f20) Data frame received for 1 I0411 13:52:55.610932 6 log.go:172] (0xc002ebd040) (1) Data frame handling I0411 
13:52:55.610958 6 log.go:172] (0xc002ebd040) (1) Data frame sent I0411 13:52:55.610974 6 log.go:172] (0xc001f68f20) (0xc002ebd040) Stream removed, broadcasting: 1 I0411 13:52:55.610990 6 log.go:172] (0xc001f68f20) Go away received I0411 13:52:55.611151 6 log.go:172] (0xc001f68f20) (0xc002ebd040) Stream removed, broadcasting: 1 I0411 13:52:55.611183 6 log.go:172] (0xc001f68f20) (0xc0020ed360) Stream removed, broadcasting: 3 I0411 13:52:55.611204 6 log.go:172] (0xc001f68f20) (0xc0020ed400) Stream removed, broadcasting: 5 Apr 11 13:52:55.611: INFO: Exec stderr: "" Apr 11 13:52:55.611: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4337 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 11 13:52:55.611: INFO: >>> kubeConfig: /root/.kube/config I0411 13:52:55.643596 6 log.go:172] (0xc001f311e0) (0xc0008aaa00) Create stream I0411 13:52:55.643634 6 log.go:172] (0xc001f311e0) (0xc0008aaa00) Stream added, broadcasting: 1 I0411 13:52:55.646749 6 log.go:172] (0xc001f311e0) Reply frame received for 1 I0411 13:52:55.646799 6 log.go:172] (0xc001f311e0) (0xc002ebd0e0) Create stream I0411 13:52:55.646815 6 log.go:172] (0xc001f311e0) (0xc002ebd0e0) Stream added, broadcasting: 3 I0411 13:52:55.647787 6 log.go:172] (0xc001f311e0) Reply frame received for 3 I0411 13:52:55.647838 6 log.go:172] (0xc001f311e0) (0xc0020ed4a0) Create stream I0411 13:52:55.647859 6 log.go:172] (0xc001f311e0) (0xc0020ed4a0) Stream added, broadcasting: 5 I0411 13:52:55.648647 6 log.go:172] (0xc001f311e0) Reply frame received for 5 I0411 13:52:55.728606 6 log.go:172] (0xc001f311e0) Data frame received for 5 I0411 13:52:55.728660 6 log.go:172] (0xc0020ed4a0) (5) Data frame handling I0411 13:52:55.728687 6 log.go:172] (0xc001f311e0) Data frame received for 3 I0411 13:52:55.728700 6 log.go:172] (0xc002ebd0e0) (3) Data frame handling I0411 13:52:55.728711 6 log.go:172] (0xc002ebd0e0) (3) Data frame sent I0411 
13:52:55.728729 6 log.go:172] (0xc001f311e0) Data frame received for 3 I0411 13:52:55.728748 6 log.go:172] (0xc002ebd0e0) (3) Data frame handling I0411 13:52:55.730335 6 log.go:172] (0xc001f311e0) Data frame received for 1 I0411 13:52:55.730362 6 log.go:172] (0xc0008aaa00) (1) Data frame handling I0411 13:52:55.730376 6 log.go:172] (0xc0008aaa00) (1) Data frame sent I0411 13:52:55.730400 6 log.go:172] (0xc001f311e0) (0xc0008aaa00) Stream removed, broadcasting: 1 I0411 13:52:55.730422 6 log.go:172] (0xc001f311e0) Go away received I0411 13:52:55.730498 6 log.go:172] (0xc001f311e0) (0xc0008aaa00) Stream removed, broadcasting: 1 I0411 13:52:55.730523 6 log.go:172] (0xc001f311e0) (0xc002ebd0e0) Stream removed, broadcasting: 3 I0411 13:52:55.730536 6 log.go:172] (0xc001f311e0) (0xc0020ed4a0) Stream removed, broadcasting: 5 Apr 11 13:52:55.730: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 11 13:52:55.730: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4337 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 11 13:52:55.730: INFO: >>> kubeConfig: /root/.kube/config I0411 13:52:55.762788 6 log.go:172] (0xc0025dbd90) (0xc0020ed7c0) Create stream I0411 13:52:55.762831 6 log.go:172] (0xc0025dbd90) (0xc0020ed7c0) Stream added, broadcasting: 1 I0411 13:52:55.765889 6 log.go:172] (0xc0025dbd90) Reply frame received for 1 I0411 13:52:55.765929 6 log.go:172] (0xc0025dbd90) (0xc002ebd180) Create stream I0411 13:52:55.765943 6 log.go:172] (0xc0025dbd90) (0xc002ebd180) Stream added, broadcasting: 3 I0411 13:52:55.767092 6 log.go:172] (0xc0025dbd90) Reply frame received for 3 I0411 13:52:55.767132 6 log.go:172] (0xc0025dbd90) (0xc0008aac80) Create stream I0411 13:52:55.767151 6 log.go:172] (0xc0025dbd90) (0xc0008aac80) Stream added, broadcasting: 5 I0411 13:52:55.768105 6 log.go:172] (0xc0025dbd90) Reply 
frame received for 5 I0411 13:52:55.811852 6 log.go:172] (0xc0025dbd90) Data frame received for 3 I0411 13:52:55.811879 6 log.go:172] (0xc002ebd180) (3) Data frame handling I0411 13:52:55.811892 6 log.go:172] (0xc002ebd180) (3) Data frame sent I0411 13:52:55.811900 6 log.go:172] (0xc0025dbd90) Data frame received for 3 I0411 13:52:55.811913 6 log.go:172] (0xc002ebd180) (3) Data frame handling I0411 13:52:55.811932 6 log.go:172] (0xc0025dbd90) Data frame received for 5 I0411 13:52:55.811942 6 log.go:172] (0xc0008aac80) (5) Data frame handling I0411 13:52:55.813428 6 log.go:172] (0xc0025dbd90) Data frame received for 1 I0411 13:52:55.813449 6 log.go:172] (0xc0020ed7c0) (1) Data frame handling I0411 13:52:55.813464 6 log.go:172] (0xc0020ed7c0) (1) Data frame sent I0411 13:52:55.813501 6 log.go:172] (0xc0025dbd90) (0xc0020ed7c0) Stream removed, broadcasting: 1 I0411 13:52:55.813590 6 log.go:172] (0xc0025dbd90) (0xc0020ed7c0) Stream removed, broadcasting: 1 I0411 13:52:55.813603 6 log.go:172] (0xc0025dbd90) (0xc002ebd180) Stream removed, broadcasting: 3 I0411 13:52:55.813685 6 log.go:172] (0xc0025dbd90) Go away received I0411 13:52:55.813728 6 log.go:172] (0xc0025dbd90) (0xc0008aac80) Stream removed, broadcasting: 5 Apr 11 13:52:55.813: INFO: Exec stderr: "" Apr 11 13:52:55.813: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4337 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 11 13:52:55.813: INFO: >>> kubeConfig: /root/.kube/config I0411 13:52:55.846562 6 log.go:172] (0xc00322c6e0) (0xc002ebd540) Create stream I0411 13:52:55.846596 6 log.go:172] (0xc00322c6e0) (0xc002ebd540) Stream added, broadcasting: 1 I0411 13:52:55.848691 6 log.go:172] (0xc00322c6e0) Reply frame received for 1 I0411 13:52:55.848715 6 log.go:172] (0xc00322c6e0) (0xc002e141e0) Create stream I0411 13:52:55.848723 6 log.go:172] (0xc00322c6e0) (0xc002e141e0) Stream added, broadcasting: 3 I0411 
13:52:55.849434 6 log.go:172] (0xc00322c6e0) Reply frame received for 3 I0411 13:52:55.849457 6 log.go:172] (0xc00322c6e0) (0xc0020ed860) Create stream I0411 13:52:55.849465 6 log.go:172] (0xc00322c6e0) (0xc0020ed860) Stream added, broadcasting: 5 I0411 13:52:55.850021 6 log.go:172] (0xc00322c6e0) Reply frame received for 5 I0411 13:52:55.900259 6 log.go:172] (0xc00322c6e0) Data frame received for 5 I0411 13:52:55.900299 6 log.go:172] (0xc0020ed860) (5) Data frame handling I0411 13:52:55.900333 6 log.go:172] (0xc00322c6e0) Data frame received for 3 I0411 13:52:55.900347 6 log.go:172] (0xc002e141e0) (3) Data frame handling I0411 13:52:55.900367 6 log.go:172] (0xc002e141e0) (3) Data frame sent I0411 13:52:55.900385 6 log.go:172] (0xc00322c6e0) Data frame received for 3 I0411 13:52:55.900393 6 log.go:172] (0xc002e141e0) (3) Data frame handling I0411 13:52:55.901796 6 log.go:172] (0xc00322c6e0) Data frame received for 1 I0411 13:52:55.901836 6 log.go:172] (0xc002ebd540) (1) Data frame handling I0411 13:52:55.901868 6 log.go:172] (0xc002ebd540) (1) Data frame sent I0411 13:52:55.901896 6 log.go:172] (0xc00322c6e0) (0xc002ebd540) Stream removed, broadcasting: 1 I0411 13:52:55.901919 6 log.go:172] (0xc00322c6e0) Go away received I0411 13:52:55.902044 6 log.go:172] (0xc00322c6e0) (0xc002ebd540) Stream removed, broadcasting: 1 I0411 13:52:55.902064 6 log.go:172] (0xc00322c6e0) (0xc002e141e0) Stream removed, broadcasting: 3 I0411 13:52:55.902074 6 log.go:172] (0xc00322c6e0) (0xc0020ed860) Stream removed, broadcasting: 5 Apr 11 13:52:55.902: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 11 13:52:55.902: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4337 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 11 13:52:55.902: INFO: >>> kubeConfig: /root/.kube/config I0411 13:52:55.937678 6 
log.go:172] (0xc001a3b760) (0xc002e145a0) Create stream I0411 13:52:55.937710 6 log.go:172] (0xc001a3b760) (0xc002e145a0) Stream added, broadcasting: 1 I0411 13:52:55.940761 6 log.go:172] (0xc001a3b760) Reply frame received for 1 I0411 13:52:55.940829 6 log.go:172] (0xc001a3b760) (0xc0020ed900) Create stream I0411 13:52:55.940856 6 log.go:172] (0xc001a3b760) (0xc0020ed900) Stream added, broadcasting: 3 I0411 13:52:55.942126 6 log.go:172] (0xc001a3b760) Reply frame received for 3 I0411 13:52:55.942176 6 log.go:172] (0xc001a3b760) (0xc002ebd5e0) Create stream I0411 13:52:55.942193 6 log.go:172] (0xc001a3b760) (0xc002ebd5e0) Stream added, broadcasting: 5 I0411 13:52:55.943140 6 log.go:172] (0xc001a3b760) Reply frame received for 5 I0411 13:52:55.996556 6 log.go:172] (0xc001a3b760) Data frame received for 5 I0411 13:52:55.996583 6 log.go:172] (0xc002ebd5e0) (5) Data frame handling I0411 13:52:55.996612 6 log.go:172] (0xc001a3b760) Data frame received for 3 I0411 13:52:55.996621 6 log.go:172] (0xc0020ed900) (3) Data frame handling I0411 13:52:55.996632 6 log.go:172] (0xc0020ed900) (3) Data frame sent I0411 13:52:55.996654 6 log.go:172] (0xc001a3b760) Data frame received for 3 I0411 13:52:55.996665 6 log.go:172] (0xc0020ed900) (3) Data frame handling I0411 13:52:55.998390 6 log.go:172] (0xc001a3b760) Data frame received for 1 I0411 13:52:55.998406 6 log.go:172] (0xc002e145a0) (1) Data frame handling I0411 13:52:55.998430 6 log.go:172] (0xc002e145a0) (1) Data frame sent I0411 13:52:55.998451 6 log.go:172] (0xc001a3b760) (0xc002e145a0) Stream removed, broadcasting: 1 I0411 13:52:55.998470 6 log.go:172] (0xc001a3b760) Go away received I0411 13:52:55.998664 6 log.go:172] (0xc001a3b760) (0xc002e145a0) Stream removed, broadcasting: 1 I0411 13:52:55.998695 6 log.go:172] (0xc001a3b760) (0xc0020ed900) Stream removed, broadcasting: 3 I0411 13:52:55.998705 6 log.go:172] (0xc001a3b760) (0xc002ebd5e0) Stream removed, broadcasting: 5 Apr 11 13:52:55.998: INFO: Exec stderr: "" Apr 11 
13:52:55.998: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4337 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 11 13:52:55.998: INFO: >>> kubeConfig: /root/.kube/config I0411 13:52:56.030850 6 log.go:172] (0xc00322d6b0) (0xc002ebd900) Create stream I0411 13:52:56.030875 6 log.go:172] (0xc00322d6b0) (0xc002ebd900) Stream added, broadcasting: 1 I0411 13:52:56.034016 6 log.go:172] (0xc00322d6b0) Reply frame received for 1 I0411 13:52:56.034064 6 log.go:172] (0xc00322d6b0) (0xc002ebd9a0) Create stream I0411 13:52:56.034085 6 log.go:172] (0xc00322d6b0) (0xc002ebd9a0) Stream added, broadcasting: 3 I0411 13:52:56.035062 6 log.go:172] (0xc00322d6b0) Reply frame received for 3 I0411 13:52:56.035101 6 log.go:172] (0xc00322d6b0) (0xc0020ed9a0) Create stream I0411 13:52:56.035116 6 log.go:172] (0xc00322d6b0) (0xc0020ed9a0) Stream added, broadcasting: 5 I0411 13:52:56.036149 6 log.go:172] (0xc00322d6b0) Reply frame received for 5 I0411 13:52:56.091350 6 log.go:172] (0xc00322d6b0) Data frame received for 5 I0411 13:52:56.091405 6 log.go:172] (0xc0020ed9a0) (5) Data frame handling I0411 13:52:56.091444 6 log.go:172] (0xc00322d6b0) Data frame received for 3 I0411 13:52:56.091469 6 log.go:172] (0xc002ebd9a0) (3) Data frame handling I0411 13:52:56.091494 6 log.go:172] (0xc002ebd9a0) (3) Data frame sent I0411 13:52:56.091519 6 log.go:172] (0xc00322d6b0) Data frame received for 3 I0411 13:52:56.091538 6 log.go:172] (0xc002ebd9a0) (3) Data frame handling I0411 13:52:56.093273 6 log.go:172] (0xc00322d6b0) Data frame received for 1 I0411 13:52:56.093305 6 log.go:172] (0xc002ebd900) (1) Data frame handling I0411 13:52:56.093328 6 log.go:172] (0xc002ebd900) (1) Data frame sent I0411 13:52:56.093383 6 log.go:172] (0xc00322d6b0) (0xc002ebd900) Stream removed, broadcasting: 1 I0411 13:52:56.093492 6 log.go:172] (0xc00322d6b0) (0xc002ebd900) Stream removed, 
broadcasting: 1 I0411 13:52:56.093502 6 log.go:172] (0xc00322d6b0) (0xc002ebd9a0) Stream removed, broadcasting: 3 I0411 13:52:56.093628 6 log.go:172] (0xc00322d6b0) Go away received I0411 13:52:56.093675 6 log.go:172] (0xc00322d6b0) (0xc0020ed9a0) Stream removed, broadcasting: 5 Apr 11 13:52:56.093: INFO: Exec stderr: "" Apr 11 13:52:56.093: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4337 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 11 13:52:56.093: INFO: >>> kubeConfig: /root/.kube/config I0411 13:52:56.127685 6 log.go:172] (0xc002a61d90) (0xc0020edcc0) Create stream I0411 13:52:56.127718 6 log.go:172] (0xc002a61d90) (0xc0020edcc0) Stream added, broadcasting: 1 I0411 13:52:56.135929 6 log.go:172] (0xc002a61d90) Reply frame received for 1 I0411 13:52:56.135998 6 log.go:172] (0xc002a61d90) (0xc0020edd60) Create stream I0411 13:52:56.136019 6 log.go:172] (0xc002a61d90) (0xc0020edd60) Stream added, broadcasting: 3 I0411 13:52:56.137731 6 log.go:172] (0xc002a61d90) Reply frame received for 3 I0411 13:52:56.137777 6 log.go:172] (0xc002a61d90) (0xc002ebda40) Create stream I0411 13:52:56.137793 6 log.go:172] (0xc002a61d90) (0xc002ebda40) Stream added, broadcasting: 5 I0411 13:52:56.139195 6 log.go:172] (0xc002a61d90) Reply frame received for 5 I0411 13:52:56.188723 6 log.go:172] (0xc002a61d90) Data frame received for 5 I0411 13:52:56.188775 6 log.go:172] (0xc002ebda40) (5) Data frame handling I0411 13:52:56.188812 6 log.go:172] (0xc002a61d90) Data frame received for 3 I0411 13:52:56.188835 6 log.go:172] (0xc0020edd60) (3) Data frame handling I0411 13:52:56.188860 6 log.go:172] (0xc0020edd60) (3) Data frame sent I0411 13:52:56.188880 6 log.go:172] (0xc002a61d90) Data frame received for 3 I0411 13:52:56.188900 6 log.go:172] (0xc0020edd60) (3) Data frame handling I0411 13:52:56.190032 6 log.go:172] (0xc002a61d90) Data frame received for 1 I0411 
13:52:56.190054 6 log.go:172] (0xc0020edcc0) (1) Data frame handling I0411 13:52:56.190066 6 log.go:172] (0xc0020edcc0) (1) Data frame sent I0411 13:52:56.190085 6 log.go:172] (0xc002a61d90) (0xc0020edcc0) Stream removed, broadcasting: 1 I0411 13:52:56.190102 6 log.go:172] (0xc002a61d90) Go away received I0411 13:52:56.190189 6 log.go:172] (0xc002a61d90) (0xc0020edcc0) Stream removed, broadcasting: 1 I0411 13:52:56.190225 6 log.go:172] (0xc002a61d90) (0xc0020edd60) Stream removed, broadcasting: 3 I0411 13:52:56.190250 6 log.go:172] (0xc002a61d90) (0xc002ebda40) Stream removed, broadcasting: 5 Apr 11 13:52:56.190: INFO: Exec stderr: "" Apr 11 13:52:56.190: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4337 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 11 13:52:56.190: INFO: >>> kubeConfig: /root/.kube/config I0411 13:52:56.223739 6 log.go:172] (0xc002d1e370) (0xc002ebdd60) Create stream I0411 13:52:56.223765 6 log.go:172] (0xc002d1e370) (0xc002ebdd60) Stream added, broadcasting: 1 I0411 13:52:56.232193 6 log.go:172] (0xc002d1e370) Reply frame received for 1 I0411 13:52:56.232248 6 log.go:172] (0xc002d1e370) (0xc0020ec000) Create stream I0411 13:52:56.232263 6 log.go:172] (0xc002d1e370) (0xc0020ec000) Stream added, broadcasting: 3 I0411 13:52:56.233042 6 log.go:172] (0xc002d1e370) Reply frame received for 3 I0411 13:52:56.233076 6 log.go:172] (0xc002d1e370) (0xc0020ec0a0) Create stream I0411 13:52:56.233087 6 log.go:172] (0xc002d1e370) (0xc0020ec0a0) Stream added, broadcasting: 5 I0411 13:52:56.234047 6 log.go:172] (0xc002d1e370) Reply frame received for 5 I0411 13:52:56.308188 6 log.go:172] (0xc002d1e370) Data frame received for 5 I0411 13:52:56.308245 6 log.go:172] (0xc002d1e370) Data frame received for 3 I0411 13:52:56.308298 6 log.go:172] (0xc0020ec000) (3) Data frame handling I0411 13:52:56.308311 6 log.go:172] (0xc0020ec000) (3) Data 
frame sent I0411 13:52:56.308329 6 log.go:172] (0xc002d1e370) Data frame received for 3 I0411 13:52:56.308340 6 log.go:172] (0xc0020ec000) (3) Data frame handling I0411 13:52:56.308369 6 log.go:172] (0xc0020ec0a0) (5) Data frame handling I0411 13:52:56.310148 6 log.go:172] (0xc002d1e370) Data frame received for 1 I0411 13:52:56.310174 6 log.go:172] (0xc002ebdd60) (1) Data frame handling I0411 13:52:56.310186 6 log.go:172] (0xc002ebdd60) (1) Data frame sent I0411 13:52:56.310205 6 log.go:172] (0xc002d1e370) (0xc002ebdd60) Stream removed, broadcasting: 1 I0411 13:52:56.310221 6 log.go:172] (0xc002d1e370) Go away received I0411 13:52:56.310391 6 log.go:172] (0xc002d1e370) (0xc002ebdd60) Stream removed, broadcasting: 1 I0411 13:52:56.310426 6 log.go:172] (0xc002d1e370) (0xc0020ec000) Stream removed, broadcasting: 3 I0411 13:52:56.310443 6 log.go:172] (0xc002d1e370) (0xc0020ec0a0) Stream removed, broadcasting: 5 Apr 11 13:52:56.310: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 13:52:56.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4337" for this suite. 
Apr 11 13:53:46.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:53:46.403: INFO: namespace e2e-kubelet-etc-hosts-4337 deletion completed in 50.08871919s
• [SLOW TEST:61.187 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:53:46.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:54:12.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1621" for this suite.
Apr 11 13:54:18.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:54:18.713: INFO: namespace namespaces-1621 deletion completed in 6.09986972s
STEP: Destroying namespace "nsdeletetest-4435" for this suite.
Apr 11 13:54:18.715: INFO: Namespace nsdeletetest-4435 was already deleted
STEP: Destroying namespace "nsdeletetest-5885" for this suite.
Apr 11 13:54:24.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:54:24.814: INFO: namespace nsdeletetest-5885 deletion completed in 6.098344438s
• [SLOW TEST:38.410 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:54:24.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:54:28.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1364" for this suite.
Apr 11 13:55:06.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:55:07.042: INFO: namespace kubelet-test-1364 deletion completed in 38.113605546s
• [SLOW TEST:42.228 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:55:07.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-x8c7
STEP: Creating a pod to test atomic-volume-subpath
Apr 11 13:55:07.151: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-x8c7" in namespace "subpath-8436" to be "success or failure"
Apr 11 13:55:07.154: INFO: Pod "pod-subpath-test-secret-x8c7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.556978ms
Apr 11 13:55:09.159: INFO: Pod "pod-subpath-test-secret-x8c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007825229s
Apr 11 13:55:11.163: INFO: Pod "pod-subpath-test-secret-x8c7": Phase="Running", Reason="", readiness=true. Elapsed: 4.012042883s
Apr 11 13:55:13.167: INFO: Pod "pod-subpath-test-secret-x8c7": Phase="Running", Reason="", readiness=true. Elapsed: 6.016376293s
Apr 11 13:55:15.172: INFO: Pod "pod-subpath-test-secret-x8c7": Phase="Running", Reason="", readiness=true. Elapsed: 8.020858654s
Apr 11 13:55:17.176: INFO: Pod "pod-subpath-test-secret-x8c7": Phase="Running", Reason="", readiness=true. Elapsed: 10.024951304s
Apr 11 13:55:19.179: INFO: Pod "pod-subpath-test-secret-x8c7": Phase="Running", Reason="", readiness=true. Elapsed: 12.028684548s
Apr 11 13:55:21.184: INFO: Pod "pod-subpath-test-secret-x8c7": Phase="Running", Reason="", readiness=true. Elapsed: 14.033046779s
Apr 11 13:55:23.188: INFO: Pod "pod-subpath-test-secret-x8c7": Phase="Running", Reason="", readiness=true. Elapsed: 16.037271786s
Apr 11 13:55:25.192: INFO: Pod "pod-subpath-test-secret-x8c7": Phase="Running", Reason="", readiness=true. Elapsed: 18.041672196s
Apr 11 13:55:27.196: INFO: Pod "pod-subpath-test-secret-x8c7": Phase="Running", Reason="", readiness=true. Elapsed: 20.045764358s
Apr 11 13:55:29.201: INFO: Pod "pod-subpath-test-secret-x8c7": Phase="Running", Reason="", readiness=true. Elapsed: 22.050146617s
Apr 11 13:55:31.205: INFO: Pod "pod-subpath-test-secret-x8c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054044696s
STEP: Saw pod success
Apr 11 13:55:31.205: INFO: Pod "pod-subpath-test-secret-x8c7" satisfied condition "success or failure"
Apr 11 13:55:31.207: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-x8c7 container test-container-subpath-secret-x8c7:
STEP: delete the pod
Apr 11 13:55:31.242: INFO: Waiting for pod pod-subpath-test-secret-x8c7 to disappear
Apr 11 13:55:31.264: INFO: Pod pod-subpath-test-secret-x8c7 no longer exists
STEP: Deleting pod pod-subpath-test-secret-x8c7
Apr 11 13:55:31.264: INFO: Deleting pod "pod-subpath-test-secret-x8c7" in namespace "subpath-8436"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:55:31.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8436" for this suite.
Apr 11 13:55:37.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:55:37.388: INFO: namespace subpath-8436 deletion completed in 6.118213409s
• [SLOW TEST:30.345 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:55:37.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 11 13:55:37.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6645'
Apr 11 13:55:39.874: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 11 13:55:39.874: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Apr 11 13:55:39.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-6645'
Apr 11 13:55:39.986: INFO: stderr: ""
Apr 11 13:55:39.986: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:55:39.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6645" for this suite.
Apr 11 13:55:46.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:55:46.088: INFO: namespace kubectl-6645 deletion completed in 6.099063923s
• [SLOW TEST:8.700 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:55:46.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Apr 11 13:55:46.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5574'
Apr 11 13:55:46.388: INFO: stderr: ""
Apr 11 13:55:46.388: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 11 13:55:46.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5574'
Apr 11 13:55:46.502: INFO: stderr: ""
Apr 11 13:55:46.502: INFO: stdout: "update-demo-nautilus-5g6bt update-demo-nautilus-zd9wn "
Apr 11 13:55:46.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5g6bt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5574'
Apr 11 13:55:46.585: INFO: stderr: ""
Apr 11 13:55:46.585: INFO: stdout: ""
Apr 11 13:55:46.585: INFO: update-demo-nautilus-5g6bt is created but not running
Apr 11 13:55:51.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5574'
Apr 11 13:55:51.683: INFO: stderr: ""
Apr 11 13:55:51.683: INFO: stdout: "update-demo-nautilus-5g6bt update-demo-nautilus-zd9wn "
Apr 11 13:55:51.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5g6bt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5574'
Apr 11 13:55:51.784: INFO: stderr: ""
Apr 11 13:55:51.784: INFO: stdout: "true"
Apr 11 13:55:51.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5g6bt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5574'
Apr 11 13:55:51.880: INFO: stderr: ""
Apr 11 13:55:51.880: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 11 13:55:51.880: INFO: validating pod update-demo-nautilus-5g6bt
Apr 11 13:55:51.884: INFO: got data: { "image": "nautilus.jpg" }
Apr 11 13:55:51.884: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 11 13:55:51.884: INFO: update-demo-nautilus-5g6bt is verified up and running
Apr 11 13:55:51.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zd9wn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5574'
Apr 11 13:55:51.973: INFO: stderr: ""
Apr 11 13:55:51.973: INFO: stdout: "true"
Apr 11 13:55:51.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zd9wn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5574'
Apr 11 13:55:52.055: INFO: stderr: ""
Apr 11 13:55:52.055: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 11 13:55:52.055: INFO: validating pod update-demo-nautilus-zd9wn
Apr 11 13:55:52.059: INFO: got data: { "image": "nautilus.jpg" }
Apr 11 13:55:52.059: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 11 13:55:52.059: INFO: update-demo-nautilus-zd9wn is verified up and running
STEP: rolling-update to new replication controller
Apr 11 13:55:52.061: INFO: scanned /root for discovery docs:
Apr 11 13:55:52.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5574'
Apr 11 13:56:14.668: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Apr 11 13:56:14.668: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 11 13:56:14.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5574'
Apr 11 13:56:14.761: INFO: stderr: ""
Apr 11 13:56:14.761: INFO: stdout: "update-demo-kitten-fs6xf update-demo-kitten-xh9np "
Apr 11 13:56:14.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fs6xf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5574'
Apr 11 13:56:14.856: INFO: stderr: ""
Apr 11 13:56:14.856: INFO: stdout: "true"
Apr 11 13:56:14.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-fs6xf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5574'
Apr 11 13:56:14.934: INFO: stderr: ""
Apr 11 13:56:14.934: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 11 13:56:14.934: INFO: validating pod update-demo-kitten-fs6xf
Apr 11 13:56:14.938: INFO: got data: { "image": "kitten.jpg" }
Apr 11 13:56:14.938: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 11 13:56:14.938: INFO: update-demo-kitten-fs6xf is verified up and running
Apr 11 13:56:14.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xh9np -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5574'
Apr 11 13:56:15.033: INFO: stderr: ""
Apr 11 13:56:15.033: INFO: stdout: "true"
Apr 11 13:56:15.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xh9np -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5574'
Apr 11 13:56:15.117: INFO: stderr: ""
Apr 11 13:56:15.117: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 11 13:56:15.117: INFO: validating pod update-demo-kitten-xh9np
Apr 11 13:56:15.121: INFO: got data: { "image": "kitten.jpg" }
Apr 11 13:56:15.121: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 11 13:56:15.121: INFO: update-demo-kitten-xh9np is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:56:15.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5574" for this suite.
Apr 11 13:56:37.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:56:37.261: INFO: namespace kubectl-5574 deletion completed in 22.137267088s
• [SLOW TEST:51.172 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:56:37.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ecf2753c-e995-467d-9fe4-7c5e8bc468f7
STEP: Creating a pod to test consume secrets
Apr 11 13:56:37.384: INFO: Waiting up to 5m0s for pod "pod-secrets-80c879bc-bf5f-4905-a68b-4a7396bc83ab" in namespace "secrets-4864" to be "success or failure"
Apr 11 13:56:37.390: INFO: Pod "pod-secrets-80c879bc-bf5f-4905-a68b-4a7396bc83ab": Phase="Pending", Reason="", readiness=false. Elapsed: 5.787775ms
Apr 11 13:56:39.394: INFO: Pod "pod-secrets-80c879bc-bf5f-4905-a68b-4a7396bc83ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010048987s
Apr 11 13:56:41.399: INFO: Pod "pod-secrets-80c879bc-bf5f-4905-a68b-4a7396bc83ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014484315s
STEP: Saw pod success
Apr 11 13:56:41.399: INFO: Pod "pod-secrets-80c879bc-bf5f-4905-a68b-4a7396bc83ab" satisfied condition "success or failure"
Apr 11 13:56:41.402: INFO: Trying to get logs from node iruya-worker pod pod-secrets-80c879bc-bf5f-4905-a68b-4a7396bc83ab container secret-volume-test:
STEP: delete the pod
Apr 11 13:56:41.422: INFO: Waiting for pod pod-secrets-80c879bc-bf5f-4905-a68b-4a7396bc83ab to disappear
Apr 11 13:56:41.442: INFO: Pod pod-secrets-80c879bc-bf5f-4905-a68b-4a7396bc83ab no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:56:41.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4864" for this suite.
Apr 11 13:56:47.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:56:47.530: INFO: namespace secrets-4864 deletion completed in 6.085437006s
• [SLOW TEST:10.268 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:56:47.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Apr 11 13:56:48.103: INFO: created pod pod-service-account-defaultsa
Apr 11 13:56:48.103: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 11 13:56:48.110: INFO: created pod pod-service-account-mountsa
Apr 11 13:56:48.110: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 11 13:56:48.139: INFO: created pod pod-service-account-nomountsa
Apr 11 13:56:48.139: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 11 13:56:48.175: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 11 13:56:48.175: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 11 13:56:48.189: INFO: created pod pod-service-account-mountsa-mountspec
Apr 11 13:56:48.189: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 11 13:56:48.220: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 11 13:56:48.220: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 11 13:56:48.259: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 11 13:56:48.259: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 11 13:56:48.320: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 11 13:56:48.320: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 11 13:56:48.328: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 11 13:56:48.328: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:56:48.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8320" for this suite.
Apr 11 13:57:14.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:57:14.542: INFO: namespace svcaccounts-8320 deletion completed in 26.146047422s
• [SLOW TEST:27.011 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:57:14.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 11 13:57:22.636: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 11 13:57:22.663: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 11 13:57:24.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 11 13:57:24.668: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 11 13:57:26.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 11 13:57:26.668: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 11 13:57:28.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 11 13:57:28.667: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 11 13:57:30.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 11 13:57:30.667: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 11 13:57:32.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 11 13:57:32.668: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 11 13:57:34.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 11 13:57:34.670: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 11 13:57:36.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 11 13:57:36.668: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 11 13:57:38.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 11 13:57:38.668: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 11 13:57:40.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 11 13:57:40.668: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 11 13:57:42.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 11 13:57:42.668: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:57:42.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1878" for this suite.
Apr 11 13:58:04.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:58:04.773: INFO: namespace container-lifecycle-hook-1878 deletion completed in 22.101153242s
• [SLOW TEST:50.231 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:58:04.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-e7ec291c-4bcd-41df-89af-bdd5d3e694a0
STEP: Creating a pod to test consume secrets
Apr 11 13:58:04.844: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7b2f3bd5-9350-41fa-9e5c-157436879aa4" in namespace "projected-2742" to be "success or failure"
Apr 11 13:58:04.849: INFO: Pod "pod-projected-secrets-7b2f3bd5-9350-41fa-9e5c-157436879aa4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.595113ms
Apr 11 13:58:06.853: INFO: Pod "pod-projected-secrets-7b2f3bd5-9350-41fa-9e5c-157436879aa4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009017326s
Apr 11 13:58:08.857: INFO: Pod "pod-projected-secrets-7b2f3bd5-9350-41fa-9e5c-157436879aa4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012820015s
STEP: Saw pod success
Apr 11 13:58:08.857: INFO: Pod "pod-projected-secrets-7b2f3bd5-9350-41fa-9e5c-157436879aa4" satisfied condition "success or failure"
Apr 11 13:58:08.860: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-7b2f3bd5-9350-41fa-9e5c-157436879aa4 container projected-secret-volume-test:
STEP: delete the pod
Apr 11 13:58:08.895: INFO: Waiting for pod pod-projected-secrets-7b2f3bd5-9350-41fa-9e5c-157436879aa4 to disappear
Apr 11 13:58:08.925: INFO: Pod pod-projected-secrets-7b2f3bd5-9350-41fa-9e5c-157436879aa4 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:58:08.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2742" for this suite.
Apr 11 13:58:14.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:58:15.012: INFO: namespace projected-2742 deletion completed in 6.082332542s
• [SLOW TEST:10.238 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:58:15.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 11 13:58:15.067: INFO: Waiting up to 5m0s for pod "pod-cbfbf191-b480-4f9b-b902-24a536acb8bc" in namespace "emptydir-4157" to be "success or failure"
Apr 11 13:58:15.076: INFO: Pod "pod-cbfbf191-b480-4f9b-b902-24a536acb8bc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.178307ms
Apr 11 13:58:17.081: INFO: Pod "pod-cbfbf191-b480-4f9b-b902-24a536acb8bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014048582s
Apr 11 13:58:19.085: INFO: Pod "pod-cbfbf191-b480-4f9b-b902-24a536acb8bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01858298s
STEP: Saw pod success
Apr 11 13:58:19.085: INFO: Pod "pod-cbfbf191-b480-4f9b-b902-24a536acb8bc" satisfied condition "success or failure"
Apr 11 13:58:19.089: INFO: Trying to get logs from node iruya-worker2 pod pod-cbfbf191-b480-4f9b-b902-24a536acb8bc container test-container:
STEP: delete the pod
Apr 11 13:58:19.124: INFO: Waiting for pod pod-cbfbf191-b480-4f9b-b902-24a536acb8bc to disappear
Apr 11 13:58:19.135: INFO: Pod pod-cbfbf191-b480-4f9b-b902-24a536acb8bc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:58:19.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4157" for this suite.
Apr 11 13:58:25.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 13:58:25.256: INFO: namespace emptydir-4157 deletion completed in 6.117452213s
• [SLOW TEST:10.243 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 13:58:25.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-df468ce7-2d0a-40d9-b0b3-e0415c3dddb0
STEP: Creating secret with name s-test-opt-upd-ce68b0b8-55af-4260-9b9c-3fff6d5c1a8a
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-df468ce7-2d0a-40d9-b0b3-e0415c3dddb0
STEP: Updating secret s-test-opt-upd-ce68b0b8-55af-4260-9b9c-3fff6d5c1a8a
STEP: Creating secret with name s-test-opt-create-6af187de-b60f-48ad-9823-abbfeaf3970c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 13:59:53.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6698" for this suite.
Apr 11 14:00:15.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:00:15.923: INFO: namespace projected-6698 deletion completed in 22.099807017s • [SLOW TEST:110.667 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:00:15.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 11 14:00:16.012: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0ce88c0-b1ad-456a-a980-0a9ce639ccc3" in namespace "projected-9460" to be "success or failure" Apr 11 14:00:16.020: INFO: Pod "downwardapi-volume-b0ce88c0-b1ad-456a-a980-0a9ce639ccc3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.268517ms Apr 11 14:00:18.024: INFO: Pod "downwardapi-volume-b0ce88c0-b1ad-456a-a980-0a9ce639ccc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01174606s Apr 11 14:00:20.029: INFO: Pod "downwardapi-volume-b0ce88c0-b1ad-456a-a980-0a9ce639ccc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016422748s STEP: Saw pod success Apr 11 14:00:20.029: INFO: Pod "downwardapi-volume-b0ce88c0-b1ad-456a-a980-0a9ce639ccc3" satisfied condition "success or failure" Apr 11 14:00:20.032: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b0ce88c0-b1ad-456a-a980-0a9ce639ccc3 container client-container: STEP: delete the pod Apr 11 14:00:20.049: INFO: Waiting for pod downwardapi-volume-b0ce88c0-b1ad-456a-a980-0a9ce639ccc3 to disappear Apr 11 14:00:20.054: INFO: Pod downwardapi-volume-b0ce88c0-b1ad-456a-a980-0a9ce639ccc3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:00:20.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9460" for this suite. 
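The "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above show the framework's polling pattern: check the phase roughly every two seconds until a terminal phase or the timeout. A minimal sketch of that loop, with injectable clock/sleep so it can be exercised without a cluster (the helper name is an assumption, not the e2e framework's API):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches Succeeded or Failed.

    Mirrors the ~2s cadence visible in the log's "Elapsed:" lines.
    Raises TimeoutError if no terminal phase is seen within `timeout`.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.0f}s")
        sleep(interval)
```

With a fake clock, two "Pending" polls followed by "Succeeded" returns at an elapsed time of 4.0s, matching the log's ~4s pod completions.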
Apr 11 14:00:26.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:00:26.155: INFO: namespace projected-9460 deletion completed in 6.098256597s • [SLOW TEST:10.232 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:00:26.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 11 14:00:26.229: INFO: Waiting up to 5m0s for pod "pod-d11c5e0b-a458-4bd5-b770-d70f4b29dcda" in namespace "emptydir-4175" to be "success or failure" Apr 11 14:00:26.233: INFO: Pod "pod-d11c5e0b-a458-4bd5-b770-d70f4b29dcda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099215ms Apr 11 14:00:28.238: INFO: Pod "pod-d11c5e0b-a458-4bd5-b770-d70f4b29dcda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008241231s Apr 11 14:00:30.242: INFO: Pod "pod-d11c5e0b-a458-4bd5-b770-d70f4b29dcda": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012772175s STEP: Saw pod success Apr 11 14:00:30.242: INFO: Pod "pod-d11c5e0b-a458-4bd5-b770-d70f4b29dcda" satisfied condition "success or failure" Apr 11 14:00:30.246: INFO: Trying to get logs from node iruya-worker pod pod-d11c5e0b-a458-4bd5-b770-d70f4b29dcda container test-container: STEP: delete the pod Apr 11 14:00:30.280: INFO: Waiting for pod pod-d11c5e0b-a458-4bd5-b770-d70f4b29dcda to disappear Apr 11 14:00:30.287: INFO: Pod pod-d11c5e0b-a458-4bd5-b770-d70f4b29dcda no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:00:30.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4175" for this suite. Apr 11 14:00:36.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:00:36.417: INFO: namespace emptydir-4175 deletion completed in 6.126846982s • [SLOW TEST:10.262 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:00:36.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment 
variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-7125/configmap-test-2a794717-39bf-4d90-a20f-6a974f52d1e9 STEP: Creating a pod to test consume configMaps Apr 11 14:00:36.512: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b47b58a-71e8-4272-b36c-3e059ac28159" in namespace "configmap-7125" to be "success or failure" Apr 11 14:00:36.515: INFO: Pod "pod-configmaps-5b47b58a-71e8-4272-b36c-3e059ac28159": Phase="Pending", Reason="", readiness=false. Elapsed: 3.142925ms Apr 11 14:00:38.519: INFO: Pod "pod-configmaps-5b47b58a-71e8-4272-b36c-3e059ac28159": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006966947s Apr 11 14:00:40.523: INFO: Pod "pod-configmaps-5b47b58a-71e8-4272-b36c-3e059ac28159": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011475514s STEP: Saw pod success Apr 11 14:00:40.523: INFO: Pod "pod-configmaps-5b47b58a-71e8-4272-b36c-3e059ac28159" satisfied condition "success or failure" Apr 11 14:00:40.526: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-5b47b58a-71e8-4272-b36c-3e059ac28159 container env-test: STEP: delete the pod Apr 11 14:00:40.559: INFO: Waiting for pod pod-configmaps-5b47b58a-71e8-4272-b36c-3e059ac28159 to disappear Apr 11 14:00:40.569: INFO: Pod pod-configmaps-5b47b58a-71e8-4272-b36c-3e059ac28159 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:00:40.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7125" for this suite. 
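The ConfigMap test above ("should be consumable via environment variable") injects ConfigMap keys into a container's environment via `valueFrom.configMapKeyRef`. A sketch of that resolution step, under assumed key names (the log does not show the ConfigMap's data):

```python
def configmap_env(env_specs, configmaps):
    """Resolve valueFrom/configMapKeyRef-style env entries (simplified).

    env_specs: [{"name": ENV_NAME, "configMapKeyRef": {"name": ..., "key": ...}}]
    configmaps: {configmap_name: {key: value}}
    """
    env = {}
    for spec in env_specs:
        ref = spec["configMapKeyRef"]
        env[spec["name"]] = configmaps[ref["name"]][ref["key"]]
    return env

# Hypothetical data for a configmap like configmap-test-2a79... above.
cms = {"configmap-test": {"data-1": "value-1"}}
specs = [{"name": "DATA_1",
          "configMapKeyRef": {"name": "configmap-test", "key": "data-1"}}]
print(configmap_env(specs, cms))  # {'DATA_1': 'value-1'}
```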
Apr 11 14:00:46.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:00:46.710: INFO: namespace configmap-7125 deletion completed in 6.138172297s • [SLOW TEST:10.292 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:00:46.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 11 14:00:46.768: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:01:01.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "pods-3099" for this suite. Apr 11 14:01:07.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:01:07.989: INFO: namespace pods-3099 deletion completed in 6.086276148s • [SLOW TEST:21.279 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:01:07.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 11 14:01:08.125: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:08.132: INFO: Number of nodes with available pods: 0 Apr 11 14:01:08.132: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:09.137: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:09.140: INFO: Number of nodes with available pods: 0 Apr 11 14:01:09.140: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:10.136: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:10.140: INFO: Number of nodes with available pods: 0 Apr 11 14:01:10.140: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:11.137: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:11.140: INFO: Number of nodes with available pods: 0 Apr 11 14:01:11.140: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:12.136: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:12.139: INFO: Number of nodes with available pods: 2 Apr 11 14:01:12.139: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
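The repeated "DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master ... Effect:NoSchedule ...}]" lines show taint/toleration matching excluding the control-plane node. A simplified sketch of that check (Kubernetes' real rules also cover TolerationSeconds and NoExecute semantics, which are omitted here):

```python
def tolerates(taint, tolerations):
    """True if any toleration matches the taint (simplified).

    Rules sketched: a toleration with a key must match the taint's key;
    an empty key is only a wildcard with operator Exists; Equal compares
    values while Exists ignores them; an empty effect matches any effect.
    """
    for t in tolerations:
        op = t.get("operator", "Equal")
        if t.get("key") and t["key"] != taint["key"]:
            continue
        if not t.get("key") and op != "Exists":
            continue
        if op == "Equal" and t.get("value", "") != taint.get("value", ""):
            continue
        if t.get("effect") and t["effect"] != taint["effect"]:
            continue
        return True
    return False

master = {"key": "node-role.kubernetes.io/master",
          "value": "", "effect": "NoSchedule"}
print(tolerates(master, []))  # False: the node is skipped, as in the log
```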
Apr 11 14:01:12.157: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:12.203: INFO: Number of nodes with available pods: 1 Apr 11 14:01:12.203: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:13.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:13.212: INFO: Number of nodes with available pods: 1 Apr 11 14:01:13.212: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:14.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:14.212: INFO: Number of nodes with available pods: 1 Apr 11 14:01:14.212: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:15.209: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:15.213: INFO: Number of nodes with available pods: 1 Apr 11 14:01:15.213: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:16.207: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:16.211: INFO: Number of nodes with available pods: 1 Apr 11 14:01:16.211: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:17.207: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:17.210: INFO: Number of nodes with available pods: 1 Apr 11 14:01:17.210: INFO: Node 
iruya-worker is running more than one daemon pod Apr 11 14:01:18.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:18.211: INFO: Number of nodes with available pods: 1 Apr 11 14:01:18.211: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:19.207: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:19.209: INFO: Number of nodes with available pods: 1 Apr 11 14:01:19.209: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:20.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:20.212: INFO: Number of nodes with available pods: 1 Apr 11 14:01:20.212: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:21.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:21.212: INFO: Number of nodes with available pods: 1 Apr 11 14:01:21.212: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:22.282: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:22.295: INFO: Number of nodes with available pods: 1 Apr 11 14:01:22.295: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:23.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:23.212: INFO: Number of nodes with 
available pods: 1 Apr 11 14:01:23.212: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:24.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:24.213: INFO: Number of nodes with available pods: 1 Apr 11 14:01:24.213: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:25.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:25.211: INFO: Number of nodes with available pods: 2 Apr 11 14:01:25.211: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9007, will wait for the garbage collector to delete the pods Apr 11 14:01:25.274: INFO: Deleting DaemonSet.extensions daemon-set took: 6.581154ms Apr 11 14:01:25.575: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.375008ms Apr 11 14:01:32.177: INFO: Number of nodes with available pods: 0 Apr 11 14:01:32.177: INFO: Number of running nodes: 0, number of available pods: 0 Apr 11 14:01:32.179: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9007/daemonsets","resourceVersion":"4851143"},"items":null} Apr 11 14:01:32.181: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9007/pods","resourceVersion":"4851143"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:01:32.189: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "daemonsets-9007" for this suite. Apr 11 14:01:38.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:01:38.312: INFO: namespace daemonsets-9007 deletion completed in 6.119326806s • [SLOW TEST:30.322 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:01:38.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 11 14:01:38.392: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:38.431: INFO: Number of nodes with available pods: 0 Apr 11 14:01:38.431: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:39.492: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:39.495: INFO: Number of nodes with available pods: 0 Apr 11 14:01:39.495: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:40.478: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:40.482: INFO: Number of nodes with available pods: 0 Apr 11 14:01:40.482: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:01:41.440: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:41.443: INFO: Number of nodes with available pods: 1 Apr 11 14:01:41.443: INFO: Node iruya-worker2 is running more than one daemon pod Apr 11 14:01:42.435: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:42.438: INFO: Number of nodes with available pods: 2 Apr 11 14:01:42.438: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 11 14:01:42.488: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:01:42.499: INFO: Number of nodes with available pods: 2 Apr 11 14:01:42.499: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5576, will wait for the garbage collector to delete the pods Apr 11 14:01:43.588: INFO: Deleting DaemonSet.extensions daemon-set took: 4.896253ms Apr 11 14:01:43.889: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.43751ms Apr 11 14:01:47.093: INFO: Number of nodes with available pods: 0 Apr 11 14:01:47.093: INFO: Number of running nodes: 0, number of available pods: 0 Apr 11 14:01:47.096: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5576/daemonsets","resourceVersion":"4851249"},"items":null} Apr 11 14:01:47.099: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5576/pods","resourceVersion":"4851249"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:01:47.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5576" for this suite. 
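The "retry creating failed daemon pods" test above forces a daemon pod's phase to Failed and waits for the controller to delete and replace it. A toy reconcile pass capturing that behavior (illustrative only; the real DaemonSet controller also handles scheduling constraints, surge, and history):

```python
def reconcile_daemonset(nodes, pods):
    """One simplified reconcile pass for a DaemonSet.

    Every eligible node should run one healthy daemon pod; Failed pods
    are deleted and their node gets a replacement, which is the
    "revived" behavior the test waits on.
    Returns (pods_to_delete, nodes_needing_a_pod).
    """
    by_node = {}
    for p in pods:
        by_node.setdefault(p["node"], []).append(p)
    to_delete, to_create = [], []
    for node in nodes:
        node_pods = by_node.get(node, [])
        to_delete += [p for p in node_pods if p["phase"] == "Failed"]
        healthy = [p for p in node_pods if p["phase"] != "Failed"]
        if not healthy:
            to_create.append(node)
    return to_delete, to_create

pods = [{"node": "iruya-worker", "phase": "Failed"},
        {"node": "iruya-worker2", "phase": "Running"}]
print(reconcile_daemonset(["iruya-worker", "iruya-worker2"], pods))
```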
Apr 11 14:01:53.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:01:53.225: INFO: namespace daemonsets-5576 deletion completed in 6.096413775s • [SLOW TEST:14.913 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:01:53.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 11 14:01:53.342: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8222,SelfLink:/api/v1/namespaces/watch-8222/configmaps/e2e-watch-test-resource-version,UID:b2afd22d-07cd-4322-9e16-e16d186fbbb5,ResourceVersion:4851292,Generation:0,CreationTimestamp:2020-04-11 14:01:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 11 14:01:53.342: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8222,SelfLink:/api/v1/namespaces/watch-8222/configmaps/e2e-watch-test-resource-version,UID:b2afd22d-07cd-4322-9e16-e16d186fbbb5,ResourceVersion:4851293,Generation:0,CreationTimestamp:2020-04-11 14:01:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:01:53.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8222" for this suite. 
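The watch test above starts a watch from the resourceVersion returned by the first update and then observes exactly the later MODIFIED (ResourceVersion 4851292) and DELETED (4851293) events. A sketch of that replay semantics: only events newer than the starting version are delivered. Note resource versions are officially opaque; treating them as ordered integers is an assumption that happens to hold for this etcd-backed log.

```python
def events_since(events, start_rv):
    """Replay watch events strictly after resourceVersion start_rv.

    Simplification: compares resource versions numerically, which is an
    assumption (the API treats them as opaque ordering tokens).
    """
    return [e for e in events if int(e["resourceVersion"]) > int(start_rv)]

# Versions 4851290/4851291 are assumed for the create/first update;
# 4851292 and 4851293 appear in the log above.
events = [
    {"type": "ADDED",    "resourceVersion": "4851290"},
    {"type": "MODIFIED", "resourceVersion": "4851291"},  # first update
    {"type": "MODIFIED", "resourceVersion": "4851292"},
    {"type": "DELETED",  "resourceVersion": "4851293"},
]
print([e["type"] for e in events_since(events, "4851291")])
# ['MODIFIED', 'DELETED'] -- the two notifications the test expects
```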
Apr 11 14:01:59.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:01:59.440: INFO: namespace watch-8222 deletion completed in 6.094072478s • [SLOW TEST:6.215 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:01:59.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 11 14:01:59.525: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3163,SelfLink:/api/v1/namespaces/watch-3163/configmaps/e2e-watch-test-watch-closed,UID:d0cfda71-3c05-4bd2-8d6b-dcd1c475eaf2,ResourceVersion:4851315,Generation:0,CreationTimestamp:2020-04-11 14:01:59 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 11 14:01:59.525: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3163,SelfLink:/api/v1/namespaces/watch-3163/configmaps/e2e-watch-test-watch-closed,UID:d0cfda71-3c05-4bd2-8d6b-dcd1c475eaf2,ResourceVersion:4851316,Generation:0,CreationTimestamp:2020-04-11 14:01:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 11 14:01:59.537: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3163,SelfLink:/api/v1/namespaces/watch-3163/configmaps/e2e-watch-test-watch-closed,UID:d0cfda71-3c05-4bd2-8d6b-dcd1c475eaf2,ResourceVersion:4851317,Generation:0,CreationTimestamp:2020-04-11 14:01:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 11 
14:01:59.537: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3163,SelfLink:/api/v1/namespaces/watch-3163/configmaps/e2e-watch-test-watch-closed,UID:d0cfda71-3c05-4bd2-8d6b-dcd1c475eaf2,ResourceVersion:4851318,Generation:0,CreationTimestamp:2020-04-11 14:01:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:01:59.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3163" for this suite. Apr 11 14:02:05.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:02:05.655: INFO: namespace watch-3163 deletion completed in 6.113809925s • [SLOW TEST:6.214 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 
14:02:05.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 11 14:02:05.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4299' Apr 11 14:02:05.991: INFO: stderr: "" Apr 11 14:02:05.991: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 11 14:02:05.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4299' Apr 11 14:02:06.106: INFO: stderr: "" Apr 11 14:02:06.106: INFO: stdout: "update-demo-nautilus-5w6kc update-demo-nautilus-nt9qg " Apr 11 14:02:06.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5w6kc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4299' Apr 11 14:02:06.195: INFO: stderr: "" Apr 11 14:02:06.195: INFO: stdout: "" Apr 11 14:02:06.195: INFO: update-demo-nautilus-5w6kc is created but not running Apr 11 14:02:11.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4299' Apr 11 14:02:11.296: INFO: stderr: "" Apr 11 14:02:11.296: INFO: stdout: "update-demo-nautilus-5w6kc update-demo-nautilus-nt9qg " Apr 11 14:02:11.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5w6kc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4299' Apr 11 14:02:11.384: INFO: stderr: "" Apr 11 14:02:11.384: INFO: stdout: "true" Apr 11 14:02:11.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5w6kc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4299' Apr 11 14:02:11.476: INFO: stderr: "" Apr 11 14:02:11.476: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 11 14:02:11.476: INFO: validating pod update-demo-nautilus-5w6kc Apr 11 14:02:11.480: INFO: got data: { "image": "nautilus.jpg" } Apr 11 14:02:11.480: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 11 14:02:11.480: INFO: update-demo-nautilus-5w6kc is verified up and running Apr 11 14:02:11.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nt9qg -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4299' Apr 11 14:02:11.572: INFO: stderr: "" Apr 11 14:02:11.572: INFO: stdout: "true" Apr 11 14:02:11.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nt9qg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4299' Apr 11 14:02:11.660: INFO: stderr: "" Apr 11 14:02:11.660: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 11 14:02:11.660: INFO: validating pod update-demo-nautilus-nt9qg Apr 11 14:02:11.664: INFO: got data: { "image": "nautilus.jpg" } Apr 11 14:02:11.664: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 11 14:02:11.664: INFO: update-demo-nautilus-nt9qg is verified up and running STEP: using delete to clean up resources Apr 11 14:02:11.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4299' Apr 11 14:02:11.765: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 11 14:02:11.765: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 11 14:02:11.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4299' Apr 11 14:02:11.863: INFO: stderr: "No resources found.\n" Apr 11 14:02:11.863: INFO: stdout: "" Apr 11 14:02:11.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4299 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 11 14:02:11.962: INFO: stderr: "" Apr 11 14:02:11.962: INFO: stdout: "update-demo-nautilus-5w6kc\nupdate-demo-nautilus-nt9qg\n" Apr 11 14:02:12.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4299' Apr 11 14:02:12.717: INFO: stderr: "No resources found.\n" Apr 11 14:02:12.717: INFO: stdout: "" Apr 11 14:02:12.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4299 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 11 14:02:12.813: INFO: stderr: "" Apr 11 14:02:12.813: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:02:12.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4299" for this suite. 
Apr 11 14:02:34.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:02:34.920: INFO: namespace kubectl-4299 deletion completed in 22.104546159s • [SLOW TEST:29.265 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:02:34.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6482 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 11 14:02:34.962: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 11 14:03:01.122: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.138:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6482 
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 11 14:03:01.122: INFO: >>> kubeConfig: /root/.kube/config I0411 14:03:01.147527 6 log.go:172] (0xc000617e40) (0xc002df0280) Create stream I0411 14:03:01.147556 6 log.go:172] (0xc000617e40) (0xc002df0280) Stream added, broadcasting: 1 I0411 14:03:01.149878 6 log.go:172] (0xc000617e40) Reply frame received for 1 I0411 14:03:01.149931 6 log.go:172] (0xc000617e40) (0xc002df0500) Create stream I0411 14:03:01.149945 6 log.go:172] (0xc000617e40) (0xc002df0500) Stream added, broadcasting: 3 I0411 14:03:01.150834 6 log.go:172] (0xc000617e40) Reply frame received for 3 I0411 14:03:01.150864 6 log.go:172] (0xc000617e40) (0xc002d78320) Create stream I0411 14:03:01.150881 6 log.go:172] (0xc000617e40) (0xc002d78320) Stream added, broadcasting: 5 I0411 14:03:01.151795 6 log.go:172] (0xc000617e40) Reply frame received for 5 I0411 14:03:01.254105 6 log.go:172] (0xc000617e40) Data frame received for 3 I0411 14:03:01.254160 6 log.go:172] (0xc000617e40) Data frame received for 5 I0411 14:03:01.254205 6 log.go:172] (0xc002d78320) (5) Data frame handling I0411 14:03:01.254243 6 log.go:172] (0xc002df0500) (3) Data frame handling I0411 14:03:01.254268 6 log.go:172] (0xc002df0500) (3) Data frame sent I0411 14:03:01.254280 6 log.go:172] (0xc000617e40) Data frame received for 3 I0411 14:03:01.254298 6 log.go:172] (0xc002df0500) (3) Data frame handling I0411 14:03:01.256657 6 log.go:172] (0xc000617e40) Data frame received for 1 I0411 14:03:01.256680 6 log.go:172] (0xc002df0280) (1) Data frame handling I0411 14:03:01.256705 6 log.go:172] (0xc002df0280) (1) Data frame sent I0411 14:03:01.256720 6 log.go:172] (0xc000617e40) (0xc002df0280) Stream removed, broadcasting: 1 I0411 14:03:01.256768 6 log.go:172] (0xc000617e40) Go away received I0411 14:03:01.256832 6 log.go:172] (0xc000617e40) (0xc002df0280) Stream removed, broadcasting: 1 I0411 14:03:01.256866 6 log.go:172] 
(0xc000617e40) (0xc002df0500) Stream removed, broadcasting: 3 I0411 14:03:01.256890 6 log.go:172] (0xc000617e40) (0xc002d78320) Stream removed, broadcasting: 5 Apr 11 14:03:01.256: INFO: Found all expected endpoints: [netserver-0] Apr 11 14:03:01.260: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.98:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6482 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 11 14:03:01.260: INFO: >>> kubeConfig: /root/.kube/config I0411 14:03:01.286833 6 log.go:172] (0xc000c32160) (0xc0020ec0a0) Create stream I0411 14:03:01.286859 6 log.go:172] (0xc000c32160) (0xc0020ec0a0) Stream added, broadcasting: 1 I0411 14:03:01.289395 6 log.go:172] (0xc000c32160) Reply frame received for 1 I0411 14:03:01.289435 6 log.go:172] (0xc000c32160) (0xc0020ec140) Create stream I0411 14:03:01.289449 6 log.go:172] (0xc000c32160) (0xc0020ec140) Stream added, broadcasting: 3 I0411 14:03:01.290312 6 log.go:172] (0xc000c32160) Reply frame received for 3 I0411 14:03:01.290349 6 log.go:172] (0xc000c32160) (0xc002df06e0) Create stream I0411 14:03:01.290363 6 log.go:172] (0xc000c32160) (0xc002df06e0) Stream added, broadcasting: 5 I0411 14:03:01.291143 6 log.go:172] (0xc000c32160) Reply frame received for 5 I0411 14:03:01.364998 6 log.go:172] (0xc000c32160) Data frame received for 3 I0411 14:03:01.365042 6 log.go:172] (0xc0020ec140) (3) Data frame handling I0411 14:03:01.365053 6 log.go:172] (0xc0020ec140) (3) Data frame sent I0411 14:03:01.365060 6 log.go:172] (0xc000c32160) Data frame received for 3 I0411 14:03:01.365068 6 log.go:172] (0xc0020ec140) (3) Data frame handling I0411 14:03:01.365092 6 log.go:172] (0xc000c32160) Data frame received for 5 I0411 14:03:01.365240 6 log.go:172] (0xc002df06e0) (5) Data frame handling I0411 14:03:01.367014 6 log.go:172] (0xc000c32160) Data frame received for 1 I0411 
14:03:01.367031 6 log.go:172] (0xc0020ec0a0) (1) Data frame handling I0411 14:03:01.367052 6 log.go:172] (0xc0020ec0a0) (1) Data frame sent I0411 14:03:01.367063 6 log.go:172] (0xc000c32160) (0xc0020ec0a0) Stream removed, broadcasting: 1 I0411 14:03:01.367192 6 log.go:172] (0xc000c32160) Go away received I0411 14:03:01.367255 6 log.go:172] (0xc000c32160) (0xc0020ec0a0) Stream removed, broadcasting: 1 I0411 14:03:01.367286 6 log.go:172] (0xc000c32160) (0xc0020ec140) Stream removed, broadcasting: 3 I0411 14:03:01.367302 6 log.go:172] (0xc000c32160) (0xc002df06e0) Stream removed, broadcasting: 5 Apr 11 14:03:01.367: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:03:01.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6482" for this suite. Apr 11 14:03:23.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:03:23.463: INFO: namespace pod-network-test-6482 deletion completed in 22.090993136s • [SLOW TEST:48.542 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:03:23.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 11 14:03:23.551: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:03:29.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2274" for this suite. 
Apr 11 14:03:35.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:03:35.342: INFO: namespace init-container-2274 deletion completed in 6.094546214s • [SLOW TEST:11.879 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:03:35.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 11 14:03:35.416: INFO: Waiting up to 5m0s for pod "downward-api-c8f2765a-55c3-4c7b-ba64-9b8f60f1a96f" in namespace "downward-api-6345" to be "success or failure" Apr 11 14:03:35.422: INFO: Pod "downward-api-c8f2765a-55c3-4c7b-ba64-9b8f60f1a96f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.611966ms Apr 11 14:03:37.426: INFO: Pod "downward-api-c8f2765a-55c3-4c7b-ba64-9b8f60f1a96f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00964388s Apr 11 14:03:39.430: INFO: Pod "downward-api-c8f2765a-55c3-4c7b-ba64-9b8f60f1a96f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013641304s STEP: Saw pod success Apr 11 14:03:39.430: INFO: Pod "downward-api-c8f2765a-55c3-4c7b-ba64-9b8f60f1a96f" satisfied condition "success or failure" Apr 11 14:03:39.433: INFO: Trying to get logs from node iruya-worker pod downward-api-c8f2765a-55c3-4c7b-ba64-9b8f60f1a96f container dapi-container: STEP: delete the pod Apr 11 14:03:39.453: INFO: Waiting for pod downward-api-c8f2765a-55c3-4c7b-ba64-9b8f60f1a96f to disappear Apr 11 14:03:39.458: INFO: Pod downward-api-c8f2765a-55c3-4c7b-ba64-9b8f60f1a96f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:03:39.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6345" for this suite. Apr 11 14:03:45.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:03:45.595: INFO: namespace downward-api-6345 deletion completed in 6.133889863s • [SLOW TEST:10.253 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:03:45.596: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 11 14:03:45.634: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f42805b-4a8f-494e-b458-16d597cf7fa4" in namespace "downward-api-4311" to be "success or failure" Apr 11 14:03:45.678: INFO: Pod "downwardapi-volume-7f42805b-4a8f-494e-b458-16d597cf7fa4": Phase="Pending", Reason="", readiness=false. Elapsed: 44.274606ms Apr 11 14:03:47.683: INFO: Pod "downwardapi-volume-7f42805b-4a8f-494e-b458-16d597cf7fa4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048905171s Apr 11 14:03:49.687: INFO: Pod "downwardapi-volume-7f42805b-4a8f-494e-b458-16d597cf7fa4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.052937399s STEP: Saw pod success Apr 11 14:03:49.687: INFO: Pod "downwardapi-volume-7f42805b-4a8f-494e-b458-16d597cf7fa4" satisfied condition "success or failure" Apr 11 14:03:49.689: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7f42805b-4a8f-494e-b458-16d597cf7fa4 container client-container: STEP: delete the pod Apr 11 14:03:49.707: INFO: Waiting for pod downwardapi-volume-7f42805b-4a8f-494e-b458-16d597cf7fa4 to disappear Apr 11 14:03:49.711: INFO: Pod downwardapi-volume-7f42805b-4a8f-494e-b458-16d597cf7fa4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:03:49.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4311" for this suite. Apr 11 14:03:55.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:03:55.817: INFO: namespace downward-api-4311 deletion completed in 6.10127753s • [SLOW TEST:10.221 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:03:55.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a 
default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 11 14:03:55.876: INFO: Waiting up to 5m0s for pod "downward-api-74ebc889-2a79-4ae4-8582-f8a534264fe8" in namespace "downward-api-4855" to be "success or failure" Apr 11 14:03:55.879: INFO: Pod "downward-api-74ebc889-2a79-4ae4-8582-f8a534264fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.422891ms Apr 11 14:03:57.942: INFO: Pod "downward-api-74ebc889-2a79-4ae4-8582-f8a534264fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065950446s Apr 11 14:03:59.946: INFO: Pod "downward-api-74ebc889-2a79-4ae4-8582-f8a534264fe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070399155s STEP: Saw pod success Apr 11 14:03:59.946: INFO: Pod "downward-api-74ebc889-2a79-4ae4-8582-f8a534264fe8" satisfied condition "success or failure" Apr 11 14:03:59.950: INFO: Trying to get logs from node iruya-worker2 pod downward-api-74ebc889-2a79-4ae4-8582-f8a534264fe8 container dapi-container: STEP: delete the pod Apr 11 14:03:59.986: INFO: Waiting for pod downward-api-74ebc889-2a79-4ae4-8582-f8a534264fe8 to disappear Apr 11 14:04:00.005: INFO: Pod downward-api-74ebc889-2a79-4ae4-8582-f8a534264fe8 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:04:00.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4855" for this suite. 
Apr 11 14:04:06.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:04:06.139: INFO: namespace downward-api-4855 deletion completed in 6.107244785s • [SLOW TEST:10.322 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:04:06.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Apr 11 14:04:06.186: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:04:06.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5937" for this suite. 
Apr 11 14:04:12.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:04:12.361: INFO: namespace kubectl-5937 deletion completed in 6.090532275s
• [SLOW TEST:6.221 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:04:12.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Apr 11 14:04:12.425: INFO: Waiting up to 5m0s for pod "client-containers-51ac654b-0e5d-47c7-9f63-35668242749a" in namespace "containers-7650" to be "success or failure"
Apr 11 14:04:12.429: INFO: Pod "client-containers-51ac654b-0e5d-47c7-9f63-35668242749a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.991677ms
Apr 11 14:04:14.433: INFO: Pod "client-containers-51ac654b-0e5d-47c7-9f63-35668242749a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008175661s
Apr 11 14:04:16.438: INFO: Pod "client-containers-51ac654b-0e5d-47c7-9f63-35668242749a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012852318s
STEP: Saw pod success
Apr 11 14:04:16.438: INFO: Pod "client-containers-51ac654b-0e5d-47c7-9f63-35668242749a" satisfied condition "success or failure"
Apr 11 14:04:16.442: INFO: Trying to get logs from node iruya-worker2 pod client-containers-51ac654b-0e5d-47c7-9f63-35668242749a container test-container:
STEP: delete the pod
Apr 11 14:04:16.460: INFO: Waiting for pod client-containers-51ac654b-0e5d-47c7-9f63-35668242749a to disappear
Apr 11 14:04:16.464: INFO: Pod client-containers-51ac654b-0e5d-47c7-9f63-35668242749a no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:04:16.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7650" for this suite.
Apr 11 14:04:22.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:04:22.577: INFO: namespace containers-7650 deletion completed in 6.109825539s
• [SLOW TEST:10.216 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:04:22.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Apr 11 14:04:26.664: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Apr 11 14:04:36.762: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:04:36.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1582" for this suite.
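The "Delete Grace Period" test above deletes a pod gracefully and then verifies the kubelet observed the termination notice. As a minimal illustrative sketch (not code from the e2e suite), this is the shape of the `DeleteOptions` body a client sends to request a graceful delete; the 30-second grace period is an assumed example value, not one taken from this run:

```python
import json

def graceful_delete_body(grace_period_seconds: int) -> str:
    """Build the DeleteOptions body for a graceful pod delete.

    The kubelet gets grace_period_seconds to stop the pod's containers
    before the pod object is removed from the API server.
    """
    options = {
        "apiVersion": "v1",
        "kind": "DeleteOptions",
        "gracePeriodSeconds": grace_period_seconds,
    }
    return json.dumps(options)

body = graceful_delete_body(30)
print(body)
```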
Apr 11 14:04:42.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:04:42.885: INFO: namespace pods-1582 deletion completed in 6.115008128s
• [SLOW TEST:20.307 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:04:42.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 11 14:04:42.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3696'
Apr 11 14:04:43.070: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 11 14:04:43.070: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Apr 11 14:04:43.108: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-x5d8v]
Apr 11 14:04:43.108: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-x5d8v" in namespace "kubectl-3696" to be "running and ready"
Apr 11 14:04:43.135: INFO: Pod "e2e-test-nginx-rc-x5d8v": Phase="Pending", Reason="", readiness=false. Elapsed: 26.50656ms
Apr 11 14:04:45.144: INFO: Pod "e2e-test-nginx-rc-x5d8v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035976308s
Apr 11 14:04:47.148: INFO: Pod "e2e-test-nginx-rc-x5d8v": Phase="Running", Reason="", readiness=true. Elapsed: 4.040307322s
Apr 11 14:04:47.148: INFO: Pod "e2e-test-nginx-rc-x5d8v" satisfied condition "running and ready"
Apr 11 14:04:47.148: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-x5d8v]
Apr 11 14:04:47.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-3696'
Apr 11 14:04:47.266: INFO: stderr: ""
Apr 11 14:04:47.266: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Apr 11 14:04:47.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3696'
Apr 11 14:04:47.356: INFO: stderr: ""
Apr 11 14:04:47.357: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:04:47.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3696" for this suite.
Apr 11 14:04:53.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:04:53.444: INFO: namespace kubectl-3696 deletion completed in 6.084800206s
• [SLOW TEST:10.559 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:04:53.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Apr 11 14:04:53.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5966'
Apr 11 14:04:53.798: INFO: stderr: ""
Apr 11 14:04:53.798: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 11 14:04:54.802: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 14:04:54.802: INFO: Found 0 / 1
Apr 11 14:04:55.803: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 14:04:55.803: INFO: Found 0 / 1
Apr 11 14:04:56.803: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 14:04:56.803: INFO: Found 1 / 1
Apr 11 14:04:56.803: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Apr 11 14:04:56.807: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 14:04:56.807: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 11 14:04:56.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-xlqk5 --namespace=kubectl-5966 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 11 14:04:56.912: INFO: stderr: ""
Apr 11 14:04:56.912: INFO: stdout: "pod/redis-master-xlqk5 patched\n"
STEP: checking annotations
Apr 11 14:04:56.916: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 14:04:56.916: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
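The "Kubectl patch" test above applies the strategic-merge patch {"metadata":{"annotations":{"x":"y"}}} to the pod. A minimal sketch of how that map-merge behaves (the pod dict and its "existing" annotation are illustrative stand-ins, not the real pod object):

```python
import copy
import json

# The patch body the test passes to `kubectl patch -p`.
patch = {"metadata": {"annotations": {"x": "y"}}}

def apply_annotation_patch(pod: dict, patch: dict) -> dict:
    """Simplified strategic merge for annotations: maps are merged
    key-by-key, so existing annotations are preserved and patched
    keys are added or overwritten."""
    result = copy.deepcopy(pod)
    annotations = result.setdefault("metadata", {}).setdefault("annotations", {})
    annotations.update(patch["metadata"]["annotations"])
    return result

pod = {"metadata": {"name": "redis-master-xlqk5",
                    "annotations": {"existing": "kept"}}}
patched = apply_annotation_patch(pod, patch)
print(json.dumps(patched["metadata"]["annotations"], sort_keys=True))
```

The original dict is left untouched (the patch is applied to a deep copy), mirroring how the API server returns a new object revision rather than mutating the client's copy.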
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:04:56.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5966" for this suite. Apr 11 14:05:18.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:05:19.050: INFO: namespace kubectl-5966 deletion completed in 22.131570013s • [SLOW TEST:25.605 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:05:19.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 11 14:05:19.132: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 11 14:05:24.137: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 11 14:05:24.137: INFO: Waiting 
for pods owned by replica set "test-rollover-controller" to become ready Apr 11 14:05:26.142: INFO: Creating deployment "test-rollover-deployment" Apr 11 14:05:26.151: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 11 14:05:28.157: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 11 14:05:28.163: INFO: Ensure that both replica sets have 1 created replica Apr 11 14:05:28.166: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 11 14:05:28.170: INFO: Updating deployment test-rollover-deployment Apr 11 14:05:28.170: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 11 14:05:30.241: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 11 14:05:30.247: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 11 14:05:30.252: INFO: all replica sets need to contain the pod-template-hash label Apr 11 14:05:30.252: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210728, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 11 14:05:32.261: INFO: all 
replica sets need to contain the pod-template-hash label Apr 11 14:05:32.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210731, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 11 14:05:34.261: INFO: all replica sets need to contain the pod-template-hash label Apr 11 14:05:34.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210731, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 11 14:05:36.261: INFO: all replica sets need to contain the pod-template-hash label Apr 11 14:05:36.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210731, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 11 14:05:38.260: INFO: all replica sets need to contain the pod-template-hash label Apr 11 14:05:38.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210731, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 11 14:05:40.259: INFO: all replica sets need to contain the pod-template-hash label Apr 11 14:05:40.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210731, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722210726, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 11 14:05:42.260: INFO: Apr 11 14:05:42.260: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 11 14:05:42.268: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5987,SelfLink:/apis/apps/v1/namespaces/deployment-5987/deployments/test-rollover-deployment,UID:f7ce0659-14ac-4d4a-839b-03440e399c68,ResourceVersion:4852170,Generation:2,CreationTimestamp:2020-04-11 14:05:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-11 14:05:26 +0000 UTC 2020-04-11 14:05:26 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-11 14:05:41 +0000 UTC 2020-04-11 14:05:26 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 11 14:05:42.272: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5987,SelfLink:/apis/apps/v1/namespaces/deployment-5987/replicasets/test-rollover-deployment-854595fc44,UID:d3468f0a-4b28-473d-83d1-620cec3fecce,ResourceVersion:4852159,Generation:2,CreationTimestamp:2020-04-11 14:05:28 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f7ce0659-14ac-4d4a-839b-03440e399c68 0xc0029a4ec7 0xc0029a4ec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 11 14:05:42.272: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 11 14:05:42.272: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5987,SelfLink:/apis/apps/v1/namespaces/deployment-5987/replicasets/test-rollover-controller,UID:ddfd8511-76d1-4aec-868e-26185cfabde4,ResourceVersion:4852168,Generation:2,CreationTimestamp:2020-04-11 14:05:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f7ce0659-14ac-4d4a-839b-03440e399c68 0xc0029a4ddf 0xc0029a4df0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 11 14:05:42.272: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5987,SelfLink:/apis/apps/v1/namespaces/deployment-5987/replicasets/test-rollover-deployment-9b8b997cf,UID:53906daa-6530-4426-8470-01aa0bfa8567,ResourceVersion:4852124,Generation:2,CreationTimestamp:2020-04-11 14:05:26 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f7ce0659-14ac-4d4a-839b-03440e399c68 0xc0029a4f90 0xc0029a4f91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 11 14:05:42.276: INFO: Pod "test-rollover-deployment-854595fc44-x7pww" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-x7pww,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5987,SelfLink:/api/v1/namespaces/deployment-5987/pods/test-rollover-deployment-854595fc44-x7pww,UID:fb9bb53c-c7d2-48d6-866a-c4612e261a35,ResourceVersion:4852136,Generation:0,CreationTimestamp:2020-04-11 14:05:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 d3468f0a-4b28-473d-83d1-620cec3fecce 0xc0029a5b77 0xc0029a5b78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tgxwh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tgxwh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-tgxwh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029a5bf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029a5c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:05:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:05:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:05:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:05:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.146,StartTime:2020-04-11 14:05:28 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-11 14:05:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://010af7ee5ac325edc81747f02c9428f1eef9dd31b2e28b7a3c16b42dfc0a682d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:05:42.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5987" for this suite. Apr 11 14:05:48.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:05:48.373: INFO: namespace deployment-5987 deletion completed in 6.09394211s • [SLOW TEST:29.323 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:05:48.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-1d400761-b0a9-49be-9b66-db9e792b6219 STEP: Creating a pod to test consume secrets Apr 11 14:05:48.459: 
INFO: Waiting up to 5m0s for pod "pod-secrets-eb82b97b-f203-49eb-937d-a41faad32f34" in namespace "secrets-7998" to be "success or failure" Apr 11 14:05:48.461: INFO: Pod "pod-secrets-eb82b97b-f203-49eb-937d-a41faad32f34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.486743ms Apr 11 14:05:50.466: INFO: Pod "pod-secrets-eb82b97b-f203-49eb-937d-a41faad32f34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007026362s Apr 11 14:05:52.469: INFO: Pod "pod-secrets-eb82b97b-f203-49eb-937d-a41faad32f34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01056126s STEP: Saw pod success Apr 11 14:05:52.469: INFO: Pod "pod-secrets-eb82b97b-f203-49eb-937d-a41faad32f34" satisfied condition "success or failure" Apr 11 14:05:52.471: INFO: Trying to get logs from node iruya-worker pod pod-secrets-eb82b97b-f203-49eb-937d-a41faad32f34 container secret-volume-test: STEP: delete the pod Apr 11 14:05:52.488: INFO: Waiting for pod pod-secrets-eb82b97b-f203-49eb-937d-a41faad32f34 to disappear Apr 11 14:05:52.492: INFO: Pod pod-secrets-eb82b97b-f203-49eb-937d-a41faad32f34 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:05:52.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7998" for this suite. 
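[Editor's note: the Secrets volume test above creates a pod that mounts a generated Secret and checks its file content. A minimal sketch of the objects involved — names, the test image, and its args are illustrative assumptions, not the generated values from this run:]

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test            # illustrative; the run used a generated UUID name
data:
  data-1: dmFsdWUtMQ==         # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    # assumption: the mounttest helper image this e2e test family uses
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0
    args: ["--file_content=/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
```

The pod runs to completion ("success or failure" in the log means Succeeded or Failed), and the test then reads the container log to verify the mounted file's content.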
Apr 11 14:05:58.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:05:58.636: INFO: namespace secrets-7998 deletion completed in 6.139711943s • [SLOW TEST:10.263 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:05:58.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-3298 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3298 to expose endpoints map[] Apr 11 14:05:58.706: INFO: Get endpoints failed (12.805677ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 11 14:05:59.710: INFO: successfully validated that service multi-endpoint-test in namespace services-3298 exposes endpoints map[] (1.01680578s elapsed) STEP: Creating pod pod1 in namespace services-3298 STEP: waiting up to 3m0s for service 
multi-endpoint-test in namespace services-3298 to expose endpoints map[pod1:[100]] Apr 11 14:06:02.765: INFO: successfully validated that service multi-endpoint-test in namespace services-3298 exposes endpoints map[pod1:[100]] (3.04815141s elapsed) STEP: Creating pod pod2 in namespace services-3298 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3298 to expose endpoints map[pod1:[100] pod2:[101]] Apr 11 14:06:05.888: INFO: successfully validated that service multi-endpoint-test in namespace services-3298 exposes endpoints map[pod1:[100] pod2:[101]] (3.117948849s elapsed) STEP: Deleting pod pod1 in namespace services-3298 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3298 to expose endpoints map[pod2:[101]] Apr 11 14:06:05.943: INFO: successfully validated that service multi-endpoint-test in namespace services-3298 exposes endpoints map[pod2:[101]] (50.716896ms elapsed) STEP: Deleting pod pod2 in namespace services-3298 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3298 to expose endpoints map[] Apr 11 14:06:07.028: INFO: successfully validated that service multi-endpoint-test in namespace services-3298 exposes endpoints map[] (1.079708644s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:06:07.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3298" for this suite. 
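[Editor's note: the endpoints maps in the log, `map[pod1:[100] pod2:[101]]`, imply a two-port Service whose backing pods each serve one of the target ports. A rough sketch, with selector and port names assumed for illustration:]

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test   # illustrative selector matching pod1/pod2 labels
  ports:
  - name: portname1
    port: 80
    targetPort: 100            # pod1 exposes container port 100 -> endpoint pod1:[100]
  - name: portname2
    port: 81
    targetPort: 101            # pod2 exposes container port 101 -> endpoint pod2:[101]
```

As pods matching the selector are created and deleted, the Endpoints object tracks only the ports each ready pod actually exposes, which is exactly the add/remove sequence validated above.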
Apr 11 14:06:13.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:06:13.217: INFO: namespace services-3298 deletion completed in 6.154327772s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:14.581 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:06:13.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-4feac2bd-356f-46a6-aeb5-d9d6380dca2b [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:06:13.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4330" for this suite. 
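[Editor's note: the ConfigMap test above is a negative test — creation is expected to fail. A manifest like the following (name shortened for illustration) is rejected by API server validation because a data key must be a non-empty, valid key name:]

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey
data:
  "": value-1    # empty key: the API server rejects this with a validation error
```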
Apr 11 14:06:19.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:06:19.432: INFO: namespace configmap-4330 deletion completed in 6.100859321s • [SLOW TEST:6.213 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:06:19.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 11 14:06:19.465: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 14:06:19.485: INFO: Waiting for terminating namespaces to be deleted... 
Apr 11 14:06:19.487: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 11 14:06:19.492: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 11 14:06:19.492: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 14:06:19.492: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 11 14:06:19.492: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 14:06:19.492: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 11 14:06:19.498: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 11 14:06:19.498: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 14:06:19.498: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 11 14:06:19.498: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 14:06:19.498: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 11 14:06:19.498: INFO: Container coredns ready: true, restart count 0 Apr 11 14:06:19.498: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 11 14:06:19.498: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1604c90215c7253c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
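[Editor's note: the FailedScheduling event above ("0/3 nodes are available: 3 node(s) didn't match node selector") is produced by a pod whose nodeSelector matches no node label. A sketch, with the label and image as illustrative assumptions:]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label: nonexistent-value   # no node in the cluster carries this label
  containers:
  - name: restricted-pod
    image: k8s.gcr.io/pause:3.1   # assumption: any minimal image works here
```

The pod stays Pending forever; the test passes by observing the FailedScheduling event rather than by the pod ever running.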
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:06:20.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4973" for this suite. Apr 11 14:06:26.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:06:26.634: INFO: namespace sched-pred-4973 deletion completed in 6.092592065s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.202 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:06:26.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 11 14:06:29.714: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:06:29.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7817" for this suite. Apr 11 14:06:35.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:06:35.818: INFO: namespace container-runtime-7817 deletion completed in 6.085393053s • [SLOW TEST:9.184 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:06:35.818: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 11 14:06:35.864: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:06:40.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8462" for this suite. Apr 11 14:07:26.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:07:26.112: INFO: namespace pods-8462 deletion completed in 46.090898148s • [SLOW TEST:50.294 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:07:26.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default 
service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-28f0aaf2-0a55-47d5-9f96-3ea03ae02a62 Apr 11 14:07:26.196: INFO: Pod name my-hostname-basic-28f0aaf2-0a55-47d5-9f96-3ea03ae02a62: Found 0 pods out of 1 Apr 11 14:07:31.201: INFO: Pod name my-hostname-basic-28f0aaf2-0a55-47d5-9f96-3ea03ae02a62: Found 1 pods out of 1 Apr 11 14:07:31.201: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-28f0aaf2-0a55-47d5-9f96-3ea03ae02a62" are running Apr 11 14:07:31.204: INFO: Pod "my-hostname-basic-28f0aaf2-0a55-47d5-9f96-3ea03ae02a62-lmfcc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-11 14:07:26 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-11 14:07:28 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-11 14:07:28 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-11 14:07:26 +0000 UTC Reason: Message:}]) Apr 11 14:07:31.204: INFO: Trying to dial the pod Apr 11 14:07:36.216: INFO: Controller my-hostname-basic-28f0aaf2-0a55-47d5-9f96-3ea03ae02a62: Got expected result from replica 1 [my-hostname-basic-28f0aaf2-0a55-47d5-9f96-3ea03ae02a62-lmfcc]: "my-hostname-basic-28f0aaf2-0a55-47d5-9f96-3ea03ae02a62-lmfcc", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:07:36.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-445" for this 
suite. Apr 11 14:07:42.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:07:42.313: INFO: namespace replication-controller-445 deletion completed in 6.090768927s • [SLOW TEST:16.201 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:07:42.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-e7533384-ef58-4158-b694-ac041fe9d6de STEP: Creating configMap with name cm-test-opt-upd-ce3cbf21-258a-4dfd-a514-b49fdb2c0652 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-e7533384-ef58-4158-b694-ac041fe9d6de STEP: Updating configmap cm-test-opt-upd-ce3cbf21-258a-4dfd-a514-b49fdb2c0652 STEP: Creating configMap with name cm-test-opt-create-2f8bd459-65ab-407e-820e-5e6e875f6500 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:09:10.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4588" for this suite. Apr 11 14:09:32.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:09:32.966: INFO: namespace projected-4588 deletion completed in 22.08641159s • [SLOW TEST:110.652 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:09:32.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 11 14:09:33.063: INFO: Waiting up to 5m0s for pod "downward-api-ce56dcd6-679e-49b7-943f-5de5b630ee95" in namespace "downward-api-7469" to be "success or failure" Apr 11 14:09:33.067: INFO: Pod "downward-api-ce56dcd6-679e-49b7-943f-5de5b630ee95": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.873906ms Apr 11 14:09:35.071: INFO: Pod "downward-api-ce56dcd6-679e-49b7-943f-5de5b630ee95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007806589s Apr 11 14:09:37.075: INFO: Pod "downward-api-ce56dcd6-679e-49b7-943f-5de5b630ee95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012208133s STEP: Saw pod success Apr 11 14:09:37.075: INFO: Pod "downward-api-ce56dcd6-679e-49b7-943f-5de5b630ee95" satisfied condition "success or failure" Apr 11 14:09:37.079: INFO: Trying to get logs from node iruya-worker2 pod downward-api-ce56dcd6-679e-49b7-943f-5de5b630ee95 container dapi-container: STEP: delete the pod Apr 11 14:09:37.140: INFO: Waiting for pod downward-api-ce56dcd6-679e-49b7-943f-5de5b630ee95 to disappear Apr 11 14:09:37.146: INFO: Pod downward-api-ce56dcd6-679e-49b7-943f-5de5b630ee95 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:09:37.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7469" for this suite. 
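[Editor's note: the Downward API test above relies on the documented behavior that `resourceFieldRef` for `limits.cpu`/`limits.memory` falls back to node allocatable when the container declares no limits. A sketch of such a pod, with image and env var names assumed for illustration:]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox              # assumption: any shell-capable image
    command: ["sh", "-c", "env"]
    # no resources.limits set, so these resolve to node allocatable values
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```

The test then greps the pod's log output for the expected allocatable-derived values.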
Apr 11 14:09:43.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:09:43.239: INFO: namespace downward-api-7469 deletion completed in 6.089484815s • [SLOW TEST:10.272 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:09:43.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 11 14:09:43.297: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 5.796017ms)
Apr 11 14:09:43.300: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.142479ms)
Apr 11 14:09:43.304: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.664227ms)
Apr 11 14:09:43.307: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.886306ms)
Apr 11 14:09:43.310: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.327004ms)
Apr 11 14:09:43.313: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.208626ms)
Apr 11 14:09:43.316: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.070744ms)
Apr 11 14:09:43.320: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.321351ms)
Apr 11 14:09:43.323: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.417547ms)
Apr 11 14:09:43.327: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.42456ms)
Apr 11 14:09:43.330: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.44692ms)
Apr 11 14:09:43.333: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.35596ms)
Apr 11 14:09:43.348: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 15.029954ms)
Apr 11 14:09:43.352: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.542817ms)
Apr 11 14:09:43.355: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.240727ms)
Apr 11 14:09:43.359: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.300685ms)
Apr 11 14:09:43.362: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.975964ms)
Apr 11 14:09:43.365: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.144368ms)
Apr 11 14:09:43.368: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.307628ms)
Apr 11 14:09:43.371: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.197473ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:09:43.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-734" for this suite. Apr 11 14:09:49.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:09:49.457: INFO: namespace proxy-734 deletion completed in 6.081796477s • [SLOW TEST:6.218 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:09:49.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image 
docker.io/library/nginx:1.14-alpine Apr 11 14:09:49.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-1894' Apr 11 14:09:51.981: INFO: stderr: "" Apr 11 14:09:51.981: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Apr 11 14:09:57.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-1894 -o json' Apr 11 14:09:57.123: INFO: stderr: "" Apr 11 14:09:57.123: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-11T14:09:51Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-1894\",\n \"resourceVersion\": \"4852957\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1894/pods/e2e-test-nginx-pod\",\n \"uid\": \"9111f251-5823-46e6-b460-303d8b72d450\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-w9hc4\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": 
\"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-w9hc4\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-w9hc4\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-11T14:09:52Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-11T14:09:55Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-11T14:09:55Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-11T14:09:51Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://b465e5aebb2ffd6cc7a532e33dc533725b4abf7f7eada982f14bb5d1ee0ddbf5\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-11T14:09:54Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.107\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-11T14:09:52Z\"\n }\n}\n" STEP: replace the image in the pod Apr 11 14:09:57.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1894' Apr 11 14:09:57.401: INFO: stderr: "" Apr 11 14:09:57.401: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image 
docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Apr 11 14:09:57.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1894' Apr 11 14:10:02.175: INFO: stderr: "" Apr 11 14:10:02.175: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:10:02.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1894" for this suite. Apr 11 14:10:08.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:10:08.294: INFO: namespace kubectl-1894 deletion completed in 6.094791154s • [SLOW TEST:18.837 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:10:08.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned 
in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2765 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 11 14:10:08.380: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 11 14:10:36.503: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.152:8080/dial?request=hostName&protocol=udp&host=10.244.1.151&port=8081&tries=1'] Namespace:pod-network-test-2765 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 11 14:10:36.503: INFO: >>> kubeConfig: /root/.kube/config I0411 14:10:36.541331 6 log.go:172] (0xc0010eca50) (0xc002683540) Create stream I0411 14:10:36.541357 6 log.go:172] (0xc0010eca50) (0xc002683540) Stream added, broadcasting: 1 I0411 14:10:36.543356 6 log.go:172] (0xc0010eca50) Reply frame received for 1 I0411 14:10:36.543398 6 log.go:172] (0xc0010eca50) (0xc001b45b80) Create stream I0411 14:10:36.543414 6 log.go:172] (0xc0010eca50) (0xc001b45b80) Stream added, broadcasting: 3 I0411 14:10:36.544454 6 log.go:172] (0xc0010eca50) Reply frame received for 3 I0411 14:10:36.544495 6 log.go:172] (0xc0010eca50) (0xc0026835e0) Create stream I0411 14:10:36.544510 6 log.go:172] (0xc0010eca50) (0xc0026835e0) Stream added, broadcasting: 5 I0411 14:10:36.545884 6 log.go:172] (0xc0010eca50) Reply frame received for 5 I0411 14:10:36.635232 6 log.go:172] (0xc0010eca50) Data frame received for 3 I0411 14:10:36.635266 6 log.go:172] (0xc001b45b80) (3) Data frame handling I0411 14:10:36.635280 6 log.go:172] (0xc001b45b80) (3) Data frame sent I0411 14:10:36.635842 6 log.go:172] (0xc0010eca50) Data frame received for 5 I0411 14:10:36.635910 6 log.go:172] (0xc0026835e0) (5) Data frame 
handling I0411 14:10:36.635941 6 log.go:172] (0xc0010eca50) Data frame received for 3 I0411 14:10:36.635963 6 log.go:172] (0xc001b45b80) (3) Data frame handling I0411 14:10:36.637968 6 log.go:172] (0xc0010eca50) Data frame received for 1 I0411 14:10:36.637996 6 log.go:172] (0xc002683540) (1) Data frame handling I0411 14:10:36.638013 6 log.go:172] (0xc002683540) (1) Data frame sent I0411 14:10:36.638048 6 log.go:172] (0xc0010eca50) (0xc002683540) Stream removed, broadcasting: 1 I0411 14:10:36.638099 6 log.go:172] (0xc0010eca50) Go away received I0411 14:10:36.638185 6 log.go:172] (0xc0010eca50) (0xc002683540) Stream removed, broadcasting: 1 I0411 14:10:36.638213 6 log.go:172] (0xc0010eca50) (0xc001b45b80) Stream removed, broadcasting: 3 I0411 14:10:36.638249 6 log.go:172] (0xc0010eca50) (0xc0026835e0) Stream removed, broadcasting: 5 Apr 11 14:10:36.638: INFO: Waiting for endpoints: map[] Apr 11 14:10:36.641: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.152:8080/dial?request=hostName&protocol=udp&host=10.244.2.108&port=8081&tries=1'] Namespace:pod-network-test-2765 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 11 14:10:36.641: INFO: >>> kubeConfig: /root/.kube/config I0411 14:10:36.673873 6 log.go:172] (0xc001bb0b00) (0xc001ea6000) Create stream I0411 14:10:36.673895 6 log.go:172] (0xc001bb0b00) (0xc001ea6000) Stream added, broadcasting: 1 I0411 14:10:36.676413 6 log.go:172] (0xc001bb0b00) Reply frame received for 1 I0411 14:10:36.676465 6 log.go:172] (0xc001bb0b00) (0xc000636aa0) Create stream I0411 14:10:36.676485 6 log.go:172] (0xc001bb0b00) (0xc000636aa0) Stream added, broadcasting: 3 I0411 14:10:36.677795 6 log.go:172] (0xc001bb0b00) Reply frame received for 3 I0411 14:10:36.677863 6 log.go:172] (0xc001bb0b00) (0xc001ea60a0) Create stream I0411 14:10:36.677880 6 log.go:172] (0xc001bb0b00) (0xc001ea60a0) Stream added, broadcasting: 5 I0411 
14:10:36.678841 6 log.go:172] (0xc001bb0b00) Reply frame received for 5 I0411 14:10:36.756615 6 log.go:172] (0xc001bb0b00) Data frame received for 3 I0411 14:10:36.756642 6 log.go:172] (0xc000636aa0) (3) Data frame handling I0411 14:10:36.756674 6 log.go:172] (0xc000636aa0) (3) Data frame sent I0411 14:10:36.757549 6 log.go:172] (0xc001bb0b00) Data frame received for 3 I0411 14:10:36.757589 6 log.go:172] (0xc000636aa0) (3) Data frame handling I0411 14:10:36.757615 6 log.go:172] (0xc001bb0b00) Data frame received for 5 I0411 14:10:36.757625 6 log.go:172] (0xc001ea60a0) (5) Data frame handling I0411 14:10:36.759365 6 log.go:172] (0xc001bb0b00) Data frame received for 1 I0411 14:10:36.759382 6 log.go:172] (0xc001ea6000) (1) Data frame handling I0411 14:10:36.759394 6 log.go:172] (0xc001ea6000) (1) Data frame sent I0411 14:10:36.759409 6 log.go:172] (0xc001bb0b00) (0xc001ea6000) Stream removed, broadcasting: 1 I0411 14:10:36.759424 6 log.go:172] (0xc001bb0b00) Go away received I0411 14:10:36.759559 6 log.go:172] (0xc001bb0b00) (0xc001ea6000) Stream removed, broadcasting: 1 I0411 14:10:36.759581 6 log.go:172] (0xc001bb0b00) (0xc000636aa0) Stream removed, broadcasting: 3 I0411 14:10:36.759592 6 log.go:172] (0xc001bb0b00) (0xc001ea60a0) Stream removed, broadcasting: 5 Apr 11 14:10:36.759: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:10:36.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2765" for this suite. 
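[Editor's note] The intra-pod UDP check above drives a `/dial` endpoint on the host test container, which in turn probes the netserver pod. A minimal sketch of how the logged URL is assembled (the helper name `build_dial_url` is an assumption; the `:8080` proxy port and query parameters are read off the curl command in the log, not from the framework source):

```shell
# Hypothetical helper reconstructing the dial URL seen in the log above.
# The host-test-container-pod listens on :8080 and relays a UDP probe to the
# target netserver pod on :8081, asking it to report its hostName.
build_dial_url() {
  local proxy_ip=$1 target_ip=$2 target_port=$3 proto=$4
  printf "http://%s:8080/dial?request=hostName&protocol=%s&host=%s&port=%s&tries=1" \
    "$proxy_ip" "$proto" "$target_ip" "$target_port"
}

build_dial_url 10.244.1.152 10.244.1.151 8081 udp
```

The second probe in the log reuses the same pattern with the other pod's IP (10.244.2.108), confirming cross-node reachability.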
Apr 11 14:10:58.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:10:58.856: INFO: namespace pod-network-test-2765 deletion completed in 22.092243657s • [SLOW TEST:50.562 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:10:58.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 11 14:10:58.945: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8beb2ec-c84e-4039-a69e-ab907272e450" in namespace "downward-api-5515" to be "success or failure" Apr 11 14:10:58.949: INFO: Pod "downwardapi-volume-a8beb2ec-c84e-4039-a69e-ab907272e450": Phase="Pending", Reason="", 
readiness=false. Elapsed: 4.096319ms Apr 11 14:11:00.954: INFO: Pod "downwardapi-volume-a8beb2ec-c84e-4039-a69e-ab907272e450": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008560317s Apr 11 14:11:02.958: INFO: Pod "downwardapi-volume-a8beb2ec-c84e-4039-a69e-ab907272e450": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012963513s STEP: Saw pod success Apr 11 14:11:02.958: INFO: Pod "downwardapi-volume-a8beb2ec-c84e-4039-a69e-ab907272e450" satisfied condition "success or failure" Apr 11 14:11:02.962: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a8beb2ec-c84e-4039-a69e-ab907272e450 container client-container: STEP: delete the pod Apr 11 14:11:02.980: INFO: Waiting for pod downwardapi-volume-a8beb2ec-c84e-4039-a69e-ab907272e450 to disappear Apr 11 14:11:02.984: INFO: Pod downwardapi-volume-a8beb2ec-c84e-4039-a69e-ab907272e450 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:11:02.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5515" for this suite. 
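[Editor's note] The downward API test above mounts a volume that projects the container's memory request into a file, then reads it back from the pod logs. A hedged sketch of the volume fragment involved (field names are the core/v1 API's; the volume name and file path are illustrative guesses, while `client-container` is taken from the log):

```shell
# Hypothetical downwardAPI volume fragment for exposing requests.memory.
# Only resourceFieldRef/containerName/resource are API-mandated names; the
# rest is assumed for illustration.
print_downward_volume() {
  cat <<'EOF'
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: "memory_request"
      resourceFieldRef:
        containerName: client-container
        resource: requests.memory
EOF
}
print_downward_volume
```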
Apr 11 14:11:09.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:11:09.123: INFO: namespace downward-api-5515 deletion completed in 6.135481564s • [SLOW TEST:10.267 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:11:09.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1550 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 11 14:11:09.213: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 11 14:11:33.307: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.154 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1550 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 11 
14:11:33.307: INFO: >>> kubeConfig: /root/.kube/config I0411 14:11:33.342487 6 log.go:172] (0xc001920790) (0xc001a161e0) Create stream I0411 14:11:33.342518 6 log.go:172] (0xc001920790) (0xc001a161e0) Stream added, broadcasting: 1 I0411 14:11:33.344492 6 log.go:172] (0xc001920790) Reply frame received for 1 I0411 14:11:33.344532 6 log.go:172] (0xc001920790) (0xc002d79b80) Create stream I0411 14:11:33.344545 6 log.go:172] (0xc001920790) (0xc002d79b80) Stream added, broadcasting: 3 I0411 14:11:33.345807 6 log.go:172] (0xc001920790) Reply frame received for 3 I0411 14:11:33.345864 6 log.go:172] (0xc001920790) (0xc000ebcb40) Create stream I0411 14:11:33.345892 6 log.go:172] (0xc001920790) (0xc000ebcb40) Stream added, broadcasting: 5 I0411 14:11:33.346940 6 log.go:172] (0xc001920790) Reply frame received for 5 I0411 14:11:34.424395 6 log.go:172] (0xc001920790) Data frame received for 3 I0411 14:11:34.424424 6 log.go:172] (0xc002d79b80) (3) Data frame handling I0411 14:11:34.424436 6 log.go:172] (0xc002d79b80) (3) Data frame sent I0411 14:11:34.424855 6 log.go:172] (0xc001920790) Data frame received for 3 I0411 14:11:34.424892 6 log.go:172] (0xc002d79b80) (3) Data frame handling I0411 14:11:34.424916 6 log.go:172] (0xc001920790) Data frame received for 5 I0411 14:11:34.424921 6 log.go:172] (0xc000ebcb40) (5) Data frame handling I0411 14:11:34.426820 6 log.go:172] (0xc001920790) Data frame received for 1 I0411 14:11:34.426831 6 log.go:172] (0xc001a161e0) (1) Data frame handling I0411 14:11:34.426837 6 log.go:172] (0xc001a161e0) (1) Data frame sent I0411 14:11:34.427104 6 log.go:172] (0xc001920790) (0xc001a161e0) Stream removed, broadcasting: 1 I0411 14:11:34.427158 6 log.go:172] (0xc001920790) Go away received I0411 14:11:34.427293 6 log.go:172] (0xc001920790) (0xc001a161e0) Stream removed, broadcasting: 1 I0411 14:11:34.427330 6 log.go:172] (0xc001920790) (0xc002d79b80) Stream removed, broadcasting: 3 I0411 14:11:34.427353 6 log.go:172] (0xc001920790) (0xc000ebcb40) 
Stream removed, broadcasting: 5 Apr 11 14:11:34.427: INFO: Found all expected endpoints: [netserver-0] Apr 11 14:11:34.430: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.109 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1550 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 11 14:11:34.430: INFO: >>> kubeConfig: /root/.kube/config I0411 14:11:34.462467 6 log.go:172] (0xc001220d10) (0xc000ebcfa0) Create stream I0411 14:11:34.462501 6 log.go:172] (0xc001220d10) (0xc000ebcfa0) Stream added, broadcasting: 1 I0411 14:11:34.464962 6 log.go:172] (0xc001220d10) Reply frame received for 1 I0411 14:11:34.465005 6 log.go:172] (0xc001220d10) (0xc000a5a3c0) Create stream I0411 14:11:34.465021 6 log.go:172] (0xc001220d10) (0xc000a5a3c0) Stream added, broadcasting: 3 I0411 14:11:34.466250 6 log.go:172] (0xc001220d10) Reply frame received for 3 I0411 14:11:34.466290 6 log.go:172] (0xc001220d10) (0xc002d79c20) Create stream I0411 14:11:34.466305 6 log.go:172] (0xc001220d10) (0xc002d79c20) Stream added, broadcasting: 5 I0411 14:11:34.467368 6 log.go:172] (0xc001220d10) Reply frame received for 5 I0411 14:11:35.536093 6 log.go:172] (0xc001220d10) Data frame received for 5 I0411 14:11:35.536163 6 log.go:172] (0xc002d79c20) (5) Data frame handling I0411 14:11:35.536202 6 log.go:172] (0xc001220d10) Data frame received for 3 I0411 14:11:35.536222 6 log.go:172] (0xc000a5a3c0) (3) Data frame handling I0411 14:11:35.536244 6 log.go:172] (0xc000a5a3c0) (3) Data frame sent I0411 14:11:35.536414 6 log.go:172] (0xc001220d10) Data frame received for 3 I0411 14:11:35.536436 6 log.go:172] (0xc000a5a3c0) (3) Data frame handling I0411 14:11:35.538584 6 log.go:172] (0xc001220d10) Data frame received for 1 I0411 14:11:35.538619 6 log.go:172] (0xc000ebcfa0) (1) Data frame handling I0411 14:11:35.538672 6 log.go:172] (0xc000ebcfa0) (1) Data frame sent I0411 14:11:35.538703 6 log.go:172] 
(0xc001220d10) (0xc000ebcfa0) Stream removed, broadcasting: 1 I0411 14:11:35.538854 6 log.go:172] (0xc001220d10) Go away received I0411 14:11:35.538938 6 log.go:172] (0xc001220d10) (0xc000ebcfa0) Stream removed, broadcasting: 1 I0411 14:11:35.538967 6 log.go:172] (0xc001220d10) (0xc000a5a3c0) Stream removed, broadcasting: 3 I0411 14:11:35.538985 6 log.go:172] (0xc001220d10) (0xc002d79c20) Stream removed, broadcasting: 5 Apr 11 14:11:35.539: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:11:35.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1550" for this suite. Apr 11 14:11:59.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:11:59.634: INFO: namespace pod-network-test-1550 deletion completed in 24.090122141s • [SLOW TEST:50.511 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:11:59.634: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-f3ddc125-e9fa-4a4e-a0fb-a9c24d78db74 STEP: Creating a pod to test consume secrets Apr 11 14:11:59.701: INFO: Waiting up to 5m0s for pod "pod-secrets-1d8434c3-e61a-4f7d-a988-83adcec2043a" in namespace "secrets-5911" to be "success or failure" Apr 11 14:11:59.717: INFO: Pod "pod-secrets-1d8434c3-e61a-4f7d-a988-83adcec2043a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.023155ms Apr 11 14:12:01.785: INFO: Pod "pod-secrets-1d8434c3-e61a-4f7d-a988-83adcec2043a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08397523s Apr 11 14:12:03.790: INFO: Pod "pod-secrets-1d8434c3-e61a-4f7d-a988-83adcec2043a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088385198s STEP: Saw pod success Apr 11 14:12:03.790: INFO: Pod "pod-secrets-1d8434c3-e61a-4f7d-a988-83adcec2043a" satisfied condition "success or failure" Apr 11 14:12:03.792: INFO: Trying to get logs from node iruya-worker pod pod-secrets-1d8434c3-e61a-4f7d-a988-83adcec2043a container secret-volume-test: STEP: delete the pod Apr 11 14:12:03.810: INFO: Waiting for pod pod-secrets-1d8434c3-e61a-4f7d-a988-83adcec2043a to disappear Apr 11 14:12:03.814: INFO: Pod pod-secrets-1d8434c3-e61a-4f7d-a988-83adcec2043a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:12:03.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5911" for this suite. 
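[Editor's note] "Mappings and Item Mode set" in the secrets test above refers to a secret volume that remaps keys to custom paths with an explicit file mode. A sketch of the fragment, assuming typical e2e values (the secret name comes from the log; the key, path, and `0400` mode are assumptions):

```shell
# Hypothetical secret volume with per-item key-to-path mapping and file mode.
print_secret_volume() {
  cat <<'EOF'
volumes:
- name: secret-volume
  secret:
    secretName: secret-test-map-f3ddc125-e9fa-4a4e-a0fb-a9c24d78db74
    items:
    - key: data-1
      path: new-path-data-1
      mode: 0400
EOF
}
print_secret_volume
```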
Apr 11 14:12:09.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:12:09.909: INFO: namespace secrets-5911 deletion completed in 6.092566362s • [SLOW TEST:10.275 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:12:09.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 11 14:12:10.024: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"d899aeb1-5a03-4cfc-a43e-242b1612b7ea", Controller:(*bool)(0xc0025212a2), BlockOwnerDeletion:(*bool)(0xc0025212a3)}} Apr 11 14:12:10.055: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"57f541ec-8cb1-4e83-a9e4-810bcb937546", Controller:(*bool)(0xc002983ac2), BlockOwnerDeletion:(*bool)(0xc002983ac3)}} Apr 11 14:12:10.099: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", 
Kind:"Pod", Name:"pod2", UID:"149852d4-e80c-40b1-910f-b1a19cbffd47", Controller:(*bool)(0xc002a7a79a), BlockOwnerDeletion:(*bool)(0xc002a7a79b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:12:15.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5887" for this suite. Apr 11 14:12:21.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:12:21.219: INFO: namespace gc-5887 deletion completed in 6.102435596s • [SLOW TEST:11.310 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:12:21.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-ca7312ac-e104-4d83-8198-74074cd30c3c STEP: Creating a pod to test consume secrets Apr 11 
14:12:21.312: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-43d569af-0f30-4bf9-b7f1-c5ebfa23306e" in namespace "projected-403" to be "success or failure" Apr 11 14:12:21.327: INFO: Pod "pod-projected-secrets-43d569af-0f30-4bf9-b7f1-c5ebfa23306e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.37393ms Apr 11 14:12:23.331: INFO: Pod "pod-projected-secrets-43d569af-0f30-4bf9-b7f1-c5ebfa23306e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019347632s Apr 11 14:12:25.335: INFO: Pod "pod-projected-secrets-43d569af-0f30-4bf9-b7f1-c5ebfa23306e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023772861s STEP: Saw pod success Apr 11 14:12:25.335: INFO: Pod "pod-projected-secrets-43d569af-0f30-4bf9-b7f1-c5ebfa23306e" satisfied condition "success or failure" Apr 11 14:12:25.338: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-43d569af-0f30-4bf9-b7f1-c5ebfa23306e container projected-secret-volume-test: STEP: delete the pod Apr 11 14:12:25.371: INFO: Waiting for pod pod-projected-secrets-43d569af-0f30-4bf9-b7f1-c5ebfa23306e to disappear Apr 11 14:12:25.382: INFO: Pod pod-projected-secrets-43d569af-0f30-4bf9-b7f1-c5ebfa23306e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:12:25.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-403" for this suite. 
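[Editor's note] The projected-secret test above exercises `defaultMode`, which sets the permission bits for every file the projected volume creates. A sketch under assumed values (the secret name is from the log; the `0440` mode is illustrative):

```shell
# Hypothetical projected volume with defaultMode applied to all projected files.
print_projected_volume() {
  cat <<'EOF'
volumes:
- name: projected-secret-volume
  projected:
    defaultMode: 0440
    sources:
    - secret:
        name: projected-secret-test-ca7312ac-e104-4d83-8198-74074cd30c3c
EOF
}
print_projected_volume
```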
Apr 11 14:12:31.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:12:31.479: INFO: namespace projected-403 deletion completed in 6.094436228s • [SLOW TEST:10.260 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:12:31.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 11 14:12:31.516: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
Apr 11 14:12:31.968: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 11 14:12:34.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722211151, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722211151, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722211152, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722211151, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 11 14:12:37.289: INFO: Waited 723.383546ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:12:37.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4033" for this suite. 
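[Editor's note] Registering a sample API server with the aggregator, as this test does, means creating an APIService object that routes a group/version to an in-cluster Service. A hedged sketch (the `wardle.k8s.io` group, service name, and priority values are assumptions for illustration; only the namespace `aggregator-4033` appears in the log):

```shell
# Hypothetical APIService registration for an aggregated sample API server.
print_apiservice() {
  cat <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io
spec:
  group: wardle.k8s.io
  version: v1alpha1
  service:
    name: sample-api
    namespace: aggregator-4033
  groupPriorityMinimum: 2000
  versionPriority: 200
EOF
}
print_apiservice
```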
Apr 11 14:12:43.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:12:43.920: INFO: namespace aggregator-4033 deletion completed in 6.177644908s • [SLOW TEST:12.440 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:12:43.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-3bc29adb-a29c-43e5-b2a0-8b8bae663365 STEP: Creating a pod to test consume secrets Apr 11 14:12:44.088: INFO: Waiting up to 5m0s for pod "pod-secrets-1072bf68-35c4-4025-930c-b9c00287ad32" in namespace "secrets-1236" to be "success or failure" Apr 11 14:12:44.090: INFO: Pod "pod-secrets-1072bf68-35c4-4025-930c-b9c00287ad32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235651ms
Apr 11 14:12:46.095: INFO: Pod "pod-secrets-1072bf68-35c4-4025-930c-b9c00287ad32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006571553s Apr 11 14:12:48.099: INFO: Pod "pod-secrets-1072bf68-35c4-4025-930c-b9c00287ad32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010883474s STEP: Saw pod success Apr 11 14:12:48.099: INFO: Pod "pod-secrets-1072bf68-35c4-4025-930c-b9c00287ad32" satisfied condition "success or failure" Apr 11 14:12:48.102: INFO: Trying to get logs from node iruya-worker pod pod-secrets-1072bf68-35c4-4025-930c-b9c00287ad32 container secret-volume-test: STEP: delete the pod Apr 11 14:12:48.137: INFO: Waiting for pod pod-secrets-1072bf68-35c4-4025-930c-b9c00287ad32 to disappear Apr 11 14:12:48.177: INFO: Pod pod-secrets-1072bf68-35c4-4025-930c-b9c00287ad32 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:12:48.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1236" for this suite. Apr 11 14:12:54.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:12:54.270: INFO: namespace secrets-1236 deletion completed in 6.08836625s STEP: Destroying namespace "secret-namespace-5885" for this suite.
Apr 11 14:13:00.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:13:00.347: INFO: namespace secret-namespace-5885 deletion completed in 6.077824709s • [SLOW TEST:16.427 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:13:00.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 11 14:13:00.446: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 11 14:13:00.456: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:00.461: INFO: Number of nodes with available pods: 0 Apr 11 14:13:00.461: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:13:01.466: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:01.469: INFO: Number of nodes with available pods: 0 Apr 11 14:13:01.469: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:13:02.466: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:02.470: INFO: Number of nodes with available pods: 0 Apr 11 14:13:02.470: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:13:03.471: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:03.474: INFO: Number of nodes with available pods: 1 Apr 11 14:13:03.474: INFO: Node iruya-worker is running more than one daemon pod Apr 11 14:13:04.466: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:04.470: INFO: Number of nodes with available pods: 2 Apr 11 14:13:04.470: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 11 14:13:04.522: INFO: Wrong image for pod: daemon-set-gm6vw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 11 14:13:04.522: INFO: Wrong image for pod: daemon-set-pf5jz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 11 14:13:04.528: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:05.533: INFO: Wrong image for pod: daemon-set-gm6vw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 11 14:13:05.533: INFO: Wrong image for pod: daemon-set-pf5jz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 11 14:13:05.538: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:06.533: INFO: Wrong image for pod: daemon-set-gm6vw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 11 14:13:06.534: INFO: Wrong image for pod: daemon-set-pf5jz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 11 14:13:06.538: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:07.532: INFO: Wrong image for pod: daemon-set-gm6vw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 11 14:13:07.532: INFO: Pod daemon-set-gm6vw is not available Apr 11 14:13:07.532: INFO: Wrong image for pod: daemon-set-pf5jz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 11 14:13:07.536: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:08.561: INFO: Wrong image for pod: daemon-set-pf5jz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 11 14:13:08.561: INFO: Pod daemon-set-w6qmk is not available Apr 11 14:13:08.565: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:09.534: INFO: Wrong image for pod: daemon-set-pf5jz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 11 14:13:09.534: INFO: Pod daemon-set-w6qmk is not available Apr 11 14:13:09.538: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:10.532: INFO: Wrong image for pod: daemon-set-pf5jz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 11 14:13:10.532: INFO: Pod daemon-set-w6qmk is not available Apr 11 14:13:10.536: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:11.532: INFO: Wrong image for pod: daemon-set-pf5jz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 11 14:13:11.536: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:12.533: INFO: Wrong image for pod: daemon-set-pf5jz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 11 14:13:12.538: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:13.533: INFO: Wrong image for pod: daemon-set-pf5jz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 11 14:13:13.533: INFO: Pod daemon-set-pf5jz is not available Apr 11 14:13:13.538: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:14.537: INFO: Pod daemon-set-krzx7 is not available Apr 11 14:13:14.541: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Apr 11 14:13:14.551: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:14.556: INFO: Number of nodes with available pods: 1 Apr 11 14:13:14.556: INFO: Node iruya-worker2 is running more than one daemon pod Apr 11 14:13:15.560: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:15.564: INFO: Number of nodes with available pods: 1 Apr 11 14:13:15.564: INFO: Node iruya-worker2 is running more than one daemon pod Apr 11 14:13:16.560: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:16.564: INFO: Number of nodes with available pods: 1 Apr 11 14:13:16.564: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 11 14:13:17.561: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 14:13:17.564: INFO: Number of nodes with available pods: 2 Apr 11 14:13:17.564: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2278, will wait for the garbage collector to delete the pods Apr 11 14:13:17.639: INFO: Deleting DaemonSet.extensions daemon-set took: 6.769516ms Apr 11 14:13:17.939: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.313454ms Apr 11 14:13:31.943: INFO: Number of nodes with available pods: 0 Apr 11 14:13:31.943: INFO: Number of running nodes: 0, number of available pods: 0 Apr 11 14:13:31.945: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2278/daemonsets","resourceVersion":"4853846"},"items":null} Apr 11 14:13:31.948: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2278/pods","resourceVersion":"4853846"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:13:31.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2278" for this suite.
Apr 11 14:13:37.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:13:38.049: INFO: namespace daemonsets-2278 deletion completed in 6.089385045s • [SLOW TEST:37.701 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:13:38.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-11f40a63-8121-49c7-a5c3-97d04fe4a8ed in namespace container-probe-6405 Apr 11 14:13:42.137: INFO: Started pod liveness-11f40a63-8121-49c7-a5c3-97d04fe4a8ed in namespace container-probe-6405 STEP: checking the pod's current state and verifying that restartCount is present Apr 11 14:13:42.140: INFO: Initial restart count of pod liveness-11f40a63-8121-49c7-a5c3-97d04fe4a8ed is 0
Apr 11 14:14:02.185: INFO: Restart count of pod container-probe-6405/liveness-11f40a63-8121-49c7-a5c3-97d04fe4a8ed is now 1 (20.045345095s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:14:02.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6405" for this suite. Apr 11 14:14:08.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:14:08.416: INFO: namespace container-probe-6405 deletion completed in 6.120578345s • [SLOW TEST:30.366 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:14:08.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 11 14:14:08.495: INFO: Waiting up to 5m0s for pod "pod-b31679d2-6064-43b0-8685-aed44c066919" in namespace "emptydir-1735" to be "success or failure"
Apr 11 14:14:08.498: INFO: Pod "pod-b31679d2-6064-43b0-8685-aed44c066919": Phase="Pending", Reason="", readiness=false. Elapsed: 3.706275ms Apr 11 14:14:10.534: INFO: Pod "pod-b31679d2-6064-43b0-8685-aed44c066919": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038957313s Apr 11 14:14:12.537: INFO: Pod "pod-b31679d2-6064-43b0-8685-aed44c066919": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042570572s STEP: Saw pod success Apr 11 14:14:12.537: INFO: Pod "pod-b31679d2-6064-43b0-8685-aed44c066919" satisfied condition "success or failure" Apr 11 14:14:12.539: INFO: Trying to get logs from node iruya-worker pod pod-b31679d2-6064-43b0-8685-aed44c066919 container test-container: STEP: delete the pod Apr 11 14:14:12.554: INFO: Waiting for pod pod-b31679d2-6064-43b0-8685-aed44c066919 to disappear Apr 11 14:14:12.558: INFO: Pod pod-b31679d2-6064-43b0-8685-aed44c066919 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:14:12.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1735" for this suite.
Apr 11 14:14:18.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:14:18.665: INFO: namespace emptydir-1735 deletion completed in 6.10378786s • [SLOW TEST:10.249 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:14:18.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8713 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 11 14:14:18.744: INFO: Found 0 stateful pods, waiting for 3 Apr 11 14:14:28.749: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true 
Apr 11 14:14:28.749: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 11 14:14:28.749: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 11 14:14:28.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8713 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 11 14:14:29.027: INFO: stderr: "I0411 14:14:28.896734 2365 log.go:172] (0xc000a08420) (0xc00097e640) Create stream\nI0411 14:14:28.896786 2365 log.go:172] (0xc000a08420) (0xc00097e640) Stream added, broadcasting: 1\nI0411 14:14:28.898859 2365 log.go:172] (0xc000a08420) Reply frame received for 1\nI0411 14:14:28.898911 2365 log.go:172] (0xc000a08420) (0xc000946000) Create stream\nI0411 14:14:28.898925 2365 log.go:172] (0xc000a08420) (0xc000946000) Stream added, broadcasting: 3\nI0411 14:14:28.899922 2365 log.go:172] (0xc000a08420) Reply frame received for 3\nI0411 14:14:28.899961 2365 log.go:172] (0xc000a08420) (0xc0009460a0) Create stream\nI0411 14:14:28.899976 2365 log.go:172] (0xc000a08420) (0xc0009460a0) Stream added, broadcasting: 5\nI0411 14:14:28.900974 2365 log.go:172] (0xc000a08420) Reply frame received for 5\nI0411 14:14:28.986556 2365 log.go:172] (0xc000a08420) Data frame received for 5\nI0411 14:14:28.986599 2365 log.go:172] (0xc0009460a0) (5) Data frame handling\nI0411 14:14:28.986621 2365 log.go:172] (0xc0009460a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0411 14:14:29.018623 2365 log.go:172] (0xc000a08420) Data frame received for 3\nI0411 14:14:29.018652 2365 log.go:172] (0xc000946000) (3) Data frame handling\nI0411 14:14:29.018665 2365 log.go:172] (0xc000946000) (3) Data frame sent\nI0411 14:14:29.018710 2365 log.go:172] (0xc000a08420) Data frame received for 3\nI0411 14:14:29.018736 2365 log.go:172] (0xc000a08420) Data frame received for 5\nI0411 14:14:29.018778 2365 log.go:172] (0xc0009460a0) (5) 
Data frame handling\nI0411 14:14:29.018807 2365 log.go:172] (0xc000946000) (3) Data frame handling\nI0411 14:14:29.020807 2365 log.go:172] (0xc000a08420) Data frame received for 1\nI0411 14:14:29.020832 2365 log.go:172] (0xc00097e640) (1) Data frame handling\nI0411 14:14:29.020851 2365 log.go:172] (0xc00097e640) (1) Data frame sent\nI0411 14:14:29.020869 2365 log.go:172] (0xc000a08420) (0xc00097e640) Stream removed, broadcasting: 1\nI0411 14:14:29.020899 2365 log.go:172] (0xc000a08420) Go away received\nI0411 14:14:29.021585 2365 log.go:172] (0xc000a08420) (0xc00097e640) Stream removed, broadcasting: 1\nI0411 14:14:29.021625 2365 log.go:172] (0xc000a08420) (0xc000946000) Stream removed, broadcasting: 3\nI0411 14:14:29.021651 2365 log.go:172] (0xc000a08420) (0xc0009460a0) Stream removed, broadcasting: 5\n" Apr 11 14:14:29.027: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 11 14:14:29.027: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 11 14:14:39.063: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 11 14:14:49.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8713 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 11 14:14:49.353: INFO: stderr: "I0411 14:14:49.262291 2387 log.go:172] (0xc0006ca370) (0xc0006c86e0) Create stream\nI0411 14:14:49.262347 2387 log.go:172] (0xc0006ca370) (0xc0006c86e0) Stream added, broadcasting: 1\nI0411 14:14:49.264378 2387 log.go:172] (0xc0006ca370) Reply frame received for 1\nI0411 14:14:49.264413 2387 log.go:172] (0xc0006ca370) (0xc00065e140) Create stream\nI0411 14:14:49.264423 2387 log.go:172] (0xc0006ca370) (0xc00065e140) Stream added, 
broadcasting: 3\nI0411 14:14:49.265411 2387 log.go:172] (0xc0006ca370) Reply frame received for 3\nI0411 14:14:49.265460 2387 log.go:172] (0xc0006ca370) (0xc0006c8780) Create stream\nI0411 14:14:49.265480 2387 log.go:172] (0xc0006ca370) (0xc0006c8780) Stream added, broadcasting: 5\nI0411 14:14:49.266420 2387 log.go:172] (0xc0006ca370) Reply frame received for 5\nI0411 14:14:49.345783 2387 log.go:172] (0xc0006ca370) Data frame received for 3\nI0411 14:14:49.345825 2387 log.go:172] (0xc0006ca370) Data frame received for 5\nI0411 14:14:49.345865 2387 log.go:172] (0xc0006c8780) (5) Data frame handling\nI0411 14:14:49.345886 2387 log.go:172] (0xc0006c8780) (5) Data frame sent\nI0411 14:14:49.345899 2387 log.go:172] (0xc0006ca370) Data frame received for 5\nI0411 14:14:49.345908 2387 log.go:172] (0xc0006c8780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0411 14:14:49.345955 2387 log.go:172] (0xc00065e140) (3) Data frame handling\nI0411 14:14:49.345987 2387 log.go:172] (0xc00065e140) (3) Data frame sent\nI0411 14:14:49.345999 2387 log.go:172] (0xc0006ca370) Data frame received for 3\nI0411 14:14:49.346009 2387 log.go:172] (0xc00065e140) (3) Data frame handling\nI0411 14:14:49.347913 2387 log.go:172] (0xc0006ca370) Data frame received for 1\nI0411 14:14:49.347965 2387 log.go:172] (0xc0006c86e0) (1) Data frame handling\nI0411 14:14:49.348001 2387 log.go:172] (0xc0006c86e0) (1) Data frame sent\nI0411 14:14:49.348028 2387 log.go:172] (0xc0006ca370) (0xc0006c86e0) Stream removed, broadcasting: 1\nI0411 14:14:49.348103 2387 log.go:172] (0xc0006ca370) Go away received\nI0411 14:14:49.348415 2387 log.go:172] (0xc0006ca370) (0xc0006c86e0) Stream removed, broadcasting: 1\nI0411 14:14:49.348438 2387 log.go:172] (0xc0006ca370) (0xc00065e140) Stream removed, broadcasting: 3\nI0411 14:14:49.348458 2387 log.go:172] (0xc0006ca370) (0xc0006c8780) Stream removed, broadcasting: 5\n" Apr 11 14:14:49.353: INFO: stdout: "'/tmp/index.html' -> 
'/usr/share/nginx/html/index.html'\n" Apr 11 14:14:49.353: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' STEP: Rolling back to a previous revision Apr 11 14:15:09.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8713 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 11 14:15:09.650: INFO: stderr: "I0411 14:15:09.513243 2408 log.go:172] (0xc0005e4420) (0xc000520640) Create stream\nI0411 14:15:09.513292 2408 log.go:172] (0xc0005e4420) (0xc000520640) Stream added, broadcasting: 1\nI0411 14:15:09.515421 2408 log.go:172] (0xc0005e4420) Reply frame received for 1\nI0411 14:15:09.515487 2408 log.go:172] (0xc0005e4420) (0xc0005206e0) Create stream\nI0411 14:15:09.515522 2408 log.go:172] (0xc0005e4420) (0xc0005206e0) Stream added, broadcasting: 3\nI0411 14:15:09.516574 2408 log.go:172] (0xc0005e4420) Reply frame received for 3\nI0411 14:15:09.516616 2408 log.go:172] (0xc0005e4420) (0xc0006843c0) Create stream\nI0411 14:15:09.516630 2408 log.go:172] (0xc0005e4420) (0xc0006843c0) Stream added, broadcasting: 5\nI0411 14:15:09.517986 2408 log.go:172] (0xc0005e4420) Reply frame received for 5\nI0411 14:15:09.615002 2408 log.go:172] (0xc0005e4420) Data frame received for 5\nI0411 14:15:09.615031 2408 log.go:172] (0xc0006843c0) (5) Data frame handling\nI0411 14:15:09.615051 2408 log.go:172] (0xc0006843c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0411 14:15:09.643336 2408 log.go:172] (0xc0005e4420) Data frame received for 5\nI0411 14:15:09.643384 2408 log.go:172] (0xc0006843c0) (5) Data frame handling\nI0411 14:15:09.643412 2408 log.go:172] (0xc0005e4420) Data frame received for 3\nI0411 14:15:09.643422 2408 log.go:172] (0xc0005206e0) (3) Data frame handling\nI0411 14:15:09.643441 2408 log.go:172] (0xc0005206e0) (3) Data frame sent\nI0411 14:15:09.643473 2408 log.go:172] 
(0xc0005e4420) Data frame received for 3\nI0411 14:15:09.643486 2408 log.go:172] (0xc0005206e0) (3) Data frame handling\nI0411 14:15:09.645321 2408 log.go:172] (0xc0005e4420) Data frame received for 1\nI0411 14:15:09.645353 2408 log.go:172] (0xc000520640) (1) Data frame handling\nI0411 14:15:09.645372 2408 log.go:172] (0xc000520640) (1) Data frame sent\nI0411 14:15:09.645400 2408 log.go:172] (0xc0005e4420) (0xc000520640) Stream removed, broadcasting: 1\nI0411 14:15:09.645421 2408 log.go:172] (0xc0005e4420) Go away received\nI0411 14:15:09.645924 2408 log.go:172] (0xc0005e4420) (0xc000520640) Stream removed, broadcasting: 1\nI0411 14:15:09.645949 2408 log.go:172] (0xc0005e4420) (0xc0005206e0) Stream removed, broadcasting: 3\nI0411 14:15:09.645961 2408 log.go:172] (0xc0005e4420) (0xc0006843c0) Stream removed, broadcasting: 5\n" Apr 11 14:15:09.650: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 11 14:15:09.650: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 11 14:15:19.692: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 11 14:15:29.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8713 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 11 14:15:29.965: INFO: stderr: "I0411 14:15:29.862581 2430 log.go:172] (0xc000af6420) (0xc0006d26e0) Create stream\nI0411 14:15:29.862640 2430 log.go:172] (0xc000af6420) (0xc0006d26e0) Stream added, broadcasting: 1\nI0411 14:15:29.866959 2430 log.go:172] (0xc000af6420) Reply frame received for 1\nI0411 14:15:29.867004 2430 log.go:172] (0xc000af6420) (0xc0006d2000) Create stream\nI0411 14:15:29.867018 2430 log.go:172] (0xc000af6420) (0xc0006d2000) Stream added, broadcasting: 3\nI0411 14:15:29.868150 2430 log.go:172] (0xc000af6420) Reply frame received for 3\nI0411 14:15:29.868184 2430 
log.go:172] (0xc000af6420) (0xc000670280) Create stream\nI0411 14:15:29.868197 2430 log.go:172] (0xc000af6420) (0xc000670280) Stream added, broadcasting: 5\nI0411 14:15:29.869394 2430 log.go:172] (0xc000af6420) Reply frame received for 5\nI0411 14:15:29.953737 2430 log.go:172] (0xc000af6420) Data frame received for 5\nI0411 14:15:29.953772 2430 log.go:172] (0xc000670280) (5) Data frame handling\nI0411 14:15:29.953784 2430 log.go:172] (0xc000670280) (5) Data frame sent\nI0411 14:15:29.953791 2430 log.go:172] (0xc000af6420) Data frame received for 5\nI0411 14:15:29.953796 2430 log.go:172] (0xc000670280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0411 14:15:29.953875 2430 log.go:172] (0xc000af6420) Data frame received for 3\nI0411 14:15:29.953886 2430 log.go:172] (0xc0006d2000) (3) Data frame handling\nI0411 14:15:29.953894 2430 log.go:172] (0xc0006d2000) (3) Data frame sent\nI0411 14:15:29.953899 2430 log.go:172] (0xc000af6420) Data frame received for 3\nI0411 14:15:29.953904 2430 log.go:172] (0xc0006d2000) (3) Data frame handling\nI0411 14:15:29.961543 2430 log.go:172] (0xc000af6420) Data frame received for 1\nI0411 14:15:29.961559 2430 log.go:172] (0xc0006d26e0) (1) Data frame handling\nI0411 14:15:29.961565 2430 log.go:172] (0xc0006d26e0) (1) Data frame sent\nI0411 14:15:29.961573 2430 log.go:172] (0xc000af6420) (0xc0006d26e0) Stream removed, broadcasting: 1\nI0411 14:15:29.961582 2430 log.go:172] (0xc000af6420) Go away received\nI0411 14:15:29.961886 2430 log.go:172] (0xc000af6420) (0xc0006d26e0) Stream removed, broadcasting: 1\nI0411 14:15:29.961912 2430 log.go:172] (0xc000af6420) (0xc0006d2000) Stream removed, broadcasting: 3\nI0411 14:15:29.961922 2430 log.go:172] (0xc000af6420) (0xc000670280) Stream removed, broadcasting: 5\n" Apr 11 14:15:29.965: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 11 14:15:29.965: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: 
'/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 11 14:15:49.982: INFO: Waiting for StatefulSet statefulset-8713/ss2 to complete update Apr 11 14:15:49.982: INFO: Waiting for Pod statefulset-8713/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 11 14:15:59.991: INFO: Deleting all statefulset in ns statefulset-8713 Apr 11 14:15:59.994: INFO: Scaling statefulset ss2 to 0 Apr 11 14:16:30.015: INFO: Waiting for statefulset status.replicas updated to 0 Apr 11 14:16:30.018: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:16:30.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8713" for this suite. Apr 11 14:16:36.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:16:36.509: INFO: namespace statefulset-8713 deletion completed in 6.475906609s • [SLOW TEST:137.844 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:16:36.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 11 14:16:36.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Apr 11 14:16:36.734: INFO: stderr: "" Apr 11 14:16:36.734: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:39:42Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:16:36.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2420" for this suite. 
Apr 11 14:16:42.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:16:42.826: INFO: namespace kubectl-2420 deletion completed in 6.087941889s • [SLOW TEST:6.317 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:16:42.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Apr 11 14:16:42.930: INFO: Waiting up to 5m0s for pod "client-containers-c4ade8b6-3bec-445e-9f64-93d98c66e66a" in namespace "containers-821" to be "success or failure" Apr 11 14:16:42.934: INFO: Pod "client-containers-c4ade8b6-3bec-445e-9f64-93d98c66e66a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104653ms Apr 11 14:16:44.937: INFO: Pod "client-containers-c4ade8b6-3bec-445e-9f64-93d98c66e66a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007721582s Apr 11 14:16:46.942: INFO: Pod "client-containers-c4ade8b6-3bec-445e-9f64-93d98c66e66a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012147849s STEP: Saw pod success Apr 11 14:16:46.942: INFO: Pod "client-containers-c4ade8b6-3bec-445e-9f64-93d98c66e66a" satisfied condition "success or failure" Apr 11 14:16:46.945: INFO: Trying to get logs from node iruya-worker pod client-containers-c4ade8b6-3bec-445e-9f64-93d98c66e66a container test-container: STEP: delete the pod Apr 11 14:16:46.965: INFO: Waiting for pod client-containers-c4ade8b6-3bec-445e-9f64-93d98c66e66a to disappear Apr 11 14:16:46.969: INFO: Pod client-containers-c4ade8b6-3bec-445e-9f64-93d98c66e66a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:16:46.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-821" for this suite. Apr 11 14:16:53.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:16:53.082: INFO: namespace containers-821 deletion completed in 6.108617275s • [SLOW TEST:10.255 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:16:53.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-b794cd82-f115-4117-b19e-6111e9c278f5 STEP: Creating a pod to test consume configMaps Apr 11 14:16:53.169: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d4423b58-de9e-490d-9412-ddb2ee4ddab3" in namespace "projected-1112" to be "success or failure" Apr 11 14:16:53.199: INFO: Pod "pod-projected-configmaps-d4423b58-de9e-490d-9412-ddb2ee4ddab3": Phase="Pending", Reason="", readiness=false. Elapsed: 29.763862ms Apr 11 14:16:55.202: INFO: Pod "pod-projected-configmaps-d4423b58-de9e-490d-9412-ddb2ee4ddab3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033581866s Apr 11 14:16:57.218: INFO: Pod "pod-projected-configmaps-d4423b58-de9e-490d-9412-ddb2ee4ddab3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04877671s STEP: Saw pod success Apr 11 14:16:57.218: INFO: Pod "pod-projected-configmaps-d4423b58-de9e-490d-9412-ddb2ee4ddab3" satisfied condition "success or failure" Apr 11 14:16:57.259: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-d4423b58-de9e-490d-9412-ddb2ee4ddab3 container projected-configmap-volume-test: STEP: delete the pod Apr 11 14:16:57.282: INFO: Waiting for pod pod-projected-configmaps-d4423b58-de9e-490d-9412-ddb2ee4ddab3 to disappear Apr 11 14:16:57.292: INFO: Pod pod-projected-configmaps-d4423b58-de9e-490d-9412-ddb2ee4ddab3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:16:57.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1112" for this suite. Apr 11 14:17:03.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:17:03.380: INFO: namespace projected-1112 deletion completed in 6.084884514s • [SLOW TEST:10.298 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:17:03.381: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 11 14:17:03.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2911' Apr 11 14:17:03.765: INFO: stderr: "" Apr 11 14:17:03.765: INFO: stdout: "replicationcontroller/redis-master created\n" Apr 11 14:17:03.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2911' Apr 11 14:17:04.051: INFO: stderr: "" Apr 11 14:17:04.051: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Apr 11 14:17:05.115: INFO: Selector matched 1 pods for map[app:redis] Apr 11 14:17:05.115: INFO: Found 0 / 1 Apr 11 14:17:06.056: INFO: Selector matched 1 pods for map[app:redis] Apr 11 14:17:06.056: INFO: Found 0 / 1 Apr 11 14:17:07.056: INFO: Selector matched 1 pods for map[app:redis] Apr 11 14:17:07.056: INFO: Found 1 / 1 Apr 11 14:17:07.056: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 11 14:17:07.058: INFO: Selector matched 1 pods for map[app:redis] Apr 11 14:17:07.058: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 11 14:17:07.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-8ngpj --namespace=kubectl-2911' Apr 11 14:17:07.162: INFO: stderr: "" Apr 11 14:17:07.162: INFO: stdout: "Name: redis-master-8ngpj\nNamespace: kubectl-2911\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Sat, 11 Apr 2020 14:17:03 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.125\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://fe2f9dd0a0577dfcc766c03c5cf82d007524673d1576396ed84674eb8c454101\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 11 Apr 2020 14:17:06 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-wxjwj (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-wxjwj:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-wxjwj\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-2911/redis-master-8ngpj to iruya-worker\n Normal Pulled 3s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n" Apr 11 14:17:07.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc 
redis-master --namespace=kubectl-2911' Apr 11 14:17:07.280: INFO: stderr: "" Apr 11 14:17:07.280: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2911\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-8ngpj\n" Apr 11 14:17:07.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2911' Apr 11 14:17:07.390: INFO: stderr: "" Apr 11 14:17:07.390: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2911\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.107.70.20\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.125:6379\nSession Affinity: None\nEvents: \n" Apr 11 14:17:07.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Apr 11 14:17:07.505: INFO: stderr: "" Apr 11 14:17:07.505: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime 
LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 11 Apr 2020 14:16:41 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 11 Apr 2020 14:16:41 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 11 Apr 2020 14:16:41 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 11 Apr 2020 14:16:41 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 26d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 26d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 26d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26d\n kube-system 
kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 26d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 11 14:17:07.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2911' Apr 11 14:17:07.603: INFO: stderr: "" Apr 11 14:17:07.603: INFO: stdout: "Name: kubectl-2911\nLabels: e2e-framework=kubectl\n e2e-run=6e02c623-31f7-407b-978e-03bd21157b98\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:17:07.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2911" for this suite. 
Apr 11 14:17:29.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:17:29.713: INFO: namespace kubectl-2911 deletion completed in 22.106382801s • [SLOW TEST:26.332 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:17:29.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 11 14:17:30.355: INFO: Pod name wrapped-volume-race-2ba96c80-3776-4753-bc27-afe717cc1547: Found 0 pods out of 5 Apr 11 14:17:35.362: INFO: Pod name wrapped-volume-race-2ba96c80-3776-4753-bc27-afe717cc1547: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2ba96c80-3776-4753-bc27-afe717cc1547 in 
namespace emptydir-wrapper-1253, will wait for the garbage collector to delete the pods Apr 11 14:17:49.444: INFO: Deleting ReplicationController wrapped-volume-race-2ba96c80-3776-4753-bc27-afe717cc1547 took: 7.009497ms Apr 11 14:17:49.744: INFO: Terminating ReplicationController wrapped-volume-race-2ba96c80-3776-4753-bc27-afe717cc1547 pods took: 300.285324ms STEP: Creating RC which spawns configmap-volume pods Apr 11 14:18:32.705: INFO: Pod name wrapped-volume-race-dbaf4be4-0f5f-40a0-87bf-c1544bdc3883: Found 0 pods out of 5 Apr 11 14:18:37.713: INFO: Pod name wrapped-volume-race-dbaf4be4-0f5f-40a0-87bf-c1544bdc3883: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-dbaf4be4-0f5f-40a0-87bf-c1544bdc3883 in namespace emptydir-wrapper-1253, will wait for the garbage collector to delete the pods Apr 11 14:18:51.803: INFO: Deleting ReplicationController wrapped-volume-race-dbaf4be4-0f5f-40a0-87bf-c1544bdc3883 took: 8.758628ms Apr 11 14:18:52.103: INFO: Terminating ReplicationController wrapped-volume-race-dbaf4be4-0f5f-40a0-87bf-c1544bdc3883 pods took: 300.277338ms STEP: Creating RC which spawns configmap-volume pods Apr 11 14:19:33.261: INFO: Pod name wrapped-volume-race-1bbd4e92-7894-41cb-9e9b-a63f3d2d70c3: Found 0 pods out of 5 Apr 11 14:19:38.269: INFO: Pod name wrapped-volume-race-1bbd4e92-7894-41cb-9e9b-a63f3d2d70c3: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1bbd4e92-7894-41cb-9e9b-a63f3d2d70c3 in namespace emptydir-wrapper-1253, will wait for the garbage collector to delete the pods Apr 11 14:19:52.360: INFO: Deleting ReplicationController wrapped-volume-race-1bbd4e92-7894-41cb-9e9b-a63f3d2d70c3 took: 21.009402ms Apr 11 14:19:52.760: INFO: Terminating ReplicationController wrapped-volume-race-1bbd4e92-7894-41cb-9e9b-a63f3d2d70c3 pods took: 400.25017ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper 
volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:20:32.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1253" for this suite. Apr 11 14:20:40.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:20:40.954: INFO: namespace emptydir-wrapper-1253 deletion completed in 8.09073094s • [SLOW TEST:191.240 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:20:40.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 11 14:20:41.027: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-57f1fc14-ec4e-449a-95bd-ecc70f9b7bbd" in namespace "downward-api-9965" to be "success or failure" Apr 11 14:20:41.036: INFO: Pod "downwardapi-volume-57f1fc14-ec4e-449a-95bd-ecc70f9b7bbd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.147691ms Apr 11 14:20:43.041: INFO: Pod "downwardapi-volume-57f1fc14-ec4e-449a-95bd-ecc70f9b7bbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013492323s Apr 11 14:20:45.045: INFO: Pod "downwardapi-volume-57f1fc14-ec4e-449a-95bd-ecc70f9b7bbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017913192s STEP: Saw pod success Apr 11 14:20:45.045: INFO: Pod "downwardapi-volume-57f1fc14-ec4e-449a-95bd-ecc70f9b7bbd" satisfied condition "success or failure" Apr 11 14:20:45.048: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-57f1fc14-ec4e-449a-95bd-ecc70f9b7bbd container client-container: STEP: delete the pod Apr 11 14:20:45.084: INFO: Waiting for pod downwardapi-volume-57f1fc14-ec4e-449a-95bd-ecc70f9b7bbd to disappear Apr 11 14:20:45.102: INFO: Pod downwardapi-volume-57f1fc14-ec4e-449a-95bd-ecc70f9b7bbd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:20:45.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9965" for this suite. 
Apr 11 14:20:51.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:20:51.200: INFO: namespace downward-api-9965 deletion completed in 6.094665868s • [SLOW TEST:10.245 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:20:51.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:20:56.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2716" for this suite. 
Apr 11 14:21:18.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:21:18.421: INFO: namespace replication-controller-2716 deletion completed in 22.090769365s • [SLOW TEST:27.221 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:21:18.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 11 14:21:18.467: INFO: Waiting up to 5m0s for pod "downward-api-b31852a8-3a6f-4f93-8a2d-8eb7fcda242a" in namespace "downward-api-3468" to be "success or failure" Apr 11 14:21:18.487: INFO: Pod "downward-api-b31852a8-3a6f-4f93-8a2d-8eb7fcda242a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.936734ms Apr 11 14:21:20.491: INFO: Pod "downward-api-b31852a8-3a6f-4f93-8a2d-8eb7fcda242a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024114154s Apr 11 14:21:22.495: INFO: Pod "downward-api-b31852a8-3a6f-4f93-8a2d-8eb7fcda242a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027693708s STEP: Saw pod success Apr 11 14:21:22.495: INFO: Pod "downward-api-b31852a8-3a6f-4f93-8a2d-8eb7fcda242a" satisfied condition "success or failure" Apr 11 14:21:22.498: INFO: Trying to get logs from node iruya-worker2 pod downward-api-b31852a8-3a6f-4f93-8a2d-8eb7fcda242a container dapi-container: STEP: delete the pod Apr 11 14:21:22.531: INFO: Waiting for pod downward-api-b31852a8-3a6f-4f93-8a2d-8eb7fcda242a to disappear Apr 11 14:21:22.540: INFO: Pod downward-api-b31852a8-3a6f-4f93-8a2d-8eb7fcda242a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:21:22.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3468" for this suite. Apr 11 14:21:28.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:21:28.637: INFO: namespace downward-api-3468 deletion completed in 6.09469589s • [SLOW TEST:10.215 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:21:28.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very 
high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-597 I0411 14:21:28.714757 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-597, replica count: 1 I0411 14:21:29.765310 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0411 14:21:30.765669 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0411 14:21:31.765861 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 11 14:21:31.918: INFO: Created: latency-svc-nttwv Apr 11 14:21:31.930: INFO: Got endpoints: latency-svc-nttwv [64.361108ms] Apr 11 14:21:31.978: INFO: Created: latency-svc-hh5j7 Apr 11 14:21:31.990: INFO: Got endpoints: latency-svc-hh5j7 [60.182275ms] Apr 11 14:21:32.009: INFO: Created: latency-svc-g2vfj Apr 11 14:21:32.041: INFO: Got endpoints: latency-svc-g2vfj [110.755022ms] Apr 11 14:21:32.057: INFO: Created: latency-svc-zlqqz Apr 11 14:21:32.089: INFO: Got endpoints: latency-svc-zlqqz [158.987531ms] Apr 11 14:21:32.123: INFO: Created: latency-svc-fgpmp Apr 11 14:21:32.135: INFO: Got endpoints: latency-svc-fgpmp [204.574113ms] Apr 11 14:21:32.203: INFO: Created: latency-svc-hq29j Apr 11 14:21:32.207: INFO: Got endpoints: latency-svc-hq29j [276.555425ms] Apr 11 14:21:32.236: INFO: Created: latency-svc-6lzhm Apr 11 14:21:32.246: INFO: Got endpoints: latency-svc-6lzhm [316.08048ms] Apr 11 14:21:32.267: INFO: Created: latency-svc-jql78 Apr 11 14:21:32.277: INFO: Got endpoints: latency-svc-jql78 [346.607638ms] Apr 11 14:21:32.297: INFO: Created: latency-svc-cdzn5 Apr 11 14:21:32.364: INFO: Got endpoints: 
latency-svc-cdzn5 [434.104033ms] Apr 11 14:21:32.367: INFO: Created: latency-svc-bkdct Apr 11 14:21:32.374: INFO: Got endpoints: latency-svc-bkdct [443.637239ms] Apr 11 14:21:32.398: INFO: Created: latency-svc-pgbk4 Apr 11 14:21:32.416: INFO: Got endpoints: latency-svc-pgbk4 [485.394869ms] Apr 11 14:21:32.451: INFO: Created: latency-svc-d72t9 Apr 11 14:21:32.502: INFO: Got endpoints: latency-svc-d72t9 [571.793542ms] Apr 11 14:21:32.532: INFO: Created: latency-svc-6fpsp Apr 11 14:21:32.548: INFO: Got endpoints: latency-svc-6fpsp [617.680448ms] Apr 11 14:21:32.579: INFO: Created: latency-svc-4p842 Apr 11 14:21:32.646: INFO: Got endpoints: latency-svc-4p842 [715.626752ms] Apr 11 14:21:32.661: INFO: Created: latency-svc-npz8x Apr 11 14:21:32.674: INFO: Got endpoints: latency-svc-npz8x [744.114592ms] Apr 11 14:21:32.700: INFO: Created: latency-svc-98lc9 Apr 11 14:21:32.711: INFO: Got endpoints: latency-svc-98lc9 [780.283346ms] Apr 11 14:21:32.796: INFO: Created: latency-svc-5pp8f Apr 11 14:21:32.843: INFO: Got endpoints: latency-svc-5pp8f [853.160543ms] Apr 11 14:21:32.879: INFO: Created: latency-svc-t45gg Apr 11 14:21:32.945: INFO: Got endpoints: latency-svc-t45gg [904.378395ms] Apr 11 14:21:32.952: INFO: Created: latency-svc-gcxgs Apr 11 14:21:32.955: INFO: Got endpoints: latency-svc-gcxgs [865.832547ms] Apr 11 14:21:32.979: INFO: Created: latency-svc-6wzmd Apr 11 14:21:32.998: INFO: Got endpoints: latency-svc-6wzmd [863.31157ms] Apr 11 14:21:33.029: INFO: Created: latency-svc-fbcmx Apr 11 14:21:33.095: INFO: Got endpoints: latency-svc-fbcmx [888.019619ms] Apr 11 14:21:33.099: INFO: Created: latency-svc-bhlft Apr 11 14:21:33.106: INFO: Got endpoints: latency-svc-bhlft [859.52969ms] Apr 11 14:21:33.130: INFO: Created: latency-svc-ntvzw Apr 11 14:21:33.142: INFO: Got endpoints: latency-svc-ntvzw [865.348052ms] Apr 11 14:21:33.166: INFO: Created: latency-svc-c5t4m Apr 11 14:21:33.178: INFO: Got endpoints: latency-svc-c5t4m [813.904348ms] Apr 11 14:21:33.233: INFO: 
Created: latency-svc-smvsj Apr 11 14:21:33.245: INFO: Got endpoints: latency-svc-smvsj [870.647915ms] Apr 11 14:21:33.269: INFO: Created: latency-svc-v6g6p Apr 11 14:21:33.281: INFO: Got endpoints: latency-svc-v6g6p [865.65814ms] Apr 11 14:21:33.307: INFO: Created: latency-svc-8pr9q Apr 11 14:21:33.317: INFO: Got endpoints: latency-svc-8pr9q [815.214516ms] Apr 11 14:21:33.377: INFO: Created: latency-svc-2js4n Apr 11 14:21:33.384: INFO: Got endpoints: latency-svc-2js4n [836.266803ms] Apr 11 14:21:33.406: INFO: Created: latency-svc-749ck Apr 11 14:21:33.421: INFO: Got endpoints: latency-svc-749ck [774.611794ms] Apr 11 14:21:33.443: INFO: Created: latency-svc-hglgf Apr 11 14:21:33.457: INFO: Got endpoints: latency-svc-hglgf [782.471641ms] Apr 11 14:21:33.473: INFO: Created: latency-svc-j7xpp Apr 11 14:21:33.514: INFO: Got endpoints: latency-svc-j7xpp [803.284642ms] Apr 11 14:21:33.517: INFO: Created: latency-svc-d5mdk Apr 11 14:21:33.529: INFO: Got endpoints: latency-svc-d5mdk [685.502497ms] Apr 11 14:21:33.562: INFO: Created: latency-svc-bj7fb Apr 11 14:21:33.571: INFO: Got endpoints: latency-svc-bj7fb [625.939395ms] Apr 11 14:21:33.592: INFO: Created: latency-svc-slmv2 Apr 11 14:21:33.608: INFO: Got endpoints: latency-svc-slmv2 [652.775464ms] Apr 11 14:21:33.658: INFO: Created: latency-svc-6g5jh Apr 11 14:21:33.662: INFO: Got endpoints: latency-svc-6g5jh [664.118403ms] Apr 11 14:21:33.684: INFO: Created: latency-svc-wm94x Apr 11 14:21:33.698: INFO: Got endpoints: latency-svc-wm94x [603.257213ms] Apr 11 14:21:33.720: INFO: Created: latency-svc-982jm Apr 11 14:21:33.729: INFO: Got endpoints: latency-svc-982jm [623.172477ms] Apr 11 14:21:33.749: INFO: Created: latency-svc-qngvk Apr 11 14:21:33.783: INFO: Got endpoints: latency-svc-qngvk [641.010302ms] Apr 11 14:21:33.801: INFO: Created: latency-svc-fsmqs Apr 11 14:21:33.813: INFO: Got endpoints: latency-svc-fsmqs [634.855479ms] Apr 11 14:21:33.838: INFO: Created: latency-svc-9nd7c Apr 11 14:21:33.869: INFO: Got 
endpoints: latency-svc-9nd7c [624.397594ms] Apr 11 14:21:33.934: INFO: Created: latency-svc-ldp5w Apr 11 14:21:33.963: INFO: Got endpoints: latency-svc-ldp5w [681.867829ms] Apr 11 14:21:33.964: INFO: Created: latency-svc-b89v4 Apr 11 14:21:33.987: INFO: Got endpoints: latency-svc-b89v4 [670.149928ms] Apr 11 14:21:34.012: INFO: Created: latency-svc-rmrk7 Apr 11 14:21:34.024: INFO: Got endpoints: latency-svc-rmrk7 [640.114025ms] Apr 11 14:21:34.071: INFO: Created: latency-svc-g6cwd Apr 11 14:21:34.079: INFO: Got endpoints: latency-svc-g6cwd [658.220886ms] Apr 11 14:21:34.109: INFO: Created: latency-svc-zzhtk Apr 11 14:21:34.139: INFO: Got endpoints: latency-svc-zzhtk [682.379982ms] Apr 11 14:21:34.168: INFO: Created: latency-svc-lkgx9 Apr 11 14:21:34.220: INFO: Got endpoints: latency-svc-lkgx9 [706.553103ms] Apr 11 14:21:34.228: INFO: Created: latency-svc-njsj2 Apr 11 14:21:34.260: INFO: Got endpoints: latency-svc-njsj2 [730.739422ms] Apr 11 14:21:34.296: INFO: Created: latency-svc-n7dxv Apr 11 14:21:34.308: INFO: Got endpoints: latency-svc-n7dxv [736.497599ms] Apr 11 14:21:34.359: INFO: Created: latency-svc-zc2nb Apr 11 14:21:34.362: INFO: Got endpoints: latency-svc-zc2nb [754.350432ms] Apr 11 14:21:34.389: INFO: Created: latency-svc-bkd87 Apr 11 14:21:34.404: INFO: Got endpoints: latency-svc-bkd87 [742.118651ms] Apr 11 14:21:34.431: INFO: Created: latency-svc-svcnk Apr 11 14:21:34.453: INFO: Got endpoints: latency-svc-svcnk [754.919148ms] Apr 11 14:21:34.509: INFO: Created: latency-svc-7hl6q Apr 11 14:21:34.511: INFO: Got endpoints: latency-svc-7hl6q [781.855165ms] Apr 11 14:21:34.541: INFO: Created: latency-svc-9fg26 Apr 11 14:21:34.561: INFO: Got endpoints: latency-svc-9fg26 [778.025048ms] Apr 11 14:21:34.583: INFO: Created: latency-svc-9hvgt Apr 11 14:21:34.598: INFO: Got endpoints: latency-svc-9hvgt [784.456701ms] Apr 11 14:21:34.652: INFO: Created: latency-svc-87zsc Apr 11 14:21:34.683: INFO: Got endpoints: latency-svc-87zsc [813.985214ms] Apr 11 14:21:34.722: 
INFO: Created: latency-svc-zpr7x Apr 11 14:21:34.742: INFO: Got endpoints: latency-svc-zpr7x [778.918959ms] Apr 11 14:21:34.801: INFO: Created: latency-svc-kljln Apr 11 14:21:34.814: INFO: Got endpoints: latency-svc-kljln [826.899096ms] Apr 11 14:21:34.839: INFO: Created: latency-svc-bcqdm Apr 11 14:21:34.850: INFO: Got endpoints: latency-svc-bcqdm [825.802505ms] Apr 11 14:21:34.869: INFO: Created: latency-svc-vmppq Apr 11 14:21:34.881: INFO: Got endpoints: latency-svc-vmppq [802.316002ms] Apr 11 14:21:34.899: INFO: Created: latency-svc-tz8tq Apr 11 14:21:34.945: INFO: Got endpoints: latency-svc-tz8tq [805.701721ms] Apr 11 14:21:34.973: INFO: Created: latency-svc-8k6rq Apr 11 14:21:34.984: INFO: Got endpoints: latency-svc-8k6rq [762.999272ms] Apr 11 14:21:35.003: INFO: Created: latency-svc-2dbxr Apr 11 14:21:35.027: INFO: Got endpoints: latency-svc-2dbxr [766.711812ms] Apr 11 14:21:35.089: INFO: Created: latency-svc-wlhh5 Apr 11 14:21:35.092: INFO: Got endpoints: latency-svc-wlhh5 [784.072052ms] Apr 11 14:21:35.127: INFO: Created: latency-svc-7thsd Apr 11 14:21:35.141: INFO: Got endpoints: latency-svc-7thsd [778.624027ms] Apr 11 14:21:35.157: INFO: Created: latency-svc-4nzgf Apr 11 14:21:35.183: INFO: Got endpoints: latency-svc-4nzgf [778.170495ms] Apr 11 14:21:35.239: INFO: Created: latency-svc-hg4pn Apr 11 14:21:35.243: INFO: Got endpoints: latency-svc-hg4pn [790.138211ms] Apr 11 14:21:35.274: INFO: Created: latency-svc-lprvf Apr 11 14:21:35.286: INFO: Got endpoints: latency-svc-lprvf [774.590471ms] Apr 11 14:21:35.314: INFO: Created: latency-svc-cpktm Apr 11 14:21:35.328: INFO: Got endpoints: latency-svc-cpktm [766.363129ms] Apr 11 14:21:35.379: INFO: Created: latency-svc-m5lj5 Apr 11 14:21:35.400: INFO: Got endpoints: latency-svc-m5lj5 [802.391091ms] Apr 11 14:21:35.423: INFO: Created: latency-svc-9smm5 Apr 11 14:21:35.436: INFO: Got endpoints: latency-svc-9smm5 [753.195622ms] Apr 11 14:21:35.465: INFO: Created: latency-svc-kxd77 Apr 11 14:21:35.526: INFO: Got 
endpoints: latency-svc-kxd77 [783.449301ms] Apr 11 14:21:35.528: INFO: Created: latency-svc-jj5pn Apr 11 14:21:35.539: INFO: Got endpoints: latency-svc-jj5pn [724.43876ms] Apr 11 14:21:35.571: INFO: Created: latency-svc-k9qp4 Apr 11 14:21:35.587: INFO: Got endpoints: latency-svc-k9qp4 [737.04644ms] Apr 11 14:21:35.608: INFO: Created: latency-svc-nnb2n Apr 11 14:21:35.624: INFO: Got endpoints: latency-svc-nnb2n [742.404228ms] Apr 11 14:21:35.707: INFO: Created: latency-svc-mzxmj Apr 11 14:21:35.717: INFO: Got endpoints: latency-svc-mzxmj [771.451501ms] Apr 11 14:21:35.751: INFO: Created: latency-svc-28z9n Apr 11 14:21:35.788: INFO: Got endpoints: latency-svc-28z9n [804.733627ms] Apr 11 14:21:35.862: INFO: Created: latency-svc-8wpr2 Apr 11 14:21:35.865: INFO: Got endpoints: latency-svc-8wpr2 [838.624487ms] Apr 11 14:21:35.922: INFO: Created: latency-svc-h4hnv Apr 11 14:21:35.932: INFO: Got endpoints: latency-svc-h4hnv [840.433331ms] Apr 11 14:21:35.950: INFO: Created: latency-svc-tt55k Apr 11 14:21:35.993: INFO: Got endpoints: latency-svc-tt55k [852.200752ms] Apr 11 14:21:36.005: INFO: Created: latency-svc-xwbcs Apr 11 14:21:36.017: INFO: Got endpoints: latency-svc-xwbcs [834.541524ms] Apr 11 14:21:36.035: INFO: Created: latency-svc-67rf4 Apr 11 14:21:36.059: INFO: Got endpoints: latency-svc-67rf4 [815.399498ms] Apr 11 14:21:36.088: INFO: Created: latency-svc-dkm6c Apr 11 14:21:36.167: INFO: Got endpoints: latency-svc-dkm6c [880.820027ms] Apr 11 14:21:36.168: INFO: Created: latency-svc-n6kvt Apr 11 14:21:36.174: INFO: Got endpoints: latency-svc-n6kvt [846.064612ms] Apr 11 14:21:36.210: INFO: Created: latency-svc-4jdtd Apr 11 14:21:36.222: INFO: Got endpoints: latency-svc-4jdtd [822.129235ms] Apr 11 14:21:36.245: INFO: Created: latency-svc-p7qj4 Apr 11 14:21:36.259: INFO: Got endpoints: latency-svc-p7qj4 [822.445464ms] Apr 11 14:21:36.299: INFO: Created: latency-svc-2kzjd Apr 11 14:21:36.302: INFO: Got endpoints: latency-svc-2kzjd [775.880381ms] Apr 11 14:21:36.328: 
INFO: Created: latency-svc-8rzgj Apr 11 14:21:36.337: INFO: Got endpoints: latency-svc-8rzgj [798.274817ms] Apr 11 14:21:36.373: INFO: Created: latency-svc-glrnj Apr 11 14:21:36.395: INFO: Got endpoints: latency-svc-glrnj [807.498377ms] Apr 11 14:21:36.444: INFO: Created: latency-svc-p5fn7 Apr 11 14:21:36.452: INFO: Got endpoints: latency-svc-p5fn7 [114.507677ms] Apr 11 14:21:36.473: INFO: Created: latency-svc-fl4sh Apr 11 14:21:36.488: INFO: Got endpoints: latency-svc-fl4sh [864.51962ms] Apr 11 14:21:36.514: INFO: Created: latency-svc-kcw4m Apr 11 14:21:36.586: INFO: Got endpoints: latency-svc-kcw4m [869.253223ms] Apr 11 14:21:36.611: INFO: Created: latency-svc-l74n4 Apr 11 14:21:36.627: INFO: Got endpoints: latency-svc-l74n4 [838.218623ms] Apr 11 14:21:36.648: INFO: Created: latency-svc-qqrqk Apr 11 14:21:36.663: INFO: Got endpoints: latency-svc-qqrqk [797.492808ms] Apr 11 14:21:36.736: INFO: Created: latency-svc-bxqvh Apr 11 14:21:36.739: INFO: Got endpoints: latency-svc-bxqvh [806.788071ms] Apr 11 14:21:36.791: INFO: Created: latency-svc-l79kh Apr 11 14:21:36.802: INFO: Got endpoints: latency-svc-l79kh [808.339058ms] Apr 11 14:21:36.821: INFO: Created: latency-svc-5787t Apr 11 14:21:36.832: INFO: Got endpoints: latency-svc-5787t [814.399576ms] Apr 11 14:21:36.879: INFO: Created: latency-svc-4446p Apr 11 14:21:36.892: INFO: Got endpoints: latency-svc-4446p [833.40293ms] Apr 11 14:21:36.909: INFO: Created: latency-svc-gj4zz Apr 11 14:21:36.924: INFO: Got endpoints: latency-svc-gj4zz [756.938477ms] Apr 11 14:21:36.939: INFO: Created: latency-svc-dlfdj Apr 11 14:21:36.953: INFO: Got endpoints: latency-svc-dlfdj [778.843931ms] Apr 11 14:21:37.024: INFO: Created: latency-svc-mwxvz Apr 11 14:21:37.026: INFO: Got endpoints: latency-svc-mwxvz [803.524724ms] Apr 11 14:21:37.048: INFO: Created: latency-svc-hlgjf Apr 11 14:21:37.061: INFO: Got endpoints: latency-svc-hlgjf [802.419669ms] Apr 11 14:21:37.079: INFO: Created: latency-svc-4jtvn Apr 11 14:21:37.092: INFO: Got 
endpoints: latency-svc-4jtvn [790.097642ms] Apr 11 14:21:37.114: INFO: Created: latency-svc-x57c8 Apr 11 14:21:37.122: INFO: Got endpoints: latency-svc-x57c8 [727.44904ms] Apr 11 14:21:37.173: INFO: Created: latency-svc-c4wx6 Apr 11 14:21:37.176: INFO: Got endpoints: latency-svc-c4wx6 [723.802916ms] Apr 11 14:21:37.199: INFO: Created: latency-svc-t6slb Apr 11 14:21:37.213: INFO: Got endpoints: latency-svc-t6slb [724.874025ms] Apr 11 14:21:37.229: INFO: Created: latency-svc-v9qkl Apr 11 14:21:37.243: INFO: Got endpoints: latency-svc-v9qkl [657.168445ms] Apr 11 14:21:37.259: INFO: Created: latency-svc-m72rr Apr 11 14:21:37.322: INFO: Got endpoints: latency-svc-m72rr [695.552529ms] Apr 11 14:21:37.324: INFO: Created: latency-svc-k54j2 Apr 11 14:21:37.334: INFO: Got endpoints: latency-svc-k54j2 [671.463652ms] Apr 11 14:21:37.359: INFO: Created: latency-svc-tl5j4 Apr 11 14:21:37.370: INFO: Got endpoints: latency-svc-tl5j4 [630.51135ms] Apr 11 14:21:37.391: INFO: Created: latency-svc-4n9pf Apr 11 14:21:37.406: INFO: Got endpoints: latency-svc-4n9pf [604.620762ms] Apr 11 14:21:37.458: INFO: Created: latency-svc-v897l Apr 11 14:21:37.460: INFO: Got endpoints: latency-svc-v897l [628.635191ms] Apr 11 14:21:37.487: INFO: Created: latency-svc-g2hfn Apr 11 14:21:37.503: INFO: Got endpoints: latency-svc-g2hfn [610.932945ms] Apr 11 14:21:37.521: INFO: Created: latency-svc-j8sd4 Apr 11 14:21:37.534: INFO: Got endpoints: latency-svc-j8sd4 [610.081738ms] Apr 11 14:21:37.592: INFO: Created: latency-svc-9t7k2 Apr 11 14:21:37.600: INFO: Got endpoints: latency-svc-9t7k2 [647.36565ms] Apr 11 14:21:37.631: INFO: Created: latency-svc-ml8m8 Apr 11 14:21:37.642: INFO: Got endpoints: latency-svc-ml8m8 [615.638357ms] Apr 11 14:21:37.661: INFO: Created: latency-svc-8c898 Apr 11 14:21:37.672: INFO: Got endpoints: latency-svc-8c898 [610.712497ms] Apr 11 14:21:37.736: INFO: Created: latency-svc-t7jhv Apr 11 14:21:37.739: INFO: Got endpoints: latency-svc-t7jhv [646.934889ms] Apr 11 14:21:37.779: 
INFO: Created: latency-svc-f4f7d Apr 11 14:21:37.793: INFO: Got endpoints: latency-svc-f4f7d [670.044098ms] Apr 11 14:21:37.810: INFO: Created: latency-svc-2g6hz Apr 11 14:21:37.823: INFO: Got endpoints: latency-svc-2g6hz [647.639135ms] Apr 11 14:21:37.874: INFO: Created: latency-svc-lrmf9 Apr 11 14:21:37.884: INFO: Got endpoints: latency-svc-lrmf9 [670.76076ms] Apr 11 14:21:37.918: INFO: Created: latency-svc-mfjz6 Apr 11 14:21:37.938: INFO: Got endpoints: latency-svc-mfjz6 [694.807109ms] Apr 11 14:21:37.972: INFO: Created: latency-svc-f7sfj Apr 11 14:21:38.017: INFO: Got endpoints: latency-svc-f7sfj [695.077886ms] Apr 11 14:21:38.026: INFO: Created: latency-svc-gxmp6 Apr 11 14:21:38.040: INFO: Got endpoints: latency-svc-gxmp6 [705.40986ms] Apr 11 14:21:38.058: INFO: Created: latency-svc-5457d Apr 11 14:21:38.080: INFO: Got endpoints: latency-svc-5457d [710.619865ms] Apr 11 14:21:38.105: INFO: Created: latency-svc-vkwkj Apr 11 14:21:38.173: INFO: Got endpoints: latency-svc-vkwkj [766.420409ms] Apr 11 14:21:38.178: INFO: Created: latency-svc-pb5jg Apr 11 14:21:38.186: INFO: Got endpoints: latency-svc-pb5jg [726.016481ms] Apr 11 14:21:38.207: INFO: Created: latency-svc-r52hv Apr 11 14:21:38.221: INFO: Got endpoints: latency-svc-r52hv [718.083353ms] Apr 11 14:21:38.243: INFO: Created: latency-svc-8nxhj Apr 11 14:21:38.256: INFO: Got endpoints: latency-svc-8nxhj [722.531055ms] Apr 11 14:21:38.318: INFO: Created: latency-svc-9gj6t Apr 11 14:21:38.320: INFO: Got endpoints: latency-svc-9gj6t [719.734963ms] Apr 11 14:21:38.350: INFO: Created: latency-svc-bc48l Apr 11 14:21:38.365: INFO: Got endpoints: latency-svc-bc48l [723.584124ms] Apr 11 14:21:38.386: INFO: Created: latency-svc-h4wpn Apr 11 14:21:38.401: INFO: Got endpoints: latency-svc-h4wpn [728.678345ms] Apr 11 14:21:38.455: INFO: Created: latency-svc-5qhf9 Apr 11 14:21:38.482: INFO: Got endpoints: latency-svc-5qhf9 [743.459735ms] Apr 11 14:21:38.483: INFO: Created: latency-svc-nt6bp Apr 11 14:21:38.510: INFO: Got 
endpoints: latency-svc-nt6bp [717.014693ms] Apr 11 14:21:38.537: INFO: Created: latency-svc-r5rff Apr 11 14:21:38.552: INFO: Got endpoints: latency-svc-r5rff [728.230129ms] Apr 11 14:21:38.610: INFO: Created: latency-svc-lgnlm Apr 11 14:21:38.618: INFO: Got endpoints: latency-svc-lgnlm [733.852784ms] Apr 11 14:21:38.655: INFO: Created: latency-svc-kgd8g Apr 11 14:21:38.684: INFO: Got endpoints: latency-svc-kgd8g [746.471685ms] Apr 11 14:21:38.772: INFO: Created: latency-svc-rpjsw Apr 11 14:21:38.775: INFO: Got endpoints: latency-svc-rpjsw [757.456665ms] Apr 11 14:21:38.830: INFO: Created: latency-svc-pn9kn Apr 11 14:21:38.865: INFO: Got endpoints: latency-svc-pn9kn [825.309734ms] Apr 11 14:21:38.910: INFO: Created: latency-svc-d5xz8 Apr 11 14:21:38.913: INFO: Got endpoints: latency-svc-d5xz8 [832.289825ms] Apr 11 14:21:38.945: INFO: Created: latency-svc-l4lpp Apr 11 14:21:38.955: INFO: Got endpoints: latency-svc-l4lpp [782.531201ms] Apr 11 14:21:38.974: INFO: Created: latency-svc-6j5cd Apr 11 14:21:38.986: INFO: Got endpoints: latency-svc-6j5cd [799.550017ms] Apr 11 14:21:39.003: INFO: Created: latency-svc-p2d24 Apr 11 14:21:39.059: INFO: Got endpoints: latency-svc-p2d24 [837.686077ms] Apr 11 14:21:39.069: INFO: Created: latency-svc-h9xrk Apr 11 14:21:39.082: INFO: Got endpoints: latency-svc-h9xrk [826.032667ms] Apr 11 14:21:39.105: INFO: Created: latency-svc-ctlqh Apr 11 14:21:39.118: INFO: Got endpoints: latency-svc-ctlqh [798.332778ms] Apr 11 14:21:39.137: INFO: Created: latency-svc-vw74q Apr 11 14:21:39.179: INFO: Got endpoints: latency-svc-vw74q [813.267728ms] Apr 11 14:21:39.219: INFO: Created: latency-svc-ns5f6 Apr 11 14:21:39.233: INFO: Got endpoints: latency-svc-ns5f6 [832.311183ms] Apr 11 14:21:39.255: INFO: Created: latency-svc-bxs42 Apr 11 14:21:39.328: INFO: Got endpoints: latency-svc-bxs42 [845.809944ms] Apr 11 14:21:39.330: INFO: Created: latency-svc-vnzn6 Apr 11 14:21:39.336: INFO: Got endpoints: latency-svc-vnzn6 [825.926011ms] Apr 11 14:21:39.365: 
INFO: Created: latency-svc-xs74p Apr 11 14:21:39.378: INFO: Got endpoints: latency-svc-xs74p [826.16808ms] Apr 11 14:21:39.400: INFO: Created: latency-svc-9bhm7 Apr 11 14:21:39.466: INFO: Got endpoints: latency-svc-9bhm7 [848.184426ms] Apr 11 14:21:39.478: INFO: Created: latency-svc-hgms9 Apr 11 14:21:39.493: INFO: Got endpoints: latency-svc-hgms9 [808.47887ms] Apr 11 14:21:39.527: INFO: Created: latency-svc-sb8jt Apr 11 14:21:39.541: INFO: Got endpoints: latency-svc-sb8jt [766.257218ms] Apr 11 14:21:39.563: INFO: Created: latency-svc-sgv7j Apr 11 14:21:39.604: INFO: Got endpoints: latency-svc-sgv7j [738.824328ms] Apr 11 14:21:39.617: INFO: Created: latency-svc-b547j Apr 11 14:21:39.632: INFO: Got endpoints: latency-svc-b547j [718.86241ms] Apr 11 14:21:39.658: INFO: Created: latency-svc-qgww2 Apr 11 14:21:39.668: INFO: Got endpoints: latency-svc-qgww2 [712.321164ms] Apr 11 14:21:39.688: INFO: Created: latency-svc-88l6l Apr 11 14:21:39.699: INFO: Got endpoints: latency-svc-88l6l [712.539899ms] Apr 11 14:21:39.754: INFO: Created: latency-svc-9bf96 Apr 11 14:21:39.783: INFO: Got endpoints: latency-svc-9bf96 [723.738991ms] Apr 11 14:21:39.783: INFO: Created: latency-svc-zkqrf Apr 11 14:21:39.795: INFO: Got endpoints: latency-svc-zkqrf [712.757463ms] Apr 11 14:21:39.819: INFO: Created: latency-svc-6n75w Apr 11 14:21:39.831: INFO: Got endpoints: latency-svc-6n75w [712.593163ms] Apr 11 14:21:39.849: INFO: Created: latency-svc-9hkst Apr 11 14:21:39.915: INFO: Got endpoints: latency-svc-9hkst [736.627ms] Apr 11 14:21:39.941: INFO: Created: latency-svc-nnh79 Apr 11 14:21:39.958: INFO: Got endpoints: latency-svc-nnh79 [724.758669ms] Apr 11 14:21:39.975: INFO: Created: latency-svc-tmlpg Apr 11 14:21:39.988: INFO: Got endpoints: latency-svc-tmlpg [659.762003ms] Apr 11 14:21:40.005: INFO: Created: latency-svc-2nd6p Apr 11 14:21:40.041: INFO: Got endpoints: latency-svc-2nd6p [705.408255ms] Apr 11 14:21:40.053: INFO: Created: latency-svc-58bcm Apr 11 14:21:40.067: INFO: Got 
endpoints: latency-svc-58bcm [688.891131ms] Apr 11 14:21:40.085: INFO: Created: latency-svc-86sgc Apr 11 14:21:40.097: INFO: Got endpoints: latency-svc-86sgc [631.052668ms] Apr 11 14:21:40.116: INFO: Created: latency-svc-zjdhf Apr 11 14:21:40.140: INFO: Got endpoints: latency-svc-zjdhf [646.712137ms] Apr 11 14:21:40.203: INFO: Created: latency-svc-28zfw Apr 11 14:21:40.245: INFO: Got endpoints: latency-svc-28zfw [703.945694ms] Apr 11 14:21:40.283: INFO: Created: latency-svc-tnwtv Apr 11 14:21:40.302: INFO: Got endpoints: latency-svc-tnwtv [697.842898ms] Apr 11 14:21:40.358: INFO: Created: latency-svc-smr4t Apr 11 14:21:40.362: INFO: Got endpoints: latency-svc-smr4t [729.929767ms] Apr 11 14:21:40.383: INFO: Created: latency-svc-hdz58 Apr 11 14:21:40.399: INFO: Got endpoints: latency-svc-hdz58 [731.449412ms] Apr 11 14:21:40.419: INFO: Created: latency-svc-79t4n Apr 11 14:21:40.435: INFO: Got endpoints: latency-svc-79t4n [736.335699ms] Apr 11 14:21:40.456: INFO: Created: latency-svc-5skl4 Apr 11 14:21:40.502: INFO: Got endpoints: latency-svc-5skl4 [719.169754ms] Apr 11 14:21:40.510: INFO: Created: latency-svc-bbvm5 Apr 11 14:21:40.526: INFO: Got endpoints: latency-svc-bbvm5 [730.766122ms] Apr 11 14:21:40.559: INFO: Created: latency-svc-8q5bd Apr 11 14:21:40.580: INFO: Got endpoints: latency-svc-8q5bd [748.901177ms] Apr 11 14:21:40.737: INFO: Created: latency-svc-xdcwf Apr 11 14:21:40.738: INFO: Got endpoints: latency-svc-xdcwf [822.683455ms] Apr 11 14:21:40.787: INFO: Created: latency-svc-6ckps Apr 11 14:21:40.802: INFO: Got endpoints: latency-svc-6ckps [844.296179ms] Apr 11 14:21:40.823: INFO: Created: latency-svc-m9bvl Apr 11 14:21:40.832: INFO: Got endpoints: latency-svc-m9bvl [844.232974ms] Apr 11 14:21:40.880: INFO: Created: latency-svc-7xx2d Apr 11 14:21:40.887: INFO: Got endpoints: latency-svc-7xx2d [845.574755ms] Apr 11 14:21:40.905: INFO: Created: latency-svc-h8tmk Apr 11 14:21:40.917: INFO: Got endpoints: latency-svc-h8tmk [850.238641ms] Apr 11 14:21:40.935: 
INFO: Created: latency-svc-5dzqw Apr 11 14:21:40.954: INFO: Got endpoints: latency-svc-5dzqw [857.01614ms] Apr 11 14:21:41.029: INFO: Created: latency-svc-vx27k Apr 11 14:21:41.032: INFO: Got endpoints: latency-svc-vx27k [892.189685ms] Apr 11 14:21:41.056: INFO: Created: latency-svc-7dx97 Apr 11 14:21:41.069: INFO: Got endpoints: latency-svc-7dx97 [823.695404ms] Apr 11 14:21:41.085: INFO: Created: latency-svc-xs8wl Apr 11 14:21:41.099: INFO: Got endpoints: latency-svc-xs8wl [796.990534ms] Apr 11 14:21:41.115: INFO: Created: latency-svc-6lcxf Apr 11 14:21:41.179: INFO: Got endpoints: latency-svc-6lcxf [816.878398ms] Apr 11 14:21:41.188: INFO: Created: latency-svc-jmnqt Apr 11 14:21:41.202: INFO: Got endpoints: latency-svc-jmnqt [802.718965ms] Apr 11 14:21:41.218: INFO: Created: latency-svc-6lrd9 Apr 11 14:21:41.232: INFO: Got endpoints: latency-svc-6lrd9 [796.849905ms] Apr 11 14:21:41.249: INFO: Created: latency-svc-bpmh4 Apr 11 14:21:41.263: INFO: Got endpoints: latency-svc-bpmh4 [760.607439ms] Apr 11 14:21:41.317: INFO: Created: latency-svc-t8g7k Apr 11 14:21:41.366: INFO: Got endpoints: latency-svc-t8g7k [840.152797ms] Apr 11 14:21:41.366: INFO: Created: latency-svc-mqd8t Apr 11 14:21:41.373: INFO: Got endpoints: latency-svc-mqd8t [793.065567ms] Apr 11 14:21:41.392: INFO: Created: latency-svc-9v97d Apr 11 14:21:41.484: INFO: Got endpoints: latency-svc-9v97d [746.060726ms] Apr 11 14:21:41.486: INFO: Created: latency-svc-7fghq Apr 11 14:21:41.491: INFO: Got endpoints: latency-svc-7fghq [688.645044ms] Apr 11 14:21:41.523: INFO: Created: latency-svc-gxwm8 Apr 11 14:21:41.534: INFO: Got endpoints: latency-svc-gxwm8 [701.256568ms] Apr 11 14:21:41.554: INFO: Created: latency-svc-l72td Apr 11 14:21:41.565: INFO: Got endpoints: latency-svc-l72td [678.1825ms] Apr 11 14:21:41.622: INFO: Created: latency-svc-j5z5k Apr 11 14:21:41.624: INFO: Got endpoints: latency-svc-j5z5k [707.001624ms] Apr 11 14:21:41.650: INFO: Created: latency-svc-4psf9 Apr 11 14:21:41.667: INFO: Got 
endpoints: latency-svc-4psf9 [712.205108ms] Apr 11 14:21:41.685: INFO: Created: latency-svc-jj8qs Apr 11 14:21:41.697: INFO: Got endpoints: latency-svc-jj8qs [665.062476ms] Apr 11 14:21:41.715: INFO: Created: latency-svc-m9jgc Apr 11 14:21:41.766: INFO: Got endpoints: latency-svc-m9jgc [696.650216ms] Apr 11 14:21:41.788: INFO: Created: latency-svc-rjkzx Apr 11 14:21:41.800: INFO: Got endpoints: latency-svc-rjkzx [700.504879ms] Apr 11 14:21:41.819: INFO: Created: latency-svc-v4lwm Apr 11 14:21:41.845: INFO: Got endpoints: latency-svc-v4lwm [665.774395ms] Apr 11 14:21:41.898: INFO: Created: latency-svc-kpwv4 Apr 11 14:21:41.919: INFO: Got endpoints: latency-svc-kpwv4 [716.795382ms] Apr 11 14:21:41.919: INFO: Created: latency-svc-9pv9d Apr 11 14:21:41.943: INFO: Got endpoints: latency-svc-9pv9d [710.857931ms] Apr 11 14:21:41.943: INFO: Latencies: [60.182275ms 110.755022ms 114.507677ms 158.987531ms 204.574113ms 276.555425ms 316.08048ms 346.607638ms 434.104033ms 443.637239ms 485.394869ms 571.793542ms 603.257213ms 604.620762ms 610.081738ms 610.712497ms 610.932945ms 615.638357ms 617.680448ms 623.172477ms 624.397594ms 625.939395ms 628.635191ms 630.51135ms 631.052668ms 634.855479ms 640.114025ms 641.010302ms 646.712137ms 646.934889ms 647.36565ms 647.639135ms 652.775464ms 657.168445ms 658.220886ms 659.762003ms 664.118403ms 665.062476ms 665.774395ms 670.044098ms 670.149928ms 670.76076ms 671.463652ms 678.1825ms 681.867829ms 682.379982ms 685.502497ms 688.645044ms 688.891131ms 694.807109ms 695.077886ms 695.552529ms 696.650216ms 697.842898ms 700.504879ms 701.256568ms 703.945694ms 705.408255ms 705.40986ms 706.553103ms 707.001624ms 710.619865ms 710.857931ms 712.205108ms 712.321164ms 712.539899ms 712.593163ms 712.757463ms 715.626752ms 716.795382ms 717.014693ms 718.083353ms 718.86241ms 719.169754ms 719.734963ms 722.531055ms 723.584124ms 723.738991ms 723.802916ms 724.43876ms 724.758669ms 724.874025ms 726.016481ms 727.44904ms 728.230129ms 728.678345ms 729.929767ms 730.739422ms 
730.766122ms 731.449412ms 733.852784ms 736.335699ms 736.497599ms 736.627ms 737.04644ms 738.824328ms 742.118651ms 742.404228ms 743.459735ms 744.114592ms 746.060726ms 746.471685ms 748.901177ms 753.195622ms 754.350432ms 754.919148ms 756.938477ms 757.456665ms 760.607439ms 762.999272ms 766.257218ms 766.363129ms 766.420409ms 766.711812ms 771.451501ms 774.590471ms 774.611794ms 775.880381ms 778.025048ms 778.170495ms 778.624027ms 778.843931ms 778.918959ms 780.283346ms 781.855165ms 782.471641ms 782.531201ms 783.449301ms 784.072052ms 784.456701ms 790.097642ms 790.138211ms 793.065567ms 796.849905ms 796.990534ms 797.492808ms 798.274817ms 798.332778ms 799.550017ms 802.316002ms 802.391091ms 802.419669ms 802.718965ms 803.284642ms 803.524724ms 804.733627ms 805.701721ms 806.788071ms 807.498377ms 808.339058ms 808.47887ms 813.267728ms 813.904348ms 813.985214ms 814.399576ms 815.214516ms 815.399498ms 816.878398ms 822.129235ms 822.445464ms 822.683455ms 823.695404ms 825.309734ms 825.802505ms 825.926011ms 826.032667ms 826.16808ms 826.899096ms 832.289825ms 832.311183ms 833.40293ms 834.541524ms 836.266803ms 837.686077ms 838.218623ms 838.624487ms 840.152797ms 840.433331ms 844.232974ms 844.296179ms 845.574755ms 845.809944ms 846.064612ms 848.184426ms 850.238641ms 852.200752ms 853.160543ms 857.01614ms 859.52969ms 863.31157ms 864.51962ms 865.348052ms 865.65814ms 865.832547ms 869.253223ms 870.647915ms 880.820027ms 888.019619ms 892.189685ms 904.378395ms]
Apr 11 14:21:41.943: INFO: 50 %ile: 746.060726ms
Apr 11 14:21:41.943: INFO: 90 %ile: 845.574755ms
Apr 11 14:21:41.943: INFO: 99 %ile: 892.189685ms
Apr 11 14:21:41.943: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:21:41.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-597" for this suite.
Apr 11 14:22:03.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:22:04.034: INFO: namespace svc-latency-597 deletion completed in 22.076965321s
• [SLOW TEST:35.397 seconds]
[sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:22:04.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 11 14:22:08.138: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:22:08.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1104" for this suite.
Apr 11 14:22:14.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:22:14.279: INFO: namespace container-runtime-1104 deletion completed in 6.120583248s
• [SLOW TEST:10.244 seconds]
[k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:22:14.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 11 14:22:18.911: INFO: Successfully updated pod "pod-update-f84cc3a8-281a-4909-939a-044bc85a3eec"
STEP: verifying the updated pod is in kubernetes
Apr 11 14:22:18.937: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:22:18.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1026" for this suite.
Apr 11 14:22:40.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:22:41.031: INFO: namespace pods-1026 deletion completed in 22.089846851s
• [SLOW TEST:26.751 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:22:41.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Apr 11 14:22:41.090: INFO: Waiting up to 5m0s for pod "var-expansion-373fb9e9-6315-4518-aea5-98a60aec103c" in namespace "var-expansion-5797" to be "success or failure"
Apr 11 14:22:41.114: INFO: Pod "var-expansion-373fb9e9-6315-4518-aea5-98a60aec103c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.080748ms
Apr 11 14:22:43.118: INFO: Pod "var-expansion-373fb9e9-6315-4518-aea5-98a60aec103c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028049087s
Apr 11 14:22:45.121: INFO: Pod "var-expansion-373fb9e9-6315-4518-aea5-98a60aec103c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031409752s
STEP: Saw pod success
Apr 11 14:22:45.121: INFO: Pod "var-expansion-373fb9e9-6315-4518-aea5-98a60aec103c" satisfied condition "success or failure"
Apr 11 14:22:45.124: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-373fb9e9-6315-4518-aea5-98a60aec103c container dapi-container:
STEP: delete the pod
Apr 11 14:22:45.156: INFO: Waiting for pod var-expansion-373fb9e9-6315-4518-aea5-98a60aec103c to disappear
Apr 11 14:22:45.191: INFO: Pod var-expansion-373fb9e9-6315-4518-aea5-98a60aec103c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:22:45.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5797" for this suite.
Apr 11 14:22:51.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:22:51.278: INFO: namespace var-expansion-5797 deletion completed in 6.083539865s
• [SLOW TEST:10.248 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:22:51.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-2128
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2128 to expose endpoints map[]
Apr 11 14:22:51.378: INFO: Get endpoints failed (9.288919ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Apr 11 14:22:52.382: INFO: successfully validated that service endpoint-test2 in namespace services-2128 exposes endpoints map[] (1.013426835s elapsed)
STEP: Creating pod pod1 in namespace services-2128
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2128 to expose endpoints map[pod1:[80]]
Apr 11 14:22:56.454: INFO: successfully validated that service endpoint-test2 in namespace services-2128 exposes endpoints map[pod1:[80]] (4.063737713s elapsed)
STEP: Creating pod pod2 in namespace services-2128
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2128 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 11 14:22:59.530: INFO: successfully validated that service endpoint-test2 in namespace services-2128 exposes endpoints map[pod1:[80] pod2:[80]] (3.071426511s elapsed)
STEP: Deleting pod pod1 in namespace services-2128
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2128 to expose endpoints map[pod2:[80]]
Apr 11 14:22:59.584: INFO: successfully validated that service endpoint-test2 in namespace services-2128 exposes endpoints map[pod2:[80]] (48.778371ms elapsed)
STEP: Deleting pod pod2 in namespace services-2128
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2128 to expose endpoints map[]
Apr 11 14:22:59.598: INFO: successfully validated that service endpoint-test2 in namespace services-2128 exposes endpoints map[] (9.211377ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:22:59.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2128" for this suite.
Apr 11 14:23:21.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:23:21.932: INFO: namespace services-2128 deletion completed in 22.158514219s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:30.654 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:23:21.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Apr 11 14:23:21.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2711 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Apr 11 14:23:27.785: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0411 14:23:27.698905 2618 log.go:172] (0xc00093c160) (0xc00099c6e0) Create stream\nI0411 14:23:27.698934 2618 log.go:172] (0xc00093c160) (0xc00099c6e0) Stream added, broadcasting: 1\nI0411 14:23:27.701250 2618 log.go:172] (0xc00093c160) Reply frame received for 1\nI0411 14:23:27.701301 2618 log.go:172] (0xc00093c160) (0xc00085a000) Create stream\nI0411 14:23:27.701319 2618 log.go:172] (0xc00093c160) (0xc00085a000) Stream added, broadcasting: 3\nI0411 14:23:27.702404 2618 log.go:172] (0xc00093c160) Reply frame received for 3\nI0411 14:23:27.702452 2618 log.go:172] (0xc00093c160) (0xc00085a0a0) Create stream\nI0411 14:23:27.702466 2618 log.go:172] (0xc00093c160) (0xc00085a0a0) Stream added, broadcasting: 5\nI0411 14:23:27.703490 2618 log.go:172] (0xc00093c160) Reply frame received for 5\nI0411 14:23:27.703541 2618 log.go:172] (0xc00093c160) (0xc00085a500) Create stream\nI0411 14:23:27.703557 2618 log.go:172] (0xc00093c160) (0xc00085a500) Stream added, broadcasting: 7\nI0411 14:23:27.704369 2618 log.go:172] (0xc00093c160) Reply frame received for 7\nI0411 14:23:27.704521 2618 log.go:172] (0xc00085a000) (3) Writing data frame\nI0411 14:23:27.704645 2618 log.go:172] (0xc00085a000) (3) Writing data frame\nI0411 14:23:27.705763 2618 log.go:172] (0xc00093c160) Data frame received for 5\nI0411 14:23:27.705782 2618 log.go:172] (0xc00085a0a0) (5) Data frame handling\nI0411 14:23:27.705795 2618 log.go:172] (0xc00085a0a0) (5) Data frame sent\nI0411 14:23:27.706477 2618 log.go:172] (0xc00093c160) Data frame received for 5\nI0411 14:23:27.706491 2618 log.go:172] (0xc00085a0a0) (5) Data frame handling\nI0411 14:23:27.706507 2618 log.go:172] (0xc00085a0a0) (5) Data frame sent\nI0411 14:23:27.743790 2618 log.go:172] (0xc00093c160) Data frame received for 5\nI0411 14:23:27.743828 2618 log.go:172] (0xc00085a0a0) (5) Data frame handling\nI0411 14:23:27.744301 2618 log.go:172] (0xc00093c160) Data frame received for 7\nI0411 14:23:27.744328 2618 log.go:172] (0xc00085a500) (7) Data frame handling\nI0411 14:23:27.744621 2618 log.go:172] (0xc00093c160) Data frame received for 1\nI0411 14:23:27.744650 2618 log.go:172] (0xc00099c6e0) (1) Data frame handling\nI0411 14:23:27.744668 2618 log.go:172] (0xc00099c6e0) (1) Data frame sent\nI0411 14:23:27.744689 2618 log.go:172] (0xc00093c160) (0xc00099c6e0) Stream removed, broadcasting: 1\nI0411 14:23:27.744714 2618 log.go:172] (0xc00093c160) (0xc00085a000) Stream removed, broadcasting: 3\nI0411 14:23:27.744852 2618 log.go:172] (0xc00093c160) Go away received\nI0411 14:23:27.745362 2618 log.go:172] (0xc00093c160) (0xc00099c6e0) Stream removed, broadcasting: 1\nI0411 14:23:27.745385 2618 log.go:172] (0xc00093c160) (0xc00085a000) Stream removed, broadcasting: 3\nI0411 14:23:27.745396 2618 log.go:172] (0xc00093c160) (0xc00085a0a0) Stream removed, broadcasting: 5\nI0411 14:23:27.745406 2618 log.go:172] (0xc00093c160) (0xc00085a500) Stream removed, broadcasting: 7\n"
Apr 11 14:23:27.785: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:23:29.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2711" for this suite.
Apr 11 14:23:35.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:23:35.885: INFO: namespace kubectl-2711 deletion completed in 6.08939442s
• [SLOW TEST:13.952 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:23:35.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-cab6a664-c74d-44a0-96e7-3baf58bb3aa1
STEP: Creating configMap with name cm-test-opt-upd-436fec0d-99d4-48b1-833b-24ed5a789908
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-cab6a664-c74d-44a0-96e7-3baf58bb3aa1
STEP: Updating configmap cm-test-opt-upd-436fec0d-99d4-48b1-833b-24ed5a789908
STEP: Creating configMap with name cm-test-opt-create-10cbe3f2-2540-42c3-adaa-e1745c2a6192
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:24:50.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8217" for this suite.
Apr 11 14:25:12.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:25:12.592: INFO: namespace configmap-8217 deletion completed in 22.100429965s
• [SLOW TEST:96.706 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:25:12.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-70c160f7-58cc-4926-a522-168059072e79
STEP: Creating a pod to test consume configMaps
Apr 11 14:25:12.659: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a359750-db36-45c6-90b0-ef3813e66abf" in namespace "projected-1178" to be "success or failure"
Apr 11 14:25:12.727: INFO: Pod "pod-projected-configmaps-1a359750-db36-45c6-90b0-ef3813e66abf": Phase="Pending", Reason="", readiness=false. Elapsed: 67.297544ms
Apr 11 14:25:14.731: INFO: Pod "pod-projected-configmaps-1a359750-db36-45c6-90b0-ef3813e66abf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071840033s
Apr 11 14:25:16.734: INFO: Pod "pod-projected-configmaps-1a359750-db36-45c6-90b0-ef3813e66abf": Phase="Running", Reason="", readiness=true. Elapsed: 4.075031209s
Apr 11 14:25:18.738: INFO: Pod "pod-projected-configmaps-1a359750-db36-45c6-90b0-ef3813e66abf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.078955359s
STEP: Saw pod success
Apr 11 14:25:18.738: INFO: Pod "pod-projected-configmaps-1a359750-db36-45c6-90b0-ef3813e66abf" satisfied condition "success or failure"
Apr 11 14:25:18.741: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-1a359750-db36-45c6-90b0-ef3813e66abf container projected-configmap-volume-test:
STEP: delete the pod
Apr 11 14:25:18.762: INFO: Waiting for pod pod-projected-configmaps-1a359750-db36-45c6-90b0-ef3813e66abf to disappear
Apr 11 14:25:18.778: INFO: Pod pod-projected-configmaps-1a359750-db36-45c6-90b0-ef3813e66abf no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:25:18.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1178" for this suite.
Apr 11 14:25:24.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:25:24.872: INFO: namespace projected-1178 deletion completed in 6.089817196s
• [SLOW TEST:12.279 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:25:24.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-6760
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6760
STEP: Deleting pre-stop pod
Apr 11 14:25:37.994: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:25:37.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6760" for this suite.
Apr 11 14:26:20.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:26:20.105: INFO: namespace prestop-6760 deletion completed in 42.078476622s
• [SLOW TEST:55.232 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:26:20.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 11 14:26:38.819: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:26:39.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7394" for this suite.
Apr 11 14:26:46.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:26:46.153: INFO: namespace container-runtime-7394 deletion completed in 6.205372918s
• [SLOW TEST:26.048 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:26:46.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0411 14:27:17.867652 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 11 14:27:17.867: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:27:17.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7524" for this suite.
Apr 11 14:27:23.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:27:23.950: INFO: namespace gc-7524 deletion completed in 6.080059307s
• [SLOW TEST:37.797 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:27:23.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Apr 11 14:27:24.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8016'
Apr 11 14:27:25.423: INFO: stderr: ""
Apr 11 14:27:25.423: INFO: stdout: "pod/pause created\n"
Apr 11 14:27:25.423: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Apr 11 14:27:25.423: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8016" to be "running and ready"
Apr 11 14:27:25.537: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 114.717158ms
Apr 11 14:27:28.288: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.865202556s
Apr 11 14:27:30.291: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.868393496s
Apr 11 14:27:32.948: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.525550487s
Apr 11 14:27:36.951: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 11.528758596s
Apr 11 14:27:39.034: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 13.611371442s
Apr 11 14:27:41.037: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.61415969s
Apr 11 14:27:43.040: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 17.617341127s
Apr 11 14:27:45.044: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 19.620922772s
Apr 11 14:27:47.047: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 21.624485748s
Apr 11 14:27:49.050: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 23.627437042s
Apr 11 14:27:51.052: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 25.629851605s
Apr 11 14:27:51.052: INFO: Pod "pause" satisfied condition "running and ready"
Apr 11 14:27:51.052: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Apr 11 14:27:51.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8016'
Apr 11 14:27:51.149: INFO: stderr: ""
Apr 11 14:27:51.149: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr 11 14:27:51.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8016'
Apr 11 14:27:51.233: INFO: stderr: ""
Apr 11 14:27:51.233: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 26s testing-label-value\n"
STEP: removing the label testing-label of a pod
Apr 11 14:27:51.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8016'
Apr 11 14:27:51.323: INFO: stderr: ""
Apr 11 14:27:51.323: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Apr 11 14:27:51.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8016'
Apr 11 14:27:51.416: INFO: stderr: ""
Apr 11 14:27:51.416: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 26s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Apr 11 14:27:51.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8016'
Apr 11 14:27:51.552: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 11 14:27:51.552: INFO: stdout: "pod \"pause\" force deleted\n"
Apr 11 14:27:51.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8016'
Apr 11 14:27:51.651: INFO: stderr: "No resources found.\n"
Apr 11 14:27:51.651: INFO: stdout: ""
Apr 11 14:27:51.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8016 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 11 14:27:51.752: INFO: stderr: ""
Apr 11 14:27:51.752: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:27:51.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8016" for this suite.
Apr 11 14:27:57.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:27:57.896: INFO: namespace kubectl-8016 deletion completed in 6.114810819s
• [SLOW TEST:33.945 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:27:57.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 11 14:27:57.943: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 11 14:27:59.997: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:28:00.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3395" for this suite.
Apr 11 14:28:06.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:28:06.163: INFO: namespace replication-controller-3395 deletion completed in 6.07503416s
• [SLOW TEST:8.267 seconds]
[sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:28:06.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Apr 11 14:28:07.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5641'
Apr 11 14:28:08.380: INFO: stderr: ""
Apr 11 14:28:08.380: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Apr 11 14:28:09.385: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 14:28:09.385: INFO: Found 0 / 1
Apr 11 14:28:10.385: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 14:28:10.385: INFO: Found 0 / 1
Apr 11 14:28:11.384: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 14:28:11.384: INFO: Found 0 / 1
Apr 11 14:28:12.400: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 14:28:12.400: INFO: Found 0 / 1
Apr 11 14:28:13.383: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 14:28:13.383: INFO: Found 0 / 1
Apr 11 14:28:14.544: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 14:28:14.544: INFO: Found 0 / 1
Apr 11 14:28:15.384: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 14:28:15.384: INFO: Found 0 / 1
Apr 11 14:28:16.384: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 14:28:16.385: INFO: Found 1 / 1
Apr 11 14:28:16.385: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 11 14:28:16.388: INFO: Selector matched 1 pods for map[app:redis]
Apr 11 14:28:16.388: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Apr 11 14:28:16.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qwdgh redis-master --namespace=kubectl-5641'
Apr 11 14:28:16.497: INFO: stderr: ""
Apr 11 14:28:16.497: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 Apr 14:28:15.975 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Apr 14:28:15.975 # Server started, Redis version 3.2.12\n1:M 11 Apr 14:28:15.975 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Apr 14:28:15.975 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Apr 11 14:28:16.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qwdgh redis-master --namespace=kubectl-5641 --tail=1'
Apr 11 14:28:16.610: INFO: stderr: ""
Apr 11 14:28:16.610: INFO: stdout: "1:M 11 Apr 14:28:15.975 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Apr 11 14:28:16.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qwdgh redis-master --namespace=kubectl-5641 --limit-bytes=1'
Apr 11 14:28:16.716: INFO: stderr: ""
Apr 11 14:28:16.716: INFO: stdout: " "
STEP: exposing timestamps
Apr 11 14:28:16.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qwdgh redis-master --namespace=kubectl-5641 --tail=1 --timestamps'
Apr 11 14:28:16.824: INFO: stderr: ""
Apr 11 14:28:16.824: INFO: stdout: "2020-04-11T14:28:15.975451169Z 1:M 11 Apr 14:28:15.975 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Apr 11 14:28:19.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qwdgh redis-master --namespace=kubectl-5641 --since=1s'
Apr 11 14:28:19.420: INFO: stderr: ""
Apr 11 14:28:19.420: INFO: stdout: ""
Apr 11 14:28:19.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qwdgh redis-master --namespace=kubectl-5641 --since=24h'
Apr 11 14:28:19.514: INFO: stderr: ""
Apr 11 14:28:19.515: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 Apr 14:28:15.975 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Apr 14:28:15.975 # Server started, Redis version 3.2.12\n1:M 11 Apr 14:28:15.975 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Apr 14:28:15.975 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Apr 11 14:28:19.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5641'
Apr 11 14:28:19.615: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 11 14:28:19.615: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Apr 11 14:28:19.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-5641'
Apr 11 14:28:19.717: INFO: stderr: "No resources found.\n"
Apr 11 14:28:19.717: INFO: stdout: ""
Apr 11 14:28:19.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-5641 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 11 14:28:19.816: INFO: stderr: ""
Apr 11 14:28:19.816: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:28:19.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5641" for this suite.
Apr 11 14:28:43.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:28:43.921: INFO: namespace kubectl-5641 deletion completed in 24.072120363s
• [SLOW TEST:37.757 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:28:43.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-47fp
STEP: Creating a pod to test atomic-volume-subpath
Apr 11 14:28:43.980: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-47fp" in namespace "subpath-7163" to be "success or failure"
Apr 11 14:28:43.985: INFO: Pod "pod-subpath-test-configmap-47fp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183232ms
Apr 11 14:28:45.988: INFO: Pod "pod-subpath-test-configmap-47fp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007987213s
Apr 11 14:28:47.991: INFO: Pod "pod-subpath-test-configmap-47fp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010698337s
Apr 11 14:28:49.995: INFO: Pod "pod-subpath-test-configmap-47fp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014316262s
Apr 11 14:28:51.998: INFO: Pod "pod-subpath-test-configmap-47fp": Phase="Running", Reason="", readiness=true. Elapsed: 8.017907931s
Apr 11 14:28:54.002: INFO: Pod "pod-subpath-test-configmap-47fp": Phase="Running", Reason="", readiness=true. Elapsed: 10.02112013s
Apr 11 14:28:56.005: INFO: Pod "pod-subpath-test-configmap-47fp": Phase="Running", Reason="", readiness=true. Elapsed: 12.024339926s
Apr 11 14:28:58.008: INFO: Pod "pod-subpath-test-configmap-47fp": Phase="Running", Reason="", readiness=true. Elapsed: 14.027459193s
Apr 11 14:29:00.012: INFO: Pod "pod-subpath-test-configmap-47fp": Phase="Running", Reason="", readiness=true. Elapsed: 16.031087867s
Apr 11 14:29:02.015: INFO: Pod "pod-subpath-test-configmap-47fp": Phase="Running", Reason="", readiness=true. Elapsed: 18.034536357s
Apr 11 14:29:04.019: INFO: Pod "pod-subpath-test-configmap-47fp": Phase="Running", Reason="", readiness=true. Elapsed: 20.038210375s
Apr 11 14:29:06.022: INFO: Pod "pod-subpath-test-configmap-47fp": Phase="Running", Reason="", readiness=true. Elapsed: 22.041443758s
Apr 11 14:29:08.025: INFO: Pod "pod-subpath-test-configmap-47fp": Phase="Running", Reason="", readiness=true. Elapsed: 24.044612148s
Apr 11 14:29:10.028: INFO: Pod "pod-subpath-test-configmap-47fp": Phase="Running", Reason="", readiness=true. Elapsed: 26.047847927s
Apr 11 14:29:12.032: INFO: Pod "pod-subpath-test-configmap-47fp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.051291535s
STEP: Saw pod success
Apr 11 14:29:12.032: INFO: Pod "pod-subpath-test-configmap-47fp" satisfied condition "success or failure"
Apr 11 14:29:12.034: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-47fp container test-container-subpath-configmap-47fp:
STEP: delete the pod
Apr 11 14:29:12.051: INFO: Waiting for pod pod-subpath-test-configmap-47fp to disappear
Apr 11 14:29:12.076: INFO: Pod pod-subpath-test-configmap-47fp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-47fp
Apr 11 14:29:12.076: INFO: Deleting pod "pod-subpath-test-configmap-47fp" in namespace "subpath-7163"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:29:12.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7163" for this suite.
Apr 11 14:29:18.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:29:18.203: INFO: namespace subpath-7163 deletion completed in 6.074622033s
• [SLOW TEST:34.281 seconds]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:29:18.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 11 14:29:22.774: INFO: Successfully updated pod "labelsupdatef697ed60-d654-42d6-ac60-f1caa8fc6cf6"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:29:26.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5091" for this suite.
Apr 11 14:30:02.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:30:02.911: INFO: namespace downward-api-5091 deletion completed in 36.108699337s
• [SLOW TEST:44.708 seconds]
[sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:30:02.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3184
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-3184
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3184
Apr 11 14:30:02.994: INFO: Found 0 stateful pods, waiting for 1
Apr 11 14:30:13.060: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Apr 11 14:30:23.162: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Apr 11 14:30:32.996: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Apr 11 14:30:32.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 11 14:30:33.255: INFO: stderr: "I0411 14:30:33.119897 3005 log.go:172] (0xc0009c8370) (0xc0005beaa0) Create stream\nI0411 14:30:33.119939 3005 log.go:172] (0xc0009c8370) (0xc0005beaa0) Stream added, broadcasting: 1\nI0411 14:30:33.121833 3005 log.go:172] (0xc0009c8370) Reply frame received for 1\nI0411 14:30:33.121862 3005 log.go:172] (0xc0009c8370) (0xc000836000) Create stream\nI0411 14:30:33.121872 3005 log.go:172] (0xc0009c8370) (0xc000836000) Stream added, broadcasting: 3\nI0411 14:30:33.122830 3005 log.go:172] (0xc0009c8370) Reply frame received for 3\nI0411 14:30:33.122871 3005 log.go:172] (0xc0009c8370) (0xc0008360a0) Create stream\nI0411 14:30:33.122888 3005 log.go:172] (0xc0009c8370) (0xc0008360a0) Stream added, broadcasting: 5\nI0411 14:30:33.123816 3005 log.go:172] (0xc0009c8370) Reply frame received for 5\nI0411 14:30:33.200938 3005 log.go:172] (0xc0009c8370) Data frame received for 5\nI0411 14:30:33.200960 3005 log.go:172] (0xc0008360a0) (5) Data frame handling\nI0411 14:30:33.200973 3005 log.go:172] (0xc0008360a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0411 14:30:33.250040 3005 log.go:172] (0xc0009c8370) Data frame received for 3\nI0411 14:30:33.250061 3005 log.go:172] (0xc000836000) (3) Data frame handling\nI0411 14:30:33.250073 3005 log.go:172] (0xc000836000) (3) Data frame sent\nI0411 14:30:33.250083 3005 log.go:172] (0xc0009c8370) Data frame received for 3\nI0411 14:30:33.250089 3005 log.go:172] (0xc000836000) (3) Data frame handling\nI0411 14:30:33.250281 3005 log.go:172] (0xc0009c8370) Data frame received for 5\nI0411 14:30:33.250312 3005 log.go:172] (0xc0008360a0) (5) Data frame handling\nI0411 14:30:33.251551 3005 log.go:172] (0xc0009c8370) Data frame received for 1\nI0411 14:30:33.251569 3005 log.go:172] (0xc0005beaa0) (1) Data frame handling\nI0411 14:30:33.251587 3005 log.go:172] (0xc0005beaa0) (1) Data frame sent\nI0411 14:30:33.251603 3005 log.go:172] (0xc0009c8370) (0xc0005beaa0) Stream removed, broadcasting: 1\nI0411 14:30:33.251620 3005 log.go:172] (0xc0009c8370) Go away received\nI0411 14:30:33.251832 3005 log.go:172] (0xc0009c8370) (0xc0005beaa0) Stream removed, broadcasting: 1\nI0411 14:30:33.251844 3005 log.go:172] (0xc0009c8370) (0xc000836000) Stream removed, broadcasting: 3\nI0411 14:30:33.251851 3005 log.go:172] (0xc0009c8370) (0xc0008360a0) Stream removed, broadcasting: 5\n"
Apr 11 14:30:33.255: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 11 14:30:33.255: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Apr 11 14:30:33.258: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Apr 11 14:30:43.261: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 11 14:30:43.261: INFO: Waiting for statefulset status.replicas updated to 0
Apr 11 14:30:43.288: INFO: POD NODE PHASE GRACE CONDITIONS
Apr 11 14:30:43.288: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC }]
Apr 11 14:30:43.288: INFO:
Apr 11 14:30:43.288: INFO: StatefulSet ss has not reached scale 3, at 1
Apr 11 14:30:44.292: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.981370335s
Apr 11 14:30:45.341: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.977386432s
Apr 11 14:30:46.843: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.929092734s
Apr 11 14:30:48.199: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.426399593s
Apr 11 14:30:49.211: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.070713945s
Apr 11 14:30:50.221: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.058929245s
Apr 11 14:30:51.306: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.048682661s
Apr 11 14:30:52.636: INFO: Verifying statefulset ss doesn't scale past 3 for another 963.376857ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3184
Apr 11 14:30:54.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 11 14:30:55.075: INFO: stderr: "I0411 14:30:55.013428 3025 log.go:172] (0xc000a160b0) (0xc0009ce6e0) Create stream\nI0411 14:30:55.013487 3025 log.go:172] (0xc000a160b0) (0xc0009ce6e0) Stream added, broadcasting: 1\nI0411 14:30:55.015076 3025 log.go:172] (0xc000a160b0) Reply frame received for 1\nI0411 14:30:55.015102 3025 log.go:172] (0xc000a160b0) (0xc0005a8280) Create stream\nI0411 14:30:55.015109 3025 log.go:172] (0xc000a160b0) (0xc0005a8280) Stream added, broadcasting: 3\nI0411 14:30:55.015849 3025 log.go:172] (0xc000a160b0) Reply frame received for 3\nI0411 14:30:55.015899 3025 log.go:172] (0xc000a160b0) (0xc0006ce000) Create stream\nI0411 14:30:55.015918 3025 log.go:172] (0xc000a160b0) (0xc0006ce000) Stream added, broadcasting: 5\nI0411 14:30:55.016492 3025 log.go:172] (0xc000a160b0) Reply frame received for 5\nI0411 14:30:55.068627 3025 log.go:172] (0xc000a160b0) Data frame received for 3\nI0411 14:30:55.068702 3025 log.go:172] (0xc0005a8280) (3) Data frame handling\nI0411 14:30:55.068729 3025 log.go:172] (0xc0005a8280) (3) Data frame sent\nI0411 14:30:55.068755 3025 log.go:172] (0xc000a160b0) Data frame received for 5\nI0411 14:30:55.068802 3025 log.go:172] (0xc0006ce000) (5) Data frame handling\nI0411 14:30:55.068817 3025 log.go:172] (0xc0006ce000) (5) Data frame sent\nI0411 14:30:55.068829 3025 log.go:172] (0xc000a160b0) Data frame received for 5\nI0411 14:30:55.068849 3025 log.go:172] (0xc0006ce000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0411 14:30:55.068866 3025 log.go:172] (0xc000a160b0) Data frame received for 3\nI0411 14:30:55.068904 3025 log.go:172] (0xc0005a8280) (3) Data frame handling\nI0411 14:30:55.070086 3025 log.go:172] (0xc000a160b0) Data frame received for 1\nI0411 14:30:55.070109 3025 log.go:172] (0xc0009ce6e0) (1) Data frame handling\nI0411 14:30:55.070121 3025 log.go:172] (0xc0009ce6e0) (1) Data frame sent\nI0411 14:30:55.070134 3025 log.go:172] (0xc000a160b0) (0xc0009ce6e0) Stream removed, broadcasting: 1\nI0411 14:30:55.070253 3025 log.go:172] (0xc000a160b0) Go away received\nI0411 14:30:55.070801 3025 log.go:172] (0xc000a160b0) (0xc0009ce6e0) Stream removed, broadcasting: 1\nI0411 14:30:55.070823 3025 log.go:172] (0xc000a160b0) (0xc0005a8280) Stream removed, broadcasting: 3\nI0411 14:30:55.070834 3025 log.go:172] (0xc000a160b0) (0xc0006ce000) Stream removed, broadcasting: 5\n"
Apr 11 14:30:55.076: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 11 14:30:55.076: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 11 14:30:55.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 11 14:30:55.299: INFO: rc: 1
Apr 11 14:30:55.299: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0021d0990 exit status 1 true [0xc001a4a010 0xc001a4a028 0xc001a4a040] [0xc001a4a010 0xc001a4a028 0xc001a4a040] [0xc001a4a020 0xc001a4a038] [0xba70e0 0xba70e0] 0xc002646840 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1
Apr 11 14:31:05.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 11 14:31:05.596: INFO: rc: 1
Apr 11 14:31:05.596: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002fa09c0 exit status 1 true [0xc000d663c8 0xc000d66548 0xc000d66620] [0xc000d663c8 0xc000d66548 0xc000d66620] [0xc000d664a8 0xc000d665d0] [0xba70e0 0xba70e0] 0xc002840a80 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1
Apr 11 14:31:15.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 11 14:31:15.711: INFO: rc: 1
Apr 11 14:31:15.711: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002408090 exit status 1 true [0xc001c98000 0xc001c98020 0xc001c98038] [0xc001c98000 0xc001c98020 0xc001c98038] [0xc001c98018 0xc001c98030] [0xba70e0 0xba70e0] 0xc0015e5b00 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1
Apr 11 14:31:25.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 11 14:31:25.896: INFO: stderr: "I0411 14:31:25.844294 3105 log.go:172] (0xc0005f4420) (0xc0005286e0) Create stream\nI0411 14:31:25.844331 3105 log.go:172] (0xc0005f4420) (0xc0005286e0) Stream added, broadcasting: 1\nI0411 14:31:25.846326 3105 log.go:172] (0xc0005f4420) Reply frame received for 1\nI0411 14:31:25.846351 3105 log.go:172] (0xc0005f4420) (0xc0006643c0) Create stream\nI0411 14:31:25.846359 3105 log.go:172] (0xc0005f4420) (0xc0006643c0) Stream added, broadcasting: 3\nI0411 14:31:25.846936 3105 log.go:172] (0xc0005f4420) Reply frame received for 3\nI0411 14:31:25.846965 3105 log.go:172] (0xc0005f4420) (0xc000664460) Create stream\nI0411 14:31:25.846978 3105 log.go:172] (0xc0005f4420) (0xc000664460) Stream added, broadcasting: 5\nI0411 14:31:25.847764 3105 log.go:172] (0xc0005f4420) Reply frame received for 5\nI0411 14:31:25.892710 3105 log.go:172] (0xc0005f4420) Data frame received for 5\nI0411 14:31:25.892738 3105 log.go:172] (0xc000664460) (5) Data frame handling\nI0411 14:31:25.892753 3105 log.go:172] (0xc000664460) (5) Data frame sent\nI0411 14:31:25.892763 3105 log.go:172] (0xc0005f4420) Data frame received for 5\nI0411 14:31:25.892771 3105 log.go:172] (0xc000664460) (5) Data frame handling\nI0411 14:31:25.892785 3105 log.go:172] (0xc0005f4420) Data frame received for 3\nI0411 14:31:25.892795 3105 log.go:172] (0xc0006643c0) (3) Data frame handling\nI0411 14:31:25.892805 3105 log.go:172] (0xc0006643c0) (3) Data frame sent\nI0411 14:31:25.892820 3105 log.go:172] (0xc0005f4420) Data frame received for 3\nI0411 14:31:25.892828 3105 log.go:172] (0xc0006643c0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0411 14:31:25.894179 3105 log.go:172] (0xc0005f4420) Data frame received for 1\nI0411 14:31:25.894193 3105 log.go:172] (0xc0005286e0) (1) Data frame handling\nI0411 14:31:25.894201 3105 log.go:172] (0xc0005286e0) (1) Data frame sent\nI0411 14:31:25.894239 3105 log.go:172] (0xc0005f4420) (0xc0005286e0) Stream removed, broadcasting: 1\nI0411 14:31:25.894379 3105 log.go:172] (0xc0005f4420) Go away received\nI0411 14:31:25.894445 3105 log.go:172] (0xc0005f4420) (0xc0005286e0) Stream removed, broadcasting: 1\nI0411 14:31:25.894457 3105 log.go:172] (0xc0005f4420) (0xc0006643c0) Stream removed, broadcasting: 3\nI0411 14:31:25.894463 3105 log.go:172] (0xc0005f4420) (0xc000664460) Stream removed, broadcasting: 5\n"
Apr 11 14:31:25.897: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 11 14:31:25.897: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 11 14:31:25.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 11 14:31:26.076: INFO: stderr: "I0411 14:31:26.006756 3125 log.go:172] (0xc000116fd0) (0xc000580b40) Create stream\nI0411 14:31:26.006804 3125 log.go:172] (0xc000116fd0) (0xc000580b40) Stream added, broadcasting: 1\nI0411 14:31:26.008762 3125 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0411 14:31:26.008800 3125 log.go:172] (0xc000116fd0) (0xc00093e000) Create stream\nI0411 14:31:26.008813 3125 log.go:172] (0xc000116fd0) (0xc00093e000) Stream added, broadcasting: 3\nI0411 14:31:26.009825 3125 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0411 14:31:26.009854 3125 log.go:172] (0xc000116fd0) (0xc000970000) Create stream\nI0411 14:31:26.009870 3125 log.go:172] (0xc000116fd0) (0xc000970000) Stream added, broadcasting: 5\nI0411 14:31:26.010845 3125 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0411 14:31:26.071007 3125 log.go:172] (0xc000116fd0) Data frame received for 3\nI0411 14:31:26.071035 3125 log.go:172] (0xc00093e000) (3) Data frame handling\nI0411 14:31:26.071051 3125 log.go:172] (0xc00093e000) (3) Data frame sent\nI0411 14:31:26.071070 3125 log.go:172] (0xc000116fd0) Data frame received for 5\nI0411 14:31:26.071095 3125 log.go:172] (0xc000970000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0411 14:31:26.071114 3125 log.go:172] (0xc000116fd0) Data frame received for 3\nI0411 14:31:26.071127 3125 log.go:172] (0xc00093e000) (3) Data frame handling\nI0411 14:31:26.071144 3125 log.go:172] (0xc000970000) (5) Data frame sent\nI0411 14:31:26.071151 3125 log.go:172] (0xc000116fd0) Data frame received for 5\nI0411 14:31:26.071157 3125 log.go:172] (0xc000970000) (5) Data frame handling\nI0411 14:31:26.072359 3125 log.go:172] (0xc000116fd0) Data frame received for 1\nI0411 14:31:26.072384 3125 log.go:172] (0xc000580b40) (1) Data frame handling\nI0411 14:31:26.072402 3125 log.go:172] (0xc000580b40) (1) Data frame sent\nI0411 14:31:26.072920 3125 log.go:172] (0xc000116fd0) (0xc000580b40) Stream removed, broadcasting: 1\nI0411 14:31:26.072971 3125 log.go:172] (0xc000116fd0) Go away received\nI0411 14:31:26.073758 3125 log.go:172] (0xc000116fd0) (0xc000580b40) Stream removed, broadcasting: 1\nI0411 14:31:26.073804 3125 log.go:172] (0xc000116fd0) (0xc00093e000) Stream removed, broadcasting: 3\nI0411 14:31:26.073848 3125 log.go:172] (0xc000116fd0) (0xc000970000) Stream removed, broadcasting: 5\n"
Apr 11 14:31:26.076: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 11 14:31:26.076: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 11 14:31:26.080: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 11 14:31:26.080: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 11 14:31:26.080: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Apr 11 14:31:26.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 11 14:31:26.255: INFO: stderr: "I0411 14:31:26.200130 3147 log.go:172]
(0xc000a04370) (0xc000a6e6e0) Create stream\nI0411 14:31:26.200181 3147 log.go:172] (0xc000a04370) (0xc000a6e6e0) Stream added, broadcasting: 1\nI0411 14:31:26.202164 3147 log.go:172] (0xc000a04370) Reply frame received for 1\nI0411 14:31:26.202210 3147 log.go:172] (0xc000a04370) (0xc0005f2280) Create stream\nI0411 14:31:26.202225 3147 log.go:172] (0xc000a04370) (0xc0005f2280) Stream added, broadcasting: 3\nI0411 14:31:26.203236 3147 log.go:172] (0xc000a04370) Reply frame received for 3\nI0411 14:31:26.203289 3147 log.go:172] (0xc000a04370) (0xc0008b2000) Create stream\nI0411 14:31:26.203309 3147 log.go:172] (0xc000a04370) (0xc0008b2000) Stream added, broadcasting: 5\nI0411 14:31:26.204230 3147 log.go:172] (0xc000a04370) Reply frame received for 5\nI0411 14:31:26.250762 3147 log.go:172] (0xc000a04370) Data frame received for 5\nI0411 14:31:26.250805 3147 log.go:172] (0xc0008b2000) (5) Data frame handling\nI0411 14:31:26.250819 3147 log.go:172] (0xc0008b2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0411 14:31:26.250842 3147 log.go:172] (0xc000a04370) Data frame received for 3\nI0411 14:31:26.250860 3147 log.go:172] (0xc0005f2280) (3) Data frame handling\nI0411 14:31:26.250874 3147 log.go:172] (0xc0005f2280) (3) Data frame sent\nI0411 14:31:26.250897 3147 log.go:172] (0xc000a04370) Data frame received for 5\nI0411 14:31:26.250940 3147 log.go:172] (0xc0008b2000) (5) Data frame handling\nI0411 14:31:26.250965 3147 log.go:172] (0xc000a04370) Data frame received for 3\nI0411 14:31:26.251071 3147 log.go:172] (0xc0005f2280) (3) Data frame handling\nI0411 14:31:26.252402 3147 log.go:172] (0xc000a04370) Data frame received for 1\nI0411 14:31:26.252415 3147 log.go:172] (0xc000a6e6e0) (1) Data frame handling\nI0411 14:31:26.252428 3147 log.go:172] (0xc000a6e6e0) (1) Data frame sent\nI0411 14:31:26.252511 3147 log.go:172] (0xc000a04370) (0xc000a6e6e0) Stream removed, broadcasting: 1\nI0411 14:31:26.252602 3147 log.go:172] (0xc000a04370) Go away 
received\nI0411 14:31:26.252724 3147 log.go:172] (0xc000a04370) (0xc000a6e6e0) Stream removed, broadcasting: 1\nI0411 14:31:26.252739 3147 log.go:172] (0xc000a04370) (0xc0005f2280) Stream removed, broadcasting: 3\nI0411 14:31:26.252746 3147 log.go:172] (0xc000a04370) (0xc0008b2000) Stream removed, broadcasting: 5\n" Apr 11 14:31:26.255: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 11 14:31:26.255: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 11 14:31:26.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 11 14:31:26.485: INFO: stderr: "I0411 14:31:26.383047 3167 log.go:172] (0xc0001166e0) (0xc0003de6e0) Create stream\nI0411 14:31:26.383092 3167 log.go:172] (0xc0001166e0) (0xc0003de6e0) Stream added, broadcasting: 1\nI0411 14:31:26.385055 3167 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0411 14:31:26.385088 3167 log.go:172] (0xc0001166e0) (0xc00080c000) Create stream\nI0411 14:31:26.385102 3167 log.go:172] (0xc0001166e0) (0xc00080c000) Stream added, broadcasting: 3\nI0411 14:31:26.385859 3167 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0411 14:31:26.385895 3167 log.go:172] (0xc0001166e0) (0xc0003de780) Create stream\nI0411 14:31:26.385904 3167 log.go:172] (0xc0001166e0) (0xc0003de780) Stream added, broadcasting: 5\nI0411 14:31:26.386482 3167 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0411 14:31:26.427391 3167 log.go:172] (0xc0001166e0) Data frame received for 5\nI0411 14:31:26.427433 3167 log.go:172] (0xc0003de780) (5) Data frame handling\nI0411 14:31:26.427465 3167 log.go:172] (0xc0003de780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0411 14:31:26.479159 3167 log.go:172] (0xc0001166e0) Data frame received for 5\nI0411 14:31:26.479183 3167 
log.go:172] (0xc0003de780) (5) Data frame handling\nI0411 14:31:26.479195 3167 log.go:172] (0xc0001166e0) Data frame received for 3\nI0411 14:31:26.479202 3167 log.go:172] (0xc00080c000) (3) Data frame handling\nI0411 14:31:26.479213 3167 log.go:172] (0xc00080c000) (3) Data frame sent\nI0411 14:31:26.479341 3167 log.go:172] (0xc0001166e0) Data frame received for 3\nI0411 14:31:26.479356 3167 log.go:172] (0xc00080c000) (3) Data frame handling\nI0411 14:31:26.480869 3167 log.go:172] (0xc0001166e0) Data frame received for 1\nI0411 14:31:26.480882 3167 log.go:172] (0xc0003de6e0) (1) Data frame handling\nI0411 14:31:26.480892 3167 log.go:172] (0xc0003de6e0) (1) Data frame sent\nI0411 14:31:26.480917 3167 log.go:172] (0xc0001166e0) (0xc0003de6e0) Stream removed, broadcasting: 1\nI0411 14:31:26.481010 3167 log.go:172] (0xc0001166e0) Go away received\nI0411 14:31:26.481208 3167 log.go:172] (0xc0001166e0) (0xc0003de6e0) Stream removed, broadcasting: 1\nI0411 14:31:26.481223 3167 log.go:172] (0xc0001166e0) (0xc00080c000) Stream removed, broadcasting: 3\nI0411 14:31:26.481232 3167 log.go:172] (0xc0001166e0) (0xc0003de780) Stream removed, broadcasting: 5\n" Apr 11 14:31:26.485: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 11 14:31:26.485: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 11 14:31:26.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 11 14:31:26.690: INFO: stderr: "I0411 14:31:26.605095 3186 log.go:172] (0xc0001166e0) (0xc0008fc640) Create stream\nI0411 14:31:26.605349 3186 log.go:172] (0xc0001166e0) (0xc0008fc640) Stream added, broadcasting: 1\nI0411 14:31:26.607242 3186 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0411 14:31:26.607272 3186 log.go:172] (0xc0001166e0) (0xc0008fc6e0) Create 
stream\nI0411 14:31:26.607280 3186 log.go:172] (0xc0001166e0) (0xc0008fc6e0) Stream added, broadcasting: 3\nI0411 14:31:26.607849 3186 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0411 14:31:26.607875 3186 log.go:172] (0xc0001166e0) (0xc0008fc780) Create stream\nI0411 14:31:26.607889 3186 log.go:172] (0xc0001166e0) (0xc0008fc780) Stream added, broadcasting: 5\nI0411 14:31:26.608574 3186 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0411 14:31:26.653861 3186 log.go:172] (0xc0001166e0) Data frame received for 5\nI0411 14:31:26.653880 3186 log.go:172] (0xc0008fc780) (5) Data frame handling\nI0411 14:31:26.653892 3186 log.go:172] (0xc0008fc780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0411 14:31:26.682566 3186 log.go:172] (0xc0001166e0) Data frame received for 3\nI0411 14:31:26.682602 3186 log.go:172] (0xc0008fc6e0) (3) Data frame handling\nI0411 14:31:26.682622 3186 log.go:172] (0xc0008fc6e0) (3) Data frame sent\nI0411 14:31:26.683355 3186 log.go:172] (0xc0001166e0) Data frame received for 3\nI0411 14:31:26.683432 3186 log.go:172] (0xc0008fc6e0) (3) Data frame handling\nI0411 14:31:26.683482 3186 log.go:172] (0xc0001166e0) Data frame received for 5\nI0411 14:31:26.683531 3186 log.go:172] (0xc0008fc780) (5) Data frame handling\nI0411 14:31:26.684619 3186 log.go:172] (0xc0001166e0) Data frame received for 1\nI0411 14:31:26.684657 3186 log.go:172] (0xc0008fc640) (1) Data frame handling\nI0411 14:31:26.684687 3186 log.go:172] (0xc0008fc640) (1) Data frame sent\nI0411 14:31:26.684719 3186 log.go:172] (0xc0001166e0) (0xc0008fc640) Stream removed, broadcasting: 1\nI0411 14:31:26.684752 3186 log.go:172] (0xc0001166e0) Go away received\nI0411 14:31:26.685076 3186 log.go:172] (0xc0001166e0) (0xc0008fc640) Stream removed, broadcasting: 1\nI0411 14:31:26.685089 3186 log.go:172] (0xc0001166e0) (0xc0008fc6e0) Stream removed, broadcasting: 3\nI0411 14:31:26.685095 3186 log.go:172] (0xc0001166e0) (0xc0008fc780) Stream removed, 
broadcasting: 5\n" Apr 11 14:31:26.690: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 11 14:31:26.690: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 11 14:31:26.690: INFO: Waiting for statefulset status.replicas updated to 0 Apr 11 14:31:26.692: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Apr 11 14:31:36.699: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 11 14:31:36.699: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 11 14:31:36.699: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 11 14:31:36.713: INFO: POD NODE PHASE GRACE CONDITIONS Apr 11 14:31:36.713: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC }] Apr 11 14:31:36.713: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:36.713: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
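The exec commands above all end in `|| true`, and the traced output shows why: when `/tmp/index.html` is absent, `mv` fails ("can't rename ... No such file or directory") but the shell then runs `+ true`, so `kubectl exec` still reports success. A minimal local sketch of that idiom (the temp-directory paths here are illustrative, not from the test):

```shell
#!/bin/sh
# Sketch of the `mv ... || true` idiom from the test's exec commands:
# the trailing `|| true` masks mv's exit status, so the command line
# exits 0 whether or not the source file exists.
workdir=$(mktemp -d)
mkdir -p "$workdir/html"

# Case 1: source file missing -- mv fails, but `|| true` yields exit 0.
mv -v "$workdir/index.html" "$workdir/html/" 2>/dev/null || true
echo "exit status: $?"            # prints: exit status: 0

# Case 2: source file present -- mv succeeds and the file is moved.
echo hello > "$workdir/index.html"
mv -v "$workdir/index.html" "$workdir/html/" || true
cat "$workdir/html/index.html"    # prints: hello
```

This is why the test inspects the `-x` trace in stderr rather than the return code: the rc alone cannot distinguish a real move from a no-op.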
UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:36.713: INFO: Apr 11 14:31:36.713: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 11 14:31:37.717: INFO: POD NODE PHASE GRACE CONDITIONS Apr 11 14:31:37.717: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC }] Apr 11 14:31:37.717: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:37.717: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:37.717: INFO: Apr 11 
14:31:37.717: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 11 14:31:38.803: INFO: POD NODE PHASE GRACE CONDITIONS Apr 11 14:31:38.803: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC }] Apr 11 14:31:38.803: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:38.803: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:38.803: INFO: Apr 11 14:31:38.803: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 11 14:31:40.154: INFO: POD NODE PHASE GRACE CONDITIONS Apr 11 14:31:40.154: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC }] Apr 11 14:31:40.154: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:40.154: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:40.154: INFO: Apr 11 14:31:40.154: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 11 14:31:41.700: INFO: POD NODE PHASE GRACE CONDITIONS Apr 11 14:31:41.700: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC }] Apr 11 14:31:41.700: INFO: ss-1 iruya-worker2 Running 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:41.700: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:41.700: INFO: Apr 11 14:31:41.700: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 11 14:31:42.867: INFO: POD NODE PHASE GRACE CONDITIONS Apr 11 14:31:42.867: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC }] Apr 11 14:31:42.867: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:42.867: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:42.867: INFO: Apr 11 14:31:42.867: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 11 14:31:43.929: INFO: POD NODE PHASE GRACE CONDITIONS Apr 11 14:31:43.929: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC }] Apr 11 14:31:43.929: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:43.929: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:43.929: INFO: Apr 11 14:31:43.929: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 11 14:31:44.934: INFO: POD NODE PHASE GRACE CONDITIONS Apr 11 14:31:44.934: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC }] Apr 11 14:31:44.934: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:44.934: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:44.934: INFO: Apr 11 14:31:44.934: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 11 
14:31:46.525: INFO: POD NODE PHASE GRACE CONDITIONS Apr 11 14:31:46.525: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:03 +0000 UTC }] Apr 11 14:31:46.526: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:46.526: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:31:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:30:43 +0000 UTC }] Apr 11 14:31:46.526: INFO: Apr 11 14:31:46.526: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods run in namespace statefulset-3184 Apr 11 14:31:48.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 11 14:32:25.933: INFO: rc: 1 Apr 11
14:32:25.933: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] I0411 14:31:48.486771 3201 log.go:172] (0xc000116f20) (0xc0006b0be0) Create stream I0411 14:31:48.486827 3201 log.go:172] (0xc000116f20) (0xc0006b0be0) Stream added, broadcasting: 1 I0411 14:31:48.490583 3201 log.go:172] (0xc000116f20) Reply frame received for 1 I0411 14:31:48.490619 3201 log.go:172] (0xc000116f20) (0xc0008f8000) Create stream I0411 14:31:48.490632 3201 log.go:172] (0xc000116f20) (0xc0008f8000) Stream added, broadcasting: 3 I0411 14:31:48.491575 3201 log.go:172] (0xc000116f20) Reply frame received for 3 I0411 14:31:48.491603 3201 log.go:172] (0xc000116f20) (0xc0006b0c80) Create stream I0411 14:31:48.491610 3201 log.go:172] (0xc000116f20) (0xc0006b0c80) Stream added, broadcasting: 5 I0411 14:31:48.492469 3201 log.go:172] (0xc000116f20) Reply frame received for 5 I0411 14:32:25.930007 3201 log.go:172] (0xc000116f20) Data frame received for 1 I0411 14:32:25.930048 3201 log.go:172] (0xc000116f20) (0xc0006b0c80) Stream removed, broadcasting: 5 I0411 14:32:25.930074 3201 log.go:172] (0xc0006b0be0) (1) Data frame handling I0411 14:32:25.930095 3201 log.go:172] (0xc0006b0be0) (1) Data frame sent I0411 14:32:25.930125 3201 log.go:172] (0xc000116f20) (0xc0008f8000) Stream removed, broadcasting: 3 I0411 14:32:25.930169 3201 log.go:172] (0xc000116f20) (0xc0006b0be0) Stream removed, broadcasting: 1 I0411 14:32:25.930204 3201 log.go:172] (0xc000116f20) Go away received I0411 14:32:25.930500 3201 log.go:172] (0xc000116f20) (0xc0006b0be0) Stream removed, broadcasting: 1 I0411 14:32:25.930513 3201 log.go:172] (0xc000116f20) (0xc0008f8000) Stream removed, broadcasting: 3 I0411 14:32:25.930518 3201 log.go:172] (0xc000116f20) (0xc0006b0c80) Stream removed, broadcasting: 5 error: Internal error occurred: error executing 
command in container: failed to exec in container: failed to create exec "1646e725cc19b818ad0f254456f51e259d6bdc8aa3a4223e5cd499997eebc08e": cannot exec in a deleted state: unknown [] 0xc0030847e0 exit status 1 true [0xc001c982a8 0xc001c982c0 0xc001c982d8] [0xc001c982a8 0xc001c982c0 0xc001c982d8] [0xc001c982b8 0xc001c982d0] [0xba70e0 0xba70e0] 0xc00167e180 }: Command stdout: stderr: I0411 14:31:48.486771 3201 log.go:172] (0xc000116f20) (0xc0006b0be0) Create stream I0411 14:31:48.486827 3201 log.go:172] (0xc000116f20) (0xc0006b0be0) Stream added, broadcasting: 1 I0411 14:31:48.490583 3201 log.go:172] (0xc000116f20) Reply frame received for 1 I0411 14:31:48.490619 3201 log.go:172] (0xc000116f20) (0xc0008f8000) Create stream I0411 14:31:48.490632 3201 log.go:172] (0xc000116f20) (0xc0008f8000) Stream added, broadcasting: 3 I0411 14:31:48.491575 3201 log.go:172] (0xc000116f20) Reply frame received for 3 I0411 14:31:48.491603 3201 log.go:172] (0xc000116f20) (0xc0006b0c80) Create stream I0411 14:31:48.491610 3201 log.go:172] (0xc000116f20) (0xc0006b0c80) Stream added, broadcasting: 5 I0411 14:31:48.492469 3201 log.go:172] (0xc000116f20) Reply frame received for 5 I0411 14:32:25.930007 3201 log.go:172] (0xc000116f20) Data frame received for 1 I0411 14:32:25.930048 3201 log.go:172] (0xc000116f20) (0xc0006b0c80) Stream removed, broadcasting: 5 I0411 14:32:25.930074 3201 log.go:172] (0xc0006b0be0) (1) Data frame handling I0411 14:32:25.930095 3201 log.go:172] (0xc0006b0be0) (1) Data frame sent I0411 14:32:25.930125 3201 log.go:172] (0xc000116f20) (0xc0008f8000) Stream removed, broadcasting: 3 I0411 14:32:25.930169 3201 log.go:172] (0xc000116f20) (0xc0006b0be0) Stream removed, broadcasting: 1 I0411 14:32:25.930204 3201 log.go:172] (0xc000116f20) Go away received I0411 14:32:25.930500 3201 log.go:172] (0xc000116f20) (0xc0006b0be0) Stream removed, broadcasting: 1 I0411 14:32:25.930513 3201 log.go:172] (0xc000116f20) (0xc0008f8000) Stream removed, broadcasting: 3 I0411 
14:32:25.930518    3201 log.go:172] (0xc000116f20) (0xc0006b0c80) Stream removed, broadcasting: 5
error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "1646e725cc19b818ad0f254456f51e259d6bdc8aa3a4223e5cd499997eebc08e": cannot exec in a deleted state: unknown
error: exit status 1
Apr 11 14:32:35.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 11 14:32:36.052: INFO: rc: 1
Apr 11 14:32:36.053: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []  Error from server (NotFound): pods "ss-0" not found
 [] 0xc0030848d0 exit status 1 true [0xc001c982e0 0xc001c982f8 0xc001c98310] [0xc001c982e0 0xc001c982f8 0xc001c98310] [0xc001c982f0 0xc001c98308] [0xba70e0 0xba70e0] 0xc0013d4600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error: exit status 1
[... the identical RunHostCmd retry repeated every 10s from 14:32:46 through 14:36:41, each attempt returning rc: 1 with stderr 'Error from server (NotFound): pods "ss-0" not found' ...]
Apr 11 14:36:51.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3184 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 11 14:36:51.332: INFO: rc: 1
Apr 11 14:36:51.332: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
Apr 11 14:36:51.332: INFO: Scaling statefulset ss to 0
Apr 11 14:36:51.339: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 11 14:36:51.341: INFO: Deleting all statefulset in ns statefulset-3184
Apr 11 14:36:51.343: INFO: Scaling statefulset ss to 0
Apr 11 14:36:51.349: INFO: Waiting for statefulset status.replicas updated to 0
Apr 11 14:36:51.350: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:36:51.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3184" for this suite.
Apr 11 14:36:57.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:36:57.478: INFO: namespace statefulset-3184 deletion completed in 6.07708731s

• [SLOW TEST:414.566 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:36:57.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-f97547d2-1cc3-464b-9862-f660b80a97c1
STEP: Creating a pod to test consume configMaps
Apr 11 14:36:57.544: INFO: Waiting up to 5m0s for pod "pod-configmaps-92c24971-a153-41e9-b5c8-1b2b65a066e5" in namespace "configmap-8889" to be "success or failure"
Apr 11 14:36:57.548: INFO: Pod "pod-configmaps-92c24971-a153-41e9-b5c8-1b2b65a066e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.352319ms
Apr 11 14:37:00.026: INFO: Pod "pod-configmaps-92c24971-a153-41e9-b5c8-1b2b65a066e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.481720029s
Apr 11 14:37:02.028: INFO: Pod "pod-configmaps-92c24971-a153-41e9-b5c8-1b2b65a066e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.484111251s
Apr 11 14:37:04.032: INFO: Pod "pod-configmaps-92c24971-a153-41e9-b5c8-1b2b65a066e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.48769245s
Apr 11 14:37:06.036: INFO: Pod "pod-configmaps-92c24971-a153-41e9-b5c8-1b2b65a066e5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.491651844s
Apr 11 14:37:08.039: INFO: Pod "pod-configmaps-92c24971-a153-41e9-b5c8-1b2b65a066e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.495505962s
STEP: Saw pod success
Apr 11 14:37:08.039: INFO: Pod "pod-configmaps-92c24971-a153-41e9-b5c8-1b2b65a066e5" satisfied condition "success or failure"
Apr 11 14:37:08.043: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-92c24971-a153-41e9-b5c8-1b2b65a066e5 container configmap-volume-test:
STEP: delete the pod
Apr 11 14:37:08.236: INFO: Waiting for pod pod-configmaps-92c24971-a153-41e9-b5c8-1b2b65a066e5 to disappear
Apr 11 14:37:08.243: INFO: Pod pod-configmaps-92c24971-a153-41e9-b5c8-1b2b65a066e5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:37:08.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8889" for this suite.
Apr 11 14:37:14.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:37:14.327: INFO: namespace configmap-8889 deletion completed in 6.080584262s

• [SLOW TEST:16.849 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:37:14.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 11 14:37:18.955: INFO: Successfully updated pod "annotationupdate990586a5-2de1-4b79-bfce-28ecee253b34"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:37:20.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5224" for this suite.
Apr 11 14:37:42.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:37:43.067: INFO: namespace downward-api-5224 deletion completed in 22.092624189s

• [SLOW TEST:28.740 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] DNS
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:37:43.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3059.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3059.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3059.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3059.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3059.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3059.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3059.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3059.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3059.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3059.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3059.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 164.7.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.7.164_udp@PTR;check="$$(dig +tcp +noall +answer +search 164.7.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.7.164_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3059.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3059.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3059.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3059.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3059.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3059.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3059.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3059.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3059.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3059.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3059.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 164.7.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.7.164_udp@PTR;check="$$(dig +tcp +noall +answer +search 164.7.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.7.164_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 11 14:37:49.247: INFO: Unable to read wheezy_udp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c)
Apr 11 14:37:49.250: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c)
Apr 11 14:37:49.253: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c)
Apr 11 14:37:49.255: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c)
Apr 11 14:37:49.273: INFO: Unable to read jessie_udp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c)
Apr 11 14:37:49.276: INFO: Unable to read jessie_tcp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c)
Apr 11 14:37:49.279: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c)
Apr 11 14:37:49.282: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c)
Apr 11 14:37:49.300: INFO: Lookups using dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c failed for: [wheezy_udp@dns-test-service.dns-3059.svc.cluster.local wheezy_tcp@dns-test-service.dns-3059.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local jessie_udp@dns-test-service.dns-3059.svc.cluster.local jessie_tcp@dns-test-service.dns-3059.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local]
[... the same eight lookups failed identically on the retries at 14:37:54 (summary at 14:37:54.369 listing the same eight names) and at 14:37:59 ...]
Apr 11 14:37:59.351: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not
find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:37:59.369: INFO: Lookups using dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c failed for: [wheezy_udp@dns-test-service.dns-3059.svc.cluster.local wheezy_tcp@dns-test-service.dns-3059.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local jessie_udp@dns-test-service.dns-3059.svc.cluster.local jessie_tcp@dns-test-service.dns-3059.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local] Apr 11 14:38:04.305: INFO: Unable to read wheezy_udp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:04.308: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:04.312: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:04.315: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:04.339: INFO: Unable to read jessie_udp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods 
dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:04.342: INFO: Unable to read jessie_tcp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:04.345: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:04.348: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:04.365: INFO: Lookups using dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c failed for: [wheezy_udp@dns-test-service.dns-3059.svc.cluster.local wheezy_tcp@dns-test-service.dns-3059.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local jessie_udp@dns-test-service.dns-3059.svc.cluster.local jessie_tcp@dns-test-service.dns-3059.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local] Apr 11 14:38:09.305: INFO: Unable to read wheezy_udp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:09.309: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods 
dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:09.312: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:09.315: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:09.336: INFO: Unable to read jessie_udp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:09.339: INFO: Unable to read jessie_tcp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:09.342: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:09.345: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:09.363: INFO: Lookups using dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c failed for: [wheezy_udp@dns-test-service.dns-3059.svc.cluster.local wheezy_tcp@dns-test-service.dns-3059.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local jessie_udp@dns-test-service.dns-3059.svc.cluster.local jessie_tcp@dns-test-service.dns-3059.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local] Apr 11 14:38:14.305: INFO: Unable to read wheezy_udp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:14.309: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:14.313: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:14.317: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:14.338: INFO: Unable to read jessie_udp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:14.341: INFO: Unable to read jessie_tcp@dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:14.344: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:14.347: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local from pod dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c: the server could not find the requested resource (get pods dns-test-80f165ce-eda8-4e27-bf65-670b1def100c) Apr 11 14:38:14.366: INFO: Lookups using dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c failed for: [wheezy_udp@dns-test-service.dns-3059.svc.cluster.local wheezy_tcp@dns-test-service.dns-3059.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local jessie_udp@dns-test-service.dns-3059.svc.cluster.local jessie_tcp@dns-test-service.dns-3059.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3059.svc.cluster.local] Apr 11 14:38:19.367: INFO: DNS probes using dns-3059/dns-test-80f165ce-eda8-4e27-bf65-670b1def100c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:38:19.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3059" for this suite. 
Apr 11 14:38:25.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:38:25.985: INFO: namespace dns-3059 deletion completed in 6.140645533s • [SLOW TEST:42.918 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:38:25.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 11 14:38:26.107: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 11 14:38:31.122: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 11 14:38:31.122: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 11 14:38:31.158: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-7129,SelfLink:/apis/apps/v1/namespaces/deployment-7129/deployments/test-cleanup-deployment,UID:1d2f1ebe-f1a1-4f8f-b9b5-36b552f7bb0e,ResourceVersion:4860416,Generation:1,CreationTimestamp:2020-04-11 14:38:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Apr 11 14:38:31.166: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-7129,SelfLink:/apis/apps/v1/namespaces/deployment-7129/replicasets/test-cleanup-deployment-55bbcbc84c,UID:91c26e72-cf65-48e6-a03f-2c131a3fa546,ResourceVersion:4860418,Generation:1,CreationTimestamp:2020-04-11 14:38:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 
1d2f1ebe-f1a1-4f8f-b9b5-36b552f7bb0e 0xc002b07bf7 0xc002b07bf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 11 14:38:31.166: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 11 14:38:31.167: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-7129,SelfLink:/apis/apps/v1/namespaces/deployment-7129/replicasets/test-cleanup-controller,UID:503300b9-59fa-4b6f-a45c-63fab3f4a23d,ResourceVersion:4860417,Generation:1,CreationTimestamp:2020-04-11 14:38:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 1d2f1ebe-f1a1-4f8f-b9b5-36b552f7bb0e 0xc002b07b27 0xc002b07b28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 11 14:38:31.195: INFO: Pod "test-cleanup-controller-88hnb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-88hnb,GenerateName:test-cleanup-controller-,Namespace:deployment-7129,SelfLink:/api/v1/namespaces/deployment-7129/pods/test-cleanup-controller-88hnb,UID:723e61cd-d948-4024-80c9-e71adb9fd2a8,ResourceVersion:4860410,Generation:0,CreationTimestamp:2020-04-11 14:38:26 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 503300b9-59fa-4b6f-a45c-63fab3f4a23d 0xc0029a44e7 0xc0029a44e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rstjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rstjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rstjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029a4560} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029a4580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:38:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:38:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:38:28 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:38:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.178,StartTime:2020-04-11 14:38:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-11 14:38:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://00e8a193f83591a452d1ba776ea5fb8acedc1b40cb13b4cf3caf83afa85a2fed}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 11 14:38:31.195: INFO: Pod "test-cleanup-deployment-55bbcbc84c-m44gz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-m44gz,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-7129,SelfLink:/api/v1/namespaces/deployment-7129/pods/test-cleanup-deployment-55bbcbc84c-m44gz,UID:eb0a1858-6f7a-47e9-989c-d6376ddf78cf,ResourceVersion:4860422,Generation:0,CreationTimestamp:2020-04-11 14:38:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 91c26e72-cf65-48e6-a03f-2c131a3fa546 0xc0029a4667 0xc0029a4668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rstjn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rstjn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-rstjn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029a46e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029a4700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:38:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:38:31.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7129" for this suite. 
Apr 11 14:38:37.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:38:37.371: INFO: namespace deployment-7129 deletion completed in 6.15576623s • [SLOW TEST:11.385 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:38:37.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-4550/configmap-test-0b24d4a4-eb04-4e44-935b-86dcd8c5f55d STEP: Creating a pod to test consume configMaps Apr 11 14:38:37.491: INFO: Waiting up to 5m0s for pod "pod-configmaps-4477a9b8-5799-4c4b-9df9-0f697ff8598e" in namespace "configmap-4550" to be "success or failure" Apr 11 14:38:37.506: INFO: Pod "pod-configmaps-4477a9b8-5799-4c4b-9df9-0f697ff8598e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.272381ms Apr 11 14:38:39.509: INFO: Pod "pod-configmaps-4477a9b8-5799-4c4b-9df9-0f697ff8598e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.0176162s Apr 11 14:38:41.514: INFO: Pod "pod-configmaps-4477a9b8-5799-4c4b-9df9-0f697ff8598e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022178113s STEP: Saw pod success Apr 11 14:38:41.514: INFO: Pod "pod-configmaps-4477a9b8-5799-4c4b-9df9-0f697ff8598e" satisfied condition "success or failure" Apr 11 14:38:41.517: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-4477a9b8-5799-4c4b-9df9-0f697ff8598e container env-test: STEP: delete the pod Apr 11 14:38:41.584: INFO: Waiting for pod pod-configmaps-4477a9b8-5799-4c4b-9df9-0f697ff8598e to disappear Apr 11 14:38:41.595: INFO: Pod pod-configmaps-4477a9b8-5799-4c4b-9df9-0f697ff8598e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:38:41.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4550" for this suite. Apr 11 14:38:47.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:38:47.703: INFO: namespace configmap-4550 deletion completed in 6.104774934s • [SLOW TEST:10.332 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 
14:38:47.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:38:53.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3502" for this suite. Apr 11 14:38:59.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:39:00.128: INFO: namespace namespaces-3502 deletion completed in 6.163529555s STEP: Destroying namespace "nsdeletetest-1489" for this suite. Apr 11 14:39:00.157: INFO: Namespace nsdeletetest-1489 was already deleted STEP: Destroying namespace "nsdeletetest-1188" for this suite. 
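The Namespaces test above deletes a namespace, waits for it to be removed, recreates it, and then verifies no service survived. The wait step is a simple poll-until-gone loop; a minimal sketch of that pattern, using a hypothetical `get_namespace` lookup callable (illustrative only, not the real client-go API):

```python
import time

def wait_for_deletion(get_namespace, name, timeout=60.0, interval=1.0):
    """Poll until get_namespace(name) returns None (i.e. the namespace is
    gone) or the timeout expires. Returns True on successful deletion."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_namespace(name) is None:
            return True
        time.sleep(interval)
    return False

# Simulated API: the namespace disappears after two polls.
state = {"polls": 0}
def fake_get(name):
    state["polls"] += 1
    return None if state["polls"] > 2 else {"name": name}

assert wait_for_deletion(fake_get, "nsdeletetest-1489", timeout=5.0, interval=0.01)
```

The real framework layers discovery and finalizer handling on top of this, but the observable behavior in the log — repeated waits followed by "deletion completed in …" — is this loop.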
Apr 11 14:39:06.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:39:06.275: INFO: namespace nsdeletetest-1188 deletion completed in 6.118125678s • [SLOW TEST:18.572 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:39:06.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 11 14:39:06.337: INFO: Waiting up to 5m0s for pod "pod-bfe9c7d8-6882-4f02-8f4b-b0613dcc7b0c" in namespace "emptydir-6623" to be "success or failure" Apr 11 14:39:06.341: INFO: Pod "pod-bfe9c7d8-6882-4f02-8f4b-b0613dcc7b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125226ms Apr 11 14:39:08.345: INFO: Pod "pod-bfe9c7d8-6882-4f02-8f4b-b0613dcc7b0c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007981001s Apr 11 14:39:10.350: INFO: Pod "pod-bfe9c7d8-6882-4f02-8f4b-b0613dcc7b0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012401599s STEP: Saw pod success Apr 11 14:39:10.350: INFO: Pod "pod-bfe9c7d8-6882-4f02-8f4b-b0613dcc7b0c" satisfied condition "success or failure" Apr 11 14:39:10.352: INFO: Trying to get logs from node iruya-worker2 pod pod-bfe9c7d8-6882-4f02-8f4b-b0613dcc7b0c container test-container: STEP: delete the pod Apr 11 14:39:10.384: INFO: Waiting for pod pod-bfe9c7d8-6882-4f02-8f4b-b0613dcc7b0c to disappear Apr 11 14:39:10.395: INFO: Pod pod-bfe9c7d8-6882-4f02-8f4b-b0613dcc7b0c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:39:10.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6623" for this suite. Apr 11 14:39:16.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:39:16.507: INFO: namespace emptydir-6623 deletion completed in 6.108735235s • [SLOW TEST:10.232 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:39:16.509: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 11 14:39:16.579: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 11 14:39:16.592: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 11 14:39:21.597: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 11 14:39:21.597: INFO: Creating deployment "test-rolling-update-deployment" Apr 11 14:39:21.602: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 11 14:39:21.611: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 11 14:39:23.619: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 11 14:39:23.622: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722212761, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63722212761, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63722212761, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63722212761, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 11 14:39:25.626: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 11 14:39:25.634: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-6847,SelfLink:/apis/apps/v1/namespaces/deployment-6847/deployments/test-rolling-update-deployment,UID:a405db47-1ce7-4c8c-968a-c0f1a13a6f23,ResourceVersion:4860689,Generation:1,CreationTimestamp:2020-04-11 14:39:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-11 14:39:21 +0000 UTC 2020-04-11 14:39:21 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-11 14:39:25 +0000 UTC 2020-04-11 14:39:21 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 11 14:39:25.637: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-6847,SelfLink:/apis/apps/v1/namespaces/deployment-6847/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:9d3c9723-9e86-46ab-8c08-1f0167886db7,ResourceVersion:4860678,Generation:1,CreationTimestamp:2020-04-11 14:39:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a405db47-1ce7-4c8c-968a-c0f1a13a6f23 0xc0033ae2b7 0xc0033ae2b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 11 14:39:25.637: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 11 14:39:25.637: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-6847,SelfLink:/apis/apps/v1/namespaces/deployment-6847/replicasets/test-rolling-update-controller,UID:07958f5e-f293-4509-a7c9-32fda97c914d,ResourceVersion:4860687,Generation:2,CreationTimestamp:2020-04-11 14:39:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a405db47-1ce7-4c8c-968a-c0f1a13a6f23 0xc0033ae1cf 0xc0033ae1e0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 11 14:39:25.641: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-vm6tp" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-vm6tp,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-6847,SelfLink:/api/v1/namespaces/deployment-6847/pods/test-rolling-update-deployment-79f6b9d75c-vm6tp,UID:00b6c437-cdc6-42b5-9cf4-7625d4a6b2ec,ResourceVersion:4860677,Generation:0,CreationTimestamp:2020-04-11 14:39:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 9d3c9723-9e86-46ab-8c08-1f0167886db7 0xc0033aeb77 0xc0033aeb78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-x6hrb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6hrb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-x6hrb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0033aebf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033aec10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:39:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:39:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:39:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-11 14:39:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.181,StartTime:2020-04-11 14:39:21 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-11 14:39:24 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://328af1d4916006c0e1acf209ca9df794e56498fc3fe43651a8f3920c9f629ea0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:39:25.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-6847" for this suite. Apr 11 14:39:31.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:39:31.919: INFO: namespace deployment-6847 deletion completed in 6.275022478s • [SLOW TEST:15.411 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:39:31.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3378.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3378.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3378.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3378.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 11 14:39:38.020: INFO: DNS probes using 
dns-test-dc5a0f53-36c9-4562-bbf0-f91e91f8cd18 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3378.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3378.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3378.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3378.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 11 14:39:44.162: INFO: File wheezy_udp@dns-test-service-3.dns-3378.svc.cluster.local from pod dns-3378/dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 11 14:39:44.166: INFO: File jessie_udp@dns-test-service-3.dns-3378.svc.cluster.local from pod dns-3378/dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 11 14:39:44.166: INFO: Lookups using dns-3378/dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 failed for: [wheezy_udp@dns-test-service-3.dns-3378.svc.cluster.local jessie_udp@dns-test-service-3.dns-3378.svc.cluster.local] Apr 11 14:39:49.171: INFO: File wheezy_udp@dns-test-service-3.dns-3378.svc.cluster.local from pod dns-3378/dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 11 14:39:49.175: INFO: File jessie_udp@dns-test-service-3.dns-3378.svc.cluster.local from pod dns-3378/dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 11 14:39:49.175: INFO: Lookups using dns-3378/dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 failed for: [wheezy_udp@dns-test-service-3.dns-3378.svc.cluster.local jessie_udp@dns-test-service-3.dns-3378.svc.cluster.local] Apr 11 14:39:54.172: INFO: File wheezy_udp@dns-test-service-3.dns-3378.svc.cluster.local from pod dns-3378/dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 11 14:39:54.176: INFO: File jessie_udp@dns-test-service-3.dns-3378.svc.cluster.local from pod dns-3378/dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 11 14:39:54.176: INFO: Lookups using dns-3378/dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 failed for: [wheezy_udp@dns-test-service-3.dns-3378.svc.cluster.local jessie_udp@dns-test-service-3.dns-3378.svc.cluster.local] Apr 11 14:39:59.172: INFO: File wheezy_udp@dns-test-service-3.dns-3378.svc.cluster.local from pod dns-3378/dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 11 14:39:59.176: INFO: File jessie_udp@dns-test-service-3.dns-3378.svc.cluster.local from pod dns-3378/dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 11 14:39:59.176: INFO: Lookups using dns-3378/dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 failed for: [wheezy_udp@dns-test-service-3.dns-3378.svc.cluster.local jessie_udp@dns-test-service-3.dns-3378.svc.cluster.local] Apr 11 14:40:04.171: INFO: File wheezy_udp@dns-test-service-3.dns-3378.svc.cluster.local from pod dns-3378/dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 11 14:40:04.177: INFO: File jessie_udp@dns-test-service-3.dns-3378.svc.cluster.local from pod dns-3378/dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 11 14:40:04.177: INFO: Lookups using dns-3378/dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 failed for: [wheezy_udp@dns-test-service-3.dns-3378.svc.cluster.local jessie_udp@dns-test-service-3.dns-3378.svc.cluster.local] Apr 11 14:40:09.176: INFO: DNS probes using dns-test-7ec2d237-7ffe-427c-92ef-c50514ebe012 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3378.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3378.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3378.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3378.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 11 14:40:15.761: INFO: DNS probes using dns-test-6adc7ad3-45d2-40fb-93c7-e8a0498dc0f2 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:40:15.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3378" for this suite. 
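The test exercises the DNS contract for each service type in turn: an ExternalName service resolves via a CNAME to its external target, while a ClusterIP service resolves to an A record, which is why the probe commands switch from `dig … CNAME` to `dig … A` after the conversion. As a sketch (the mapping is standard Kubernetes DNS behavior, not code from the test):

```python
# Expected DNS record type for a service name, keyed by service type.
EXPECTED_RECORD = {
    "ExternalName": "CNAME",  # resolves to spec.externalName (e.g. bar.example.com)
    "ClusterIP": "A",         # resolves to the allocated cluster IP
}

def expected_record_type(service_type: str) -> str:
    return EXPECTED_RECORD[service_type]

assert expected_record_type("ExternalName") == "CNAME"
assert expected_record_type("ClusterIP") == "A"
```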
Apr 11 14:40:21.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:40:21.955: INFO: namespace dns-3378 deletion completed in 6.08807056s • [SLOW TEST:50.036 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:40:21.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 11 14:40:26.567: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5417d6a5-e03a-49fd-8c33-4a266bb8196f" Apr 11 14:40:26.567: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5417d6a5-e03a-49fd-8c33-4a266bb8196f" in namespace "pods-5879" to be "terminated due to deadline exceeded" Apr 11 14:40:26.570: INFO: Pod "pod-update-activedeadlineseconds-5417d6a5-e03a-49fd-8c33-4a266bb8196f": Phase="Running", Reason="", 
readiness=true. Elapsed: 3.393006ms Apr 11 14:40:28.575: INFO: Pod "pod-update-activedeadlineseconds-5417d6a5-e03a-49fd-8c33-4a266bb8196f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.00762806s Apr 11 14:40:28.575: INFO: Pod "pod-update-activedeadlineseconds-5417d6a5-e03a-49fd-8c33-4a266bb8196f" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 11 14:40:28.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5879" for this suite. Apr 11 14:40:34.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 11 14:40:34.668: INFO: namespace pods-5879 deletion completed in 6.089037929s • [SLOW TEST:12.712 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 11 14:40:34.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3702
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 11 14:40:34.730: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 11 14:41:00.890: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.161:8080/dial?request=hostName&protocol=http&host=10.244.2.160&port=8080&tries=1'] Namespace:pod-network-test-3702 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 11 14:41:00.890: INFO: >>> kubeConfig: /root/.kube/config
I0411 14:41:00.924405 6 log.go:172] (0xc0000ed3f0) (0xc002b226e0) Create stream
I0411 14:41:00.924441 6 log.go:172] (0xc0000ed3f0) (0xc002b226e0) Stream added, broadcasting: 1
I0411 14:41:00.927097 6 log.go:172] (0xc0000ed3f0) Reply frame received for 1
I0411 14:41:00.927127 6 log.go:172] (0xc0000ed3f0) (0xc001fce000) Create stream
I0411 14:41:00.927142 6 log.go:172] (0xc0000ed3f0) (0xc001fce000) Stream added, broadcasting: 3
I0411 14:41:00.928073 6 log.go:172] (0xc0000ed3f0) Reply frame received for 3
I0411 14:41:00.928123 6 log.go:172] (0xc0000ed3f0) (0xc002b22780) Create stream
I0411 14:41:00.928136 6 log.go:172] (0xc0000ed3f0) (0xc002b22780) Stream added, broadcasting: 5
I0411 14:41:00.929085 6 log.go:172] (0xc0000ed3f0) Reply frame received for 5
I0411 14:41:01.015933 6 log.go:172] (0xc0000ed3f0) Data frame received for 3
I0411 14:41:01.015966 6 log.go:172] (0xc001fce000) (3) Data frame handling
I0411 14:41:01.015984 6 log.go:172] (0xc001fce000) (3) Data frame sent
I0411 14:41:01.016785 6 log.go:172] (0xc0000ed3f0) Data frame received for 3
I0411 14:41:01.016825 6 log.go:172] (0xc001fce000) (3) Data frame handling
I0411 14:41:01.016855 6 log.go:172] (0xc0000ed3f0) Data frame received for 5
I0411 14:41:01.016882 6 log.go:172] (0xc002b22780) (5) Data frame handling
I0411 14:41:01.018857 6 log.go:172] (0xc0000ed3f0) Data frame received for 1
I0411 14:41:01.018880 6 log.go:172] (0xc002b226e0) (1) Data frame handling
I0411 14:41:01.018900 6 log.go:172] (0xc002b226e0) (1) Data frame sent
I0411 14:41:01.019003 6 log.go:172] (0xc0000ed3f0) (0xc002b226e0) Stream removed, broadcasting: 1
I0411 14:41:01.019059 6 log.go:172] (0xc0000ed3f0) Go away received
I0411 14:41:01.019146 6 log.go:172] (0xc0000ed3f0) (0xc002b226e0) Stream removed, broadcasting: 1
I0411 14:41:01.019169 6 log.go:172] (0xc0000ed3f0) (0xc001fce000) Stream removed, broadcasting: 3
I0411 14:41:01.019182 6 log.go:172] (0xc0000ed3f0) (0xc002b22780) Stream removed, broadcasting: 5
Apr 11 14:41:01.019: INFO: Waiting for endpoints: map[]
Apr 11 14:41:01.022: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.161:8080/dial?request=hostName&protocol=http&host=10.244.1.185&port=8080&tries=1'] Namespace:pod-network-test-3702 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 11 14:41:01.022: INFO: >>> kubeConfig: /root/.kube/config
I0411 14:41:01.051973 6 log.go:172] (0xc0009b6580) (0xc001fce640) Create stream
I0411 14:41:01.052002 6 log.go:172] (0xc0009b6580) (0xc001fce640) Stream added, broadcasting: 1
I0411 14:41:01.054315 6 log.go:172] (0xc0009b6580) Reply frame received for 1
I0411 14:41:01.054362 6 log.go:172] (0xc0009b6580) (0xc00113e6e0) Create stream
I0411 14:41:01.054378 6 log.go:172] (0xc0009b6580) (0xc00113e6e0) Stream added, broadcasting: 3
I0411 14:41:01.055422 6 log.go:172] (0xc0009b6580) Reply frame received for 3
I0411 14:41:01.055461 6 log.go:172] (0xc0009b6580) (0xc001fce780) Create stream
I0411 14:41:01.055472 6 log.go:172] (0xc0009b6580) (0xc001fce780) Stream added, broadcasting: 5
I0411 14:41:01.056263 6 log.go:172] (0xc0009b6580) Reply frame received for 5
I0411 14:41:01.128535 6 log.go:172] (0xc0009b6580) Data frame received for 3
I0411 14:41:01.128560 6 log.go:172] (0xc00113e6e0) (3) Data frame handling
I0411 14:41:01.128574 6 log.go:172] (0xc00113e6e0) (3) Data frame sent
I0411 14:41:01.129091 6 log.go:172] (0xc0009b6580) Data frame received for 3
I0411 14:41:01.129244 6 log.go:172] (0xc00113e6e0) (3) Data frame handling
I0411 14:41:01.129331 6 log.go:172] (0xc0009b6580) Data frame received for 5
I0411 14:41:01.129351 6 log.go:172] (0xc001fce780) (5) Data frame handling
I0411 14:41:01.130576 6 log.go:172] (0xc0009b6580) Data frame received for 1
I0411 14:41:01.130589 6 log.go:172] (0xc001fce640) (1) Data frame handling
I0411 14:41:01.130596 6 log.go:172] (0xc001fce640) (1) Data frame sent
I0411 14:41:01.130608 6 log.go:172] (0xc0009b6580) (0xc001fce640) Stream removed, broadcasting: 1
I0411 14:41:01.130664 6 log.go:172] (0xc0009b6580) Go away received
I0411 14:41:01.130702 6 log.go:172] (0xc0009b6580) (0xc001fce640) Stream removed, broadcasting: 1
I0411 14:41:01.130725 6 log.go:172] (0xc0009b6580) (0xc00113e6e0) Stream removed, broadcasting: 3
I0411 14:41:01.130740 6 log.go:172] (0xc0009b6580) (0xc001fce780) Stream removed, broadcasting: 5
Apr 11 14:41:01.130: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:41:01.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3702" for this suite.
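For reference, the two ExecWithOptions curl probes above both hit the netexec `/dial` endpoint on the host-test pod, which proxies one HTTP request to the target pod and reports the responding pod's hostname. A minimal sketch of how that probe URL is assembled (the IPs and port are the ones from this run; `dial_url` is a hypothetical helper, not part of the framework):

```shell
# Build the /dial probe URL used by the intra-pod connectivity checks above.
# The host-test pod's netexec server at $probe_ip:8080 forwards a single
# HTTP request to $target_ip:$target_port and returns the target's hostname.
dial_url() {
  probe_ip="$1"; target_ip="$2"; target_port="$3"
  echo "http://${probe_ip}:8080/dial?request=hostName&protocol=http&host=${target_ip}&port=${target_port}&tries=1"
}

# Same endpoint the test queried (pass the result to: curl -g -q -s "$url"):
dial_url 10.244.2.161 10.244.2.160 8080
```

The test declares success when the hostname returned through `/dial` matches the target pod, which is why the log ends each probe with `Waiting for endpoints: map[]` (no endpoints left unverified).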
Apr 11 14:41:25.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:41:25.235: INFO: namespace pod-network-test-3702 deletion completed in 24.101891374s
• [SLOW TEST:50.567 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:41:25.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a09aefa5-15ec-4484-a4a1-d630a04a9437
STEP: Creating a pod to test consume secrets
Apr 11 14:41:25.316: INFO: Waiting up to 5m0s for pod "pod-secrets-292b300e-0aa1-4225-8560-47f3cd115d35" in namespace "secrets-406" to be "success or failure"
Apr 11 14:41:25.361: INFO: Pod "pod-secrets-292b300e-0aa1-4225-8560-47f3cd115d35": Phase="Pending", Reason="", readiness=false. Elapsed: 44.292772ms
Apr 11 14:41:27.365: INFO: Pod "pod-secrets-292b300e-0aa1-4225-8560-47f3cd115d35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048331326s
Apr 11 14:41:29.369: INFO: Pod "pod-secrets-292b300e-0aa1-4225-8560-47f3cd115d35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052250292s
STEP: Saw pod success
Apr 11 14:41:29.369: INFO: Pod "pod-secrets-292b300e-0aa1-4225-8560-47f3cd115d35" satisfied condition "success or failure"
Apr 11 14:41:29.372: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-292b300e-0aa1-4225-8560-47f3cd115d35 container secret-volume-test:
STEP: delete the pod
Apr 11 14:41:29.393: INFO: Waiting for pod pod-secrets-292b300e-0aa1-4225-8560-47f3cd115d35 to disappear
Apr 11 14:41:29.471: INFO: Pod pod-secrets-292b300e-0aa1-4225-8560-47f3cd115d35 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:41:29.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-406" for this suite.
Apr 11 14:41:35.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:41:35.573: INFO: namespace secrets-406 deletion completed in 6.098475991s
• [SLOW TEST:10.337 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:41:35.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-d50cf370-6241-4dd8-99e4-ac0e5edc1b70
STEP: Creating secret with name s-test-opt-upd-033625a0-ff27-4c75-8868-562267282059
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-d50cf370-6241-4dd8-99e4-ac0e5edc1b70
STEP: Updating secret s-test-opt-upd-033625a0-ff27-4c75-8868-562267282059
STEP: Creating secret with name s-test-opt-create-1731bc10-d83d-4beb-85db-b7f89c0bbfda
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:43:04.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4745" for this suite.
Apr 11 14:43:26.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:43:26.273: INFO: namespace secrets-4745 deletion completed in 22.117185927s
• [SLOW TEST:110.699 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:43:26.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-76932a0f-755d-4fbc-a212-ab16d4ca3c3f
STEP: Creating a pod to test consume configMaps
Apr 11 14:43:26.322: INFO: Waiting up to 5m0s for pod "pod-configmaps-05f7f72b-c209-440a-95db-3b825d1227ea" in namespace "configmap-4069" to be "success or failure"
Apr 11 14:43:26.333: INFO: Pod "pod-configmaps-05f7f72b-c209-440a-95db-3b825d1227ea": Phase="Pending", Reason="", readiness=false. Elapsed: 11.078931ms
Apr 11 14:43:28.337: INFO: Pod "pod-configmaps-05f7f72b-c209-440a-95db-3b825d1227ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015342683s
Apr 11 14:43:30.342: INFO: Pod "pod-configmaps-05f7f72b-c209-440a-95db-3b825d1227ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020169162s
STEP: Saw pod success
Apr 11 14:43:30.342: INFO: Pod "pod-configmaps-05f7f72b-c209-440a-95db-3b825d1227ea" satisfied condition "success or failure"
Apr 11 14:43:30.345: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-05f7f72b-c209-440a-95db-3b825d1227ea container configmap-volume-test:
STEP: delete the pod
Apr 11 14:43:30.375: INFO: Waiting for pod pod-configmaps-05f7f72b-c209-440a-95db-3b825d1227ea to disappear
Apr 11 14:43:30.387: INFO: Pod pod-configmaps-05f7f72b-c209-440a-95db-3b825d1227ea no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:43:30.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4069" for this suite.
Apr 11 14:43:36.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:43:36.487: INFO: namespace configmap-4069 deletion completed in 6.097393498s
• [SLOW TEST:10.213 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:43:36.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 11 14:43:40.565: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:43:40.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8200" for this suite.
Apr 11 14:43:46.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:43:46.718: INFO: namespace container-runtime-8200 deletion completed in 6.097331001s
• [SLOW TEST:10.231 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:43:46.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3844.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3844.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 11 14:43:52.872: INFO: DNS probes using dns-3844/dns-test-a86199b5-8c64-4413-bd72-8b07f7ed94af succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:43:52.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3844" for this suite.
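The probe scripts above repeatedly check both UDP and TCP DNS for the `kubernetes.default` Service name and for the probe pod's own A record; the `$$` sequences are the pod-spec-escaped form of `$`. A minimal standalone sketch of the A-record name construction those scripts perform (using a fixed, hypothetical pod IP in place of `hostname -i`, and the `dns-3844` namespace from this run):

```shell
# Derive the pod A-record name the DNS probe queries: the pod IP with dots
# replaced by dashes, under <namespace>.pod.cluster.local. This mirrors the
# awk pipeline in the probe script above (unescaped: $ instead of $$).
pod_ip="10.244.1.185"   # hypothetical pod IP for illustration
podARec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-3844.pod.cluster.local"}')
echo "$podARec"
```

In the cluster, the probe then runs `dig +notcp` (UDP) and `dig +tcp` against `$podARec` and writes an `OK` marker file for each name that resolves, which the test collects from `/results`.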
Apr 11 14:43:58.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:43:59.067: INFO: namespace dns-3844 deletion completed in 6.152904213s
• [SLOW TEST:12.349 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:43:59.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-af80f355-0de2-4754-83de-83e76833f418
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-af80f355-0de2-4754-83de-83e76833f418
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:44:05.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6010" for this suite.
Apr 11 14:44:27.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:44:27.278: INFO: namespace configmap-6010 deletion completed in 22.084655601s
• [SLOW TEST:28.210 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:44:27.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 11 14:44:35.414: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 11 14:44:35.436: INFO: Pod pod-with-poststart-http-hook still exists
Apr 11 14:44:37.436: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 11 14:44:37.441: INFO: Pod pod-with-poststart-http-hook still exists
Apr 11 14:44:39.436: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 11 14:44:39.441: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:44:39.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4099" for this suite.
Apr 11 14:45:01.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:45:01.535: INFO: namespace container-lifecycle-hook-4099 deletion completed in 22.089626676s
• [SLOW TEST:34.256 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 11 14:45:01.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 11 14:45:01.600: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5c03121-22e0-4c05-955f-93df8008f5e9" in namespace "projected-6440" to be "success or failure"
Apr 11 14:45:01.611: INFO: Pod "downwardapi-volume-a5c03121-22e0-4c05-955f-93df8008f5e9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.977541ms
Apr 11 14:45:03.615: INFO: Pod "downwardapi-volume-a5c03121-22e0-4c05-955f-93df8008f5e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015205329s
Apr 11 14:45:05.618: INFO: Pod "downwardapi-volume-a5c03121-22e0-4c05-955f-93df8008f5e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0187065s
STEP: Saw pod success
Apr 11 14:45:05.618: INFO: Pod "downwardapi-volume-a5c03121-22e0-4c05-955f-93df8008f5e9" satisfied condition "success or failure"
Apr 11 14:45:05.621: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a5c03121-22e0-4c05-955f-93df8008f5e9 container client-container:
STEP: delete the pod
Apr 11 14:45:05.682: INFO: Waiting for pod downwardapi-volume-a5c03121-22e0-4c05-955f-93df8008f5e9 to disappear
Apr 11 14:45:05.737: INFO: Pod downwardapi-volume-a5c03121-22e0-4c05-955f-93df8008f5e9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 11 14:45:05.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6440" for this suite.
Apr 11 14:45:11.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 11 14:45:11.834: INFO: namespace projected-6440 deletion completed in 6.092320011s
• [SLOW TEST:10.298 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
Apr 11 14:45:11.834: INFO: Running AfterSuite actions on all nodes
Apr 11 14:45:11.834: INFO: Running AfterSuite actions on node 1
Apr 11 14:45:11.834: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 6567.159 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS