I0529 12:55:54.621010 7 e2e.go:243] Starting e2e run "26abe1af-5de1-4004-8563-3494e36fb2cd" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590756953 - Will randomize all specs
Will run 215 of 4412 specs

May 29 12:55:54.819: INFO: >>> kubeConfig: /root/.kube/config
May 29 12:55:54.822: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 29 12:55:54.839: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 29 12:55:54.875: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 29 12:55:54.875: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 29 12:55:54.875: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 29 12:55:54.900: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 29 12:55:54.900: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 29 12:55:54.900: INFO: e2e test version: v1.15.11
May 29 12:55:54.901: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 12:55:54.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
May 29 12:55:54.969: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
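For reference, the proportional-scaling arithmetic that the steps below assert (old/new ReplicaSets held at 8/5 during the failed rollout, then rebalanced to 20/13 when the Deployment is scaled from 10 to 30 with maxSurge=3) can be sketched in Go. The helper below is an illustrative simplification, not the controller's actual implementation, which also distributes rounding leftovers across ReplicaSets; the allowed total (spec.replicas + maxSurge) it divides by is what the deployment.kubernetes.io/max-replicas annotation on each ReplicaSet records (33 after the scale, as seen in the dumps below).

package main

import "fmt"

// proportionalSize sketches how a Deployment resize is split across its
// ReplicaSets mid-rollout: each ReplicaSet keeps its share of the allowed
// total (spec.replicas + maxSurge), rounded to the nearest integer.
func proportionalSize(rsReplicas, newReplicas, oldReplicas, maxSurge int64) int64 {
	newAllowed := newReplicas + maxSurge // 30 + 3 = 33
	oldAllowed := oldReplicas + maxSurge // 10 + 3 = 13
	// integer round-to-nearest of rsReplicas * newAllowed / oldAllowed
	return (2*rsReplicas*newAllowed + oldAllowed) / (2 * oldAllowed)
}

func main() {
	// Old ReplicaSet held at 8 (10 - maxUnavailable), new one at 5 (surge cap):
	fmt.Println(proportionalSize(8, 30, 10, 3)) // 20, matches .spec.replicas = 20
	fmt.Println(proportionalSize(5, 30, 10, 3)) // 13, matches .spec.replicas = 13
}

Note that 20 + 13 = 33 = 30 + maxSurge, so the rebalanced ReplicaSets exactly fill the allowed total while preserving the 8:5 ratio as closely as integer rounding permits.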
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 29 12:55:54.971: INFO: Creating deployment "nginx-deployment"
May 29 12:55:54.975: INFO: Waiting for observed generation 1
May 29 12:55:57.596: INFO: Waiting for all required pods to come up
May 29 12:55:57.636: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
May 29 12:56:07.645: INFO: Waiting for deployment "nginx-deployment" to complete
May 29 12:56:07.649: INFO: Updating deployment "nginx-deployment" with a non-existent image
May 29 12:56:07.654: INFO: Updating deployment nginx-deployment
May 29 12:56:07.654: INFO: Waiting for observed generation 2
May 29 12:56:09.821: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 29 12:56:09.824: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 29 12:56:09.826: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 29 12:56:09.832: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 29 12:56:09.832: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 29 12:56:09.834: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 29 12:56:09.837: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
May 29 12:56:09.837: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
May 29 12:56:09.842: INFO: Updating deployment nginx-deployment
May 29 12:56:09.842: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
May 29 12:56:10.151: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 29 12:56:10.160: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 29 12:56:10.594: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-8851,SelfLink:/apis/apps/v1/namespaces/deployment-8851/deployments/nginx-deployment,UID:c14955a4-c3b1-440a-87d2-c4392a27c999,ResourceVersion:13539841,Generation:3,CreationTimestamp:2020-05-29 12:55:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name:
nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-29 12:56:08 +0000 UTC 2020-05-29 12:55:54 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-05-29 12:56:10 +0000 UTC 2020-05-29 12:56:10 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 29 12:56:10.757: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-8851,SelfLink:/apis/apps/v1/namespaces/deployment-8851/replicasets/nginx-deployment-55fb7cb77f,UID:d419a091-38e6-48b0-b8c6-68f373480e78,ResourceVersion:13539880,Generation:3,CreationTimestamp:2020-05-29 12:56:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c14955a4-c3b1-440a-87d2-c4392a27c999 0xc002248027 0xc002248028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} 
[] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 29 12:56:10.757: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 29 12:56:10.757: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-8851,SelfLink:/apis/apps/v1/namespaces/deployment-8851/replicasets/nginx-deployment-7b8c6f4498,UID:bc9b58a0-9f98-47d4-beef-9f32a17865cc,ResourceVersion:13539874,Generation:3,CreationTimestamp:2020-05-29 12:55:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c14955a4-c3b1-440a-87d2-c4392a27c999 0xc0022480f7 0xc0022480f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 29 12:56:10.948: INFO: Pod "nginx-deployment-55fb7cb77f-4tksd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4tksd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-55fb7cb77f-4tksd,UID:f9938efe-d79c-4d3e-8f6a-651d04ed9dc5,ResourceVersion:13539811,Generation:0,CreationTimestamp:2020-05-29 12:56:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d419a091-38e6-48b0-b8c6-68f373480e78 0xc002248a57 0xc002248a58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002248ad0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002248af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-05-29 12:56:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-29 12:56:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.948: INFO: Pod "nginx-deployment-55fb7cb77f-5kc46" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5kc46,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-55fb7cb77f-5kc46,UID:c5c5af18-3f6c-4083-9299-f4d8cebf1da7,ResourceVersion:13539816,Generation:0,CreationTimestamp:2020-05-29 12:56:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d419a091-38e6-48b0-b8c6-68f373480e78 0xc002248bc7 0xc002248bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002248c40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002248c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-29 12:56:08 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.948: INFO: Pod "nginx-deployment-55fb7cb77f-8bjq8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8bjq8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-55fb7cb77f-8bjq8,UID:4fd84158-1200-47d6-b3b7-2b864c2ea162,ResourceVersion:13539801,Generation:0,CreationTimestamp:2020-05-29 12:56:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d419a091-38e6-48b0-b8c6-68f373480e78 0xc002248d37 0xc002248d38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002248db0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002248dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-29 12:56:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.948: INFO: Pod "nginx-deployment-55fb7cb77f-8m44m" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8m44m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-55fb7cb77f-8m44m,UID:1070512c-83be-4d20-8e4a-9f898655f1d7,ResourceVersion:13539886,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d419a091-38e6-48b0-b8c6-68f373480e78 0xc002248ea7 0xc002248ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002248f20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002248f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-29 12:56:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.948: INFO: Pod "nginx-deployment-55fb7cb77f-8qvgf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8qvgf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-55fb7cb77f-8qvgf,UID:e96ad3f7-a9ed-4026-8c79-4bfdbec4237e,ResourceVersion:13539869,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d419a091-38e6-48b0-b8c6-68f373480e78 0xc002249017 0xc002249018}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002249090} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022490b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.948: INFO: Pod "nginx-deployment-55fb7cb77f-cgf9q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cgf9q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-55fb7cb77f-cgf9q,UID:d0edfe85-c7c7-43d4-ae1a-64ff0294cdd0,ResourceVersion:13539875,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d419a091-38e6-48b0-b8c6-68f373480e78 0xc002249137 0xc002249138}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022491b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022491d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.948: INFO: Pod "nginx-deployment-55fb7cb77f-kdfh5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kdfh5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-55fb7cb77f-kdfh5,UID:87396f39-934b-4ed3-90cd-cda4f32e9ad9,ResourceVersion:13539861,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d419a091-38e6-48b0-b8c6-68f373480e78 0xc002249257 0xc002249258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022492d0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0022492f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.948: INFO: Pod "nginx-deployment-55fb7cb77f-pppkb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pppkb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-55fb7cb77f-pppkb,UID:b07e0144-75a5-4ef2-a828-002437c6121e,ResourceVersion:13539873,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d419a091-38e6-48b0-b8c6-68f373480e78 0xc002249377 0xc002249378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022493f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002249410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.949: INFO: Pod "nginx-deployment-55fb7cb77f-t79zl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t79zl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-55fb7cb77f-t79zl,UID:498b33da-8541-451e-88f5-0cbbee871332,ResourceVersion:13539793,Generation:0,CreationTimestamp:2020-05-29 12:56:07 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d419a091-38e6-48b0-b8c6-68f373480e78 0xc002249497 0xc002249498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002249510} {node.kubernetes.io/unreachable Exists NoExecute 0xc002249530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-29 12:56:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.949: INFO: Pod "nginx-deployment-55fb7cb77f-tx7s2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tx7s2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-55fb7cb77f-tx7s2,UID:23f30bd6-7a74-44ee-ab60-8aecb64b0cdb,ResourceVersion:13539864,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d419a091-38e6-48b0-b8c6-68f373480e78 0xc002249607 0xc002249608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002249680} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022496a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.949: INFO: Pod "nginx-deployment-55fb7cb77f-whgx9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-whgx9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-55fb7cb77f-whgx9,UID:249f67cf-40ba-4698-8383-3383a029dcc0,ResourceVersion:13539814,Generation:0,CreationTimestamp:2020-05-29 12:56:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d419a091-38e6-48b0-b8c6-68f373480e78 0xc002249727 0xc002249728}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022497a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022497c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-29 12:56:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.949: INFO: Pod "nginx-deployment-55fb7cb77f-xb5rm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xb5rm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-55fb7cb77f-xb5rm,UID:bf602ea4-5e77-493b-9959-0ed2452fe928,ResourceVersion:13539862,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d419a091-38e6-48b0-b8c6-68f373480e78 0xc002249897 0xc002249898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002249910} {node.kubernetes.io/unreachable Exists NoExecute 0xc002249930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.949: INFO: Pod "nginx-deployment-55fb7cb77f-xt9lp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xt9lp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-55fb7cb77f-xt9lp,UID:42b4871d-4fc6-4f94-91d1-5a7f38c000cf,ResourceVersion:13539865,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d419a091-38e6-48b0-b8c6-68f373480e78 0xc0022499b7 0xc0022499b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002249a30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002249a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.949: INFO: Pod "nginx-deployment-7b8c6f4498-2zctr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2zctr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-2zctr,UID:6b1dd25c-ac07-45fe-a3a0-606ca134140e,ResourceVersion:13539884,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002249ad7 0xc002249ad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002249b50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002249b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-29 12:56:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.949: INFO: Pod "nginx-deployment-7b8c6f4498-4qsxs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4qsxs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-4qsxs,UID:19be0dc0-8b00-4035-9537-c31019b014d3,ResourceVersion:13539867,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002249c37 0xc002249c38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002249cb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002249cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.949: INFO: Pod "nginx-deployment-7b8c6f4498-5nsmd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5nsmd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-5nsmd,UID:b672c4ca-ece1-4936-b71b-6c57e7288ea4,ResourceVersion:13539724,Generation:0,CreationTimestamp:2020-05-29 12:55:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002249d57 0xc002249d58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002249dd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002249df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.165,StartTime:2020-05-29 12:55:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-29 12:56:03 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://72fcae17296409fdaa4fe65a755e2550a6f4a69336058410877344f45a3df339}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.950: INFO: Pod "nginx-deployment-7b8c6f4498-5wbxk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5wbxk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-5wbxk,UID:6506598a-33b0-429e-ac7c-68ae6d8ebb3c,ResourceVersion:13539753,Generation:0,CreationTimestamp:2020-05-29 12:55:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002249ec7 0xc002249ec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002249f40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002249f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.78,StartTime:2020-05-29 12:55:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-29 12:56:05 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://81d4b185dd096c71a75ea1cb285b40a173000441b6bf881fac760b3b15849ce8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.950: INFO: Pod "nginx-deployment-7b8c6f4498-7czf4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7czf4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-7czf4,UID:80a37d79-7e87-45b5-8ad4-ff1f420a3fa7,ResourceVersion:13539757,Generation:0,CreationTimestamp:2020-05-29 12:55:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d36037 0xc002d36038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d360b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d360d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.169,StartTime:2020-05-29 12:55:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-29 12:56:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0efd3c26115ea795b7baf9107ec95a781145d3b57e9199750d1fc09027b4f618}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.950: INFO: Pod "nginx-deployment-7b8c6f4498-7dh5k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7dh5k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-7dh5k,UID:528c3b52-5a5e-450b-81a2-7cb3712466b4,ResourceVersion:13539848,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d361a7 0xc002d361a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d36220} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d36240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.950: INFO: Pod "nginx-deployment-7b8c6f4498-7pcnn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7pcnn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-7pcnn,UID:e8d84ec7-d8a4-4608-b793-1788f4ac12bf,ResourceVersion:13539871,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d362c7 0xc002d362c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d36340} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002d36360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.950: INFO: Pod "nginx-deployment-7b8c6f4498-7wlz8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7wlz8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-7wlz8,UID:58f41b06-d366-4688-9e48-d155c68babf6,ResourceVersion:13539750,Generation:0,CreationTimestamp:2020-05-29 12:55:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d363e7 0xc002d363e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d36460} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d36480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.76,StartTime:2020-05-29 12:55:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-29 12:56:04 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
containerd://5d67d73bd2e4f52744b399ef5a55ed06ef04295fc5ccc6cf2c6750f7ecb6fa93}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.950: INFO: Pod "nginx-deployment-7b8c6f4498-86dgf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-86dgf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-86dgf,UID:409f5aa4-24a7-4914-9498-119bdcb7faca,ResourceVersion:13539870,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d36557 0xc002d36558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d365d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d365f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.950: INFO: Pod "nginx-deployment-7b8c6f4498-8d8v2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8d8v2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-8d8v2,UID:7b3bfaae-cb80-4872-8f52-c5044cee3050,ResourceVersion:13539747,Generation:0,CreationTimestamp:2020-05-29 12:55:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d36677 
0xc002d36678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d366f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d36710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.77,StartTime:2020-05-29 12:55:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-29 12:56:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f5b0f692ff779b61d68b17924a6084bf4c90c165e0d8a80b40f58c18784f6e13}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.950: INFO: Pod "nginx-deployment-7b8c6f4498-czx2x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-czx2x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-czx2x,UID:a2ab389c-4325-4a15-9937-24d87834986f,ResourceVersion:13539876,Generation:0,CreationTimestamp:2020-05-29 12:56:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d367e7 0xc002d367e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d36860} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d36880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-29 12:56:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.950: INFO: Pod "nginx-deployment-7b8c6f4498-d56sj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d56sj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-d56sj,UID:be79dd49-cd49-4e3e-a15c-c19934993012,ResourceVersion:13539736,Generation:0,CreationTimestamp:2020-05-29 12:55:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d36947 0xc002d36948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d369c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d369e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.166,StartTime:2020-05-29 12:55:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-29 12:56:05 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c31e7af347d378e200cf7d22b325986b4c81cd91a4d4c4602ec2e092b022961d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.951: INFO: Pod "nginx-deployment-7b8c6f4498-d66dl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d66dl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-d66dl,UID:f1841672-4677-4cc1-8fa4-7e38eff772f4,ResourceVersion:13539846,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d36ab7 0xc002d36ab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d36b30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d36b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.951: INFO: Pod "nginx-deployment-7b8c6f4498-dnp75" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dnp75,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-dnp75,UID:f96a32c0-7b4a-4e94-8975-1241dbf91f7e,ResourceVersion:13539717,Generation:0,CreationTimestamp:2020-05-29 12:55:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d36bd7 0xc002d36bd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d36c50} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002d36c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.75,StartTime:2020-05-29 12:55:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-29 12:56:02 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a3dd166e1480deded2509832922efdb20a68efc81eb823b17f0d43187f7aa4e6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.951: INFO: Pod "nginx-deployment-7b8c6f4498-f9m6f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f9m6f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-f9m6f,UID:57b6b54d-34ad-43d3-acca-8bbdd60f2b0c,ResourceVersion:13539849,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d36d47 0xc002d36d48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d36dc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d36de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.951: INFO: Pod "nginx-deployment-7b8c6f4498-hjqfx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hjqfx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-hjqfx,UID:032332da-861a-48c7-80ff-527213a44284,ResourceVersion:13539866,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d36e67 0xc002d36e68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d36ee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d36f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.951: INFO: Pod "nginx-deployment-7b8c6f4498-m9crw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-m9crw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-m9crw,UID:ed251249-24aa-4744-9180-8b936e2f1991,ResourceVersion:13539713,Generation:0,CreationTimestamp:2020-05-29 12:55:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d36f87 
0xc002d36f88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d37000} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d37020}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:55:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.74,StartTime:2020-05-29 12:55:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-29 12:56:01 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6e30014b9aedeb3f8f40d35fd6ab6e6d1f9ee12b01a5467514f461c25f738bd9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.951: INFO: Pod "nginx-deployment-7b8c6f4498-nlftz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nlftz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-nlftz,UID:dae4c246-c472-47f3-ae2a-0272979b213a,ResourceVersion:13539838,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d370f7 0xc002d370f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d37170} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d37190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.951: INFO: Pod "nginx-deployment-7b8c6f4498-sfthk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sfthk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-sfthk,UID:e8f44d73-4542-4331-9a87-7b2fe501413a,ResourceVersion:13539859,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d37217 0xc002d37218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d37290} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d372b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 12:56:10.951: INFO: Pod "nginx-deployment-7b8c6f4498-w2lc9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w2lc9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8851,SelfLink:/api/v1/namespaces/deployment-8851/pods/nginx-deployment-7b8c6f4498-w2lc9,UID:f77f89b8-7dc5-4aa9-9b5b-4d2e9b538851,ResourceVersion:13539872,Generation:0,CreationTimestamp:2020-05-29 12:56:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 bc9b58a0-9f98-47d4-beef-9f32a17865cc 0xc002d37337 0xc002d37338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z42fr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z42fr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z42fr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d373b0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002d373d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 12:56:10 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 12:56:10.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8851" for this suite.
May 29 12:56:37.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 12:56:37.333: INFO: namespace deployment-8851 deletion completed in 26.259413547s
• [SLOW TEST:42.431 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 12:56:37.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 12:57:11.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2433" for this suite.
May 29 12:57:17.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 12:57:17.191: INFO: namespace namespaces-2433 deletion completed in 6.084064773s
STEP: Destroying namespace "nsdeletetest-1235" for this suite.
May 29 12:57:17.193: INFO: Namespace nsdeletetest-1235 was already deleted
STEP: Destroying namespace "nsdeletetest-9495" for this suite.
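The proportional-scaling behaviour the Deployment test above verifies is worth a worked example. When a Deployment is scaled while a rollout is in flight, the controller does not collapse the rollout; it splits the additional replicas across all active ReplicaSets in proportion to their current sizes. Below is a minimal, self-contained Go sketch of that arithmetic. The leftover-distribution rule used here (largest fractional remainder first) is an assumption chosen because it reproduces the kind of 8/5 -> 20/13 split this test checks for; it is not a verbatim copy of the deployment controller's code.

```go
package main

import (
	"fmt"
	"sort"
)

// proportionalScale distributes delta additional replicas across ReplicaSets
// in proportion to their current sizes. It floors each proportional share,
// then hands the leftover replicas out largest-remainder-first (an assumed
// tie-break rule, not the deployment controller's exact algorithm).
func proportionalScale(current []int, delta int) []int {
	total := 0
	for _, n := range current {
		total += n
	}
	out := make([]int, len(current))
	rem := make([]int, len(current)) // remainder numerators, for leftover ordering
	assigned := 0
	for i, n := range current {
		share := n * delta / total // floor of the proportional share
		out[i] = n + share
		rem[i] = n * delta % total
		assigned += share
	}
	order := make([]int, len(current))
	for i := range order {
		order[i] = i
	}
	sort.Slice(order, func(a, b int) bool { return rem[order[a]] > rem[order[b]] })
	for i := 0; i < delta-assigned; i++ {
		out[order[i]]++ // one leftover replica each, biggest remainder first
	}
	return out
}

func main() {
	// Two ReplicaSets at 8 and 5 replicas (13 total). With a target of 30
	// and an assumed absolute maxSurge of 3, up to 33 replicas are allowed,
	// so 20 are added proportionally across the two ReplicaSets.
	fmt.Println(proportionalScale([]int{8, 5}, 20)) // prints [20 13]
}
```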
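The STEP sequence above (create a test namespace, run a pod in it, delete the namespace, recreate it, then verify no pods survive) maps onto client-go roughly as follows. This is a hedged sketch against the context-free client-go signatures contemporary with this v1.15 suite; the namespace and pod names are illustrative, error handling is reduced to panics, and the wait for the namespace to be fully removed is elided.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Create a throwaway namespace and a pod inside it (names illustrative).
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "nsdeletetest-demo"}}
	if _, err := cs.CoreV1().Namespaces().Create(ns); err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{
			{Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"},
		}},
	}
	if _, err := cs.CoreV1().Pods("nsdeletetest-demo").Create(pod); err != nil {
		panic(err)
	}

	// Deleting the namespace cascades to everything inside it.
	if err := cs.CoreV1().Namespaces().Delete("nsdeletetest-demo", &metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// Once the namespace is gone and recreated, no pods should remain.
	// (The real test polls until removal completes; that wait is elided here.)
	pods, err := cs.CoreV1().Pods("nsdeletetest-demo").List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pods remaining: %d\n", len(pods.Items))
}
```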
May 29 12:57:23.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 12:57:23.286: INFO: namespace nsdeletetest-9495 deletion completed in 6.093271004s
• [SLOW TEST:45.953 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 12:57:23.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-0406cb6b-156a-4de4-ae9d-154770ec4206 in namespace container-probe-1693
May 29 12:57:27.375: INFO: Started pod liveness-0406cb6b-156a-4de4-ae9d-154770ec4206 in namespace container-probe-1693
STEP: checking the pod's current state and verifying that restartCount is present
May 29 12:57:27.378: INFO: Initial restart count of pod liveness-0406cb6b-156a-4de4-ae9d-154770ec4206 is 0
May 29 12:57:49.428: INFO: Restart count of pod container-probe-1693/liveness-0406cb6b-156a-4de4-ae9d-154770ec4206 is now 1 (22.050114778s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 12:57:49.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1693" for this suite.
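The probe test above watches restartCount climb once the container's /healthz endpoint starts failing: the kubelet probes over HTTP, counts consecutive failures, and restarts the container when the failure threshold is hit (here within about 22s of the initial check). A sketch of the kind of probe spec involved, using the v1.15-era API types where the probe handler field is still named Handler; the image, port, and threshold values are illustrative rather than copied from the test source.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A container whose liveness is checked over HTTP; once /healthz starts
	// returning non-2xx, the kubelet restarts the container and bumps
	// status.containerStatuses[].restartCount, which is what the test asserts on.
	c := corev1.Container{
		Name:  "liveness",
		Image: "docker.io/library/nginx:1.14-alpine", // illustrative image
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/healthz",
					Port: intstr.FromInt(8080), // illustrative port
				},
			},
			InitialDelaySeconds: 15, // let the container come up first
			PeriodSeconds:       3,  // probe every 3 seconds
			FailureThreshold:    1,  // restart on the first failure
		},
	}
	fmt.Printf("%+v\n", *c.LivenessProbe)
}
```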
May 29 12:57:55.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 12:57:55.585: INFO: namespace container-probe-1693 deletion completed in 6.124661515s
• [SLOW TEST:32.298 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 12:57:55.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8694
I0529 12:57:55.674284 7 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8694, replica count: 1
I0529 12:57:56.724779 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0529 12:57:57.725007 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0529 12:57:58.725386 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 29 12:57:58.857: INFO: Created: latency-svc-bxpn5
May 29 12:57:58.863: INFO: Got endpoints: latency-svc-bxpn5 [37.48536ms]
May 29 12:57:58.888: INFO: Created: latency-svc-cbd8j
May 29 12:57:58.893: INFO: Got endpoints: latency-svc-cbd8j [30.395426ms]
May 29 12:57:58.936: INFO: Created: latency-svc-pm8kt
May 29 12:57:58.941: INFO: Got endpoints: latency-svc-pm8kt [78.120642ms]
May 29 12:57:58.966: INFO: Created: latency-svc-wgpzx
May 29 12:57:58.975: INFO: Got endpoints: latency-svc-wgpzx [111.884718ms]
May 29 12:57:59.031: INFO: Created: latency-svc-cn4mg
May 29 12:57:59.098: INFO: Got endpoints: latency-svc-cn4mg [235.073083ms]
May 29 12:57:59.101: INFO: Created: latency-svc-2wqkc
May 29 12:57:59.113: INFO: Got endpoints: latency-svc-2wqkc [250.502106ms]
May 29 12:57:59.134: INFO: Created: latency-svc-cd26k
May 29 12:57:59.143: INFO: Got endpoints: latency-svc-cd26k [280.175422ms]
May 29 12:57:59.164: INFO: Created: latency-svc-vhsv5
May 29 12:57:59.174: INFO: Got endpoints: latency-svc-vhsv5 [311.224461ms]
May 29 12:57:59.242: INFO: Created: latency-svc-xpr82
May 29 12:57:59.245: INFO: Got endpoints: latency-svc-xpr82 [382.31229ms]
May 29 12:57:59.294: INFO: Created: latency-svc-lvntx
May 29 12:57:59.315: INFO: Got endpoints: latency-svc-lvntx [452.128101ms]
May 29 12:57:59.337: INFO: Created: latency-svc-bmqg8
May 29 12:57:59.370: INFO: Got endpoints: latency-svc-bmqg8 [506.800893ms]
May 29 12:57:59.398: INFO: Created: latency-svc-2kttt May 29
12:57:59.424: INFO: Got endpoints: latency-svc-2kttt [560.770268ms]
May 29 12:57:59.518: INFO: Created: latency-svc-g66dl
May 29 12:57:59.523: INFO: Got endpoints: latency-svc-g66dl [660.098577ms]
May 29 12:57:59.608: INFO: Created: latency-svc-b7slq
May 29 12:57:59.673: INFO: Got endpoints: latency-svc-b7slq [809.934576ms]
May 29 12:57:59.675: INFO: Created: latency-svc-n9ltq
May 29 12:57:59.685: INFO: Got endpoints: latency-svc-n9ltq [821.967484ms]
May 29 12:57:59.709: INFO: Created: latency-svc-dcvjc
May 29 12:57:59.723: INFO: Got endpoints: latency-svc-dcvjc [859.797307ms]
May 29 12:57:59.739: INFO: Created: latency-svc-qb2h9
May 29 12:57:59.752: INFO: Got endpoints: latency-svc-qb2h9 [858.611162ms]
May 29 12:57:59.823: INFO: Created: latency-svc-7856t
May 29 12:57:59.830: INFO: Got endpoints: latency-svc-7856t [888.65022ms]
May 29 12:57:59.882: INFO: Created: latency-svc-7rcsr
May 29 12:57:59.902: INFO: Got endpoints: latency-svc-7rcsr [927.383006ms]
May 29 12:57:59.960: INFO: Created: latency-svc-lrq2z
May 29 12:57:59.963: INFO: Got endpoints: latency-svc-lrq2z [865.189329ms]
May 29 12:57:59.992: INFO: Created: latency-svc-zw52r
May 29 12:57:59.995: INFO: Got endpoints: latency-svc-zw52r [881.861297ms]
May 29 12:58:00.098: INFO: Created: latency-svc-rtmfc
May 29 12:58:00.101: INFO: Got endpoints: latency-svc-rtmfc [958.263504ms]
May 29 12:58:00.140: INFO: Created: latency-svc-vhfsm
May 29 12:58:00.153: INFO: Got endpoints: latency-svc-vhfsm [978.894351ms]
May 29 12:58:00.172: INFO: Created: latency-svc-s6pxm
May 29 12:58:00.182: INFO: Got endpoints: latency-svc-s6pxm [936.959174ms]
May 29 12:58:00.236: INFO: Created: latency-svc-8rjf4
May 29 12:58:00.238: INFO: Got endpoints: latency-svc-8rjf4 [923.265369ms]
May 29 12:58:00.260: INFO: Created: latency-svc-b8s2p
May 29 12:58:00.273: INFO: Got endpoints: latency-svc-b8s2p [903.283951ms]
May 29 12:58:00.296: INFO: Created: latency-svc-v8k76
May 29 12:58:00.309: INFO: Got endpoints: latency-svc-v8k76 [885.771995ms]
May 29 12:58:00.334: INFO: Created: latency-svc-smjnl
May 29 12:58:00.373: INFO: Got endpoints: latency-svc-smjnl [849.960574ms]
May 29 12:58:00.394: INFO: Created: latency-svc-95ssv
May 29 12:58:00.406: INFO: Got endpoints: latency-svc-95ssv [733.084929ms]
May 29 12:58:00.449: INFO: Created: latency-svc-kfvfz
May 29 12:58:00.510: INFO: Got endpoints: latency-svc-kfvfz [825.518003ms]
May 29 12:58:00.518: INFO: Created: latency-svc-hh9g7
May 29 12:58:00.529: INFO: Got endpoints: latency-svc-hh9g7 [806.508512ms]
May 29 12:58:00.560: INFO: Created: latency-svc-bcdks
May 29 12:58:00.584: INFO: Got endpoints: latency-svc-bcdks [831.795066ms]
May 29 12:58:00.603: INFO: Created: latency-svc-wjgnn
May 29 12:58:00.654: INFO: Got endpoints: latency-svc-wjgnn [824.535156ms]
May 29 12:58:00.695: INFO: Created: latency-svc-5tjng
May 29 12:58:00.704: INFO: Got endpoints: latency-svc-5tjng [801.630443ms]
May 29 12:58:00.722: INFO: Created: latency-svc-sfk26
May 29 12:58:00.734: INFO: Got endpoints: latency-svc-sfk26 [771.048858ms]
May 29 12:58:00.752: INFO: Created: latency-svc-9ckjs
May 29 12:58:00.799: INFO: Got endpoints: latency-svc-9ckjs [803.274011ms]
May 29 12:58:00.808: INFO: Created: latency-svc-j86bb
May 29 12:58:00.819: INFO: Got endpoints: latency-svc-j86bb [717.624712ms]
May 29 12:58:00.839: INFO: Created: latency-svc-4ht9p
May 29 12:58:00.861: INFO: Got endpoints: latency-svc-4ht9p [708.346709ms]
May 29 12:58:00.890: INFO: Created: latency-svc-95zz7
May 29 12:58:00.930: INFO: Got endpoints: latency-svc-95zz7 [747.838009ms]
May 29 12:58:00.944: INFO: Created: latency-svc-tcq95
May 29 12:58:00.962: INFO: Got endpoints: latency-svc-tcq95 [723.289455ms]
May 29 12:58:00.982: INFO: Created: latency-svc-h4hnb
May 29 12:58:00.998: INFO: Got endpoints: latency-svc-h4hnb [724.617071ms]
May 29 12:58:01.074: INFO: Created: latency-svc-llrlj
May 29 12:58:01.088: INFO: Got endpoints: latency-svc-llrlj [778.958136ms]
May 29 12:58:01.124: INFO: Created: latency-svc-phgnx
May 29 12:58:01.595: INFO: Got endpoints: latency-svc-phgnx [1.221648434s]
May 29 12:58:01.629: INFO: Created: latency-svc-h9c2n
May 29 12:58:02.080: INFO: Got endpoints: latency-svc-h9c2n [991.579684ms]
May 29 12:58:02.119: INFO: Created: latency-svc-hx2v6
May 29 12:58:02.143: INFO: Got endpoints: latency-svc-hx2v6 [1.736985795s]
May 29 12:58:02.167: INFO: Created: latency-svc-dm59f
May 29 12:58:02.179: INFO: Got endpoints: latency-svc-dm59f [1.668820318s]
May 29 12:58:02.250: INFO: Created: latency-svc-t9j7t
May 29 12:58:02.257: INFO: Got endpoints: latency-svc-t9j7t [1.727912295s]
May 29 12:58:02.276: INFO: Created: latency-svc-v2js4
May 29 12:58:02.287: INFO: Got endpoints: latency-svc-v2js4 [1.703640608s]
May 29 12:58:02.304: INFO: Created: latency-svc-dmdx9
May 29 12:58:02.318: INFO: Got endpoints: latency-svc-dmdx9 [1.663666877s]
May 29 12:58:02.341: INFO: Created: latency-svc-4dh7j
May 29 12:58:02.385: INFO: Got endpoints: latency-svc-4dh7j [1.681514497s]
May 29 12:58:02.409: INFO: Created: latency-svc-8sq2j
May 29 12:58:02.441: INFO: Got endpoints: latency-svc-8sq2j [1.706777543s]
May 29 12:58:02.480: INFO: Created: latency-svc-k4xvt
May 29 12:58:02.535: INFO: Got endpoints: latency-svc-k4xvt [1.735964118s]
May 29 12:58:02.562: INFO: Created: latency-svc-j2gs5
May 29 12:58:02.573: INFO: Got endpoints: latency-svc-j2gs5 [1.754309161s]
May 29 12:58:02.592: INFO: Created: latency-svc-bjcff
May 29 12:58:02.604: INFO: Got endpoints: latency-svc-bjcff [1.742786716s]
May 29 12:58:02.697: INFO: Created: latency-svc-cfcv9
May 29 12:58:02.706: INFO: Got endpoints: latency-svc-cfcv9 [1.775703664s]
May 29 12:58:02.726: INFO: Created: latency-svc-mjnps
May 29 12:58:02.748: INFO: Got endpoints: latency-svc-mjnps [1.786128628s]
May 29 12:58:02.782: INFO: Created: latency-svc-zbc8k
May 29 12:58:02.790: INFO: Got endpoints: latency-svc-zbc8k [1.792596277s]
May 29 12:58:02.840: INFO: Created: latency-svc-67btx
May 29 12:58:02.845: INFO: Got endpoints: latency-svc-67btx [1.249766902s]
May 29 12:58:02.870: INFO: Created: latency-svc-98j5l
May 29 12:58:02.881: INFO: Got endpoints: latency-svc-98j5l [801.119966ms]
May 29 12:58:02.928: INFO: Created: latency-svc-h2pqh
May 29 12:58:02.978: INFO: Got endpoints: latency-svc-h2pqh [835.0185ms]
May 29 12:58:03.006: INFO: Created: latency-svc-8n7fg
May 29 12:58:03.036: INFO: Got endpoints: latency-svc-8n7fg [856.909386ms]
May 29 12:58:03.074: INFO: Created: latency-svc-dz8tc
May 29 12:58:03.109: INFO: Got endpoints: latency-svc-dz8tc [851.966891ms]
May 29 12:58:03.150: INFO: Created: latency-svc-7mjwq
May 29 12:58:03.180: INFO: Got endpoints: latency-svc-7mjwq [892.469974ms]
May 29 12:58:03.254: INFO: Created: latency-svc-ms4pg
May 29 12:58:03.257: INFO: Got endpoints: latency-svc-ms4pg [938.662744ms]
May 29 12:58:03.303: INFO: Created: latency-svc-dqkh9
May 29 12:58:03.318: INFO: Got endpoints: latency-svc-dqkh9 [933.015495ms]
May 29 12:58:03.338: INFO: Created: latency-svc-4lxtm
May 29 12:58:03.379: INFO: Got endpoints: latency-svc-4lxtm [938.263275ms]
May 29 12:58:03.402: INFO: Created: latency-svc-7cm4h
May 29 12:58:03.415: INFO: Got endpoints: latency-svc-7cm4h [879.98111ms]
May 29 12:58:03.439: INFO: Created: latency-svc-fhj99
May 29 12:58:03.451: INFO: Got endpoints: latency-svc-fhj99 [877.478546ms]
May 29 12:58:03.474: INFO: Created: latency-svc-xvdd2
May 29 12:58:03.517: INFO: Got endpoints: latency-svc-xvdd2 [912.450326ms]
May 29 12:58:03.524: INFO: Created: latency-svc-k5pcl
May 29 12:58:03.538: INFO: Got endpoints: latency-svc-k5pcl [832.181698ms]
May 29 12:58:03.554: INFO: Created: latency-svc-vpbvv
May 29 12:58:03.568: INFO: Got endpoints: latency-svc-vpbvv [820.415706ms]
May 29 12:58:03.607: INFO: Created: latency-svc-phxhc
May 29 12:58:03.660: INFO: Got endpoints: latency-svc-phxhc [869.745044ms]
May 29 12:58:03.663: INFO: Created: latency-svc-mx542
May 29 12:58:03.671: INFO: Got endpoints: latency-svc-mx542 [825.957597ms]
May 29 12:58:03.728: INFO: Created: latency-svc-cttn5
May 29 12:58:03.737: INFO: Got endpoints: latency-svc-cttn5 [855.844814ms]
May 29 12:58:03.805: INFO: Created: latency-svc-8pcn4
May 29 12:58:03.815: INFO: Got endpoints: latency-svc-8pcn4 [836.863126ms]
May 29 12:58:03.846: INFO: Created: latency-svc-trrkw
May 29 12:58:03.864: INFO: Got endpoints: latency-svc-trrkw [827.145326ms]
May 29 12:58:03.890: INFO: Created: latency-svc-n466g
May 29 12:58:03.948: INFO: Got endpoints: latency-svc-n466g [839.009108ms]
May 29 12:58:03.951: INFO: Created: latency-svc-x2n2n
May 29 12:58:03.960: INFO: Got endpoints: latency-svc-x2n2n [779.970047ms]
May 29 12:58:03.979: INFO: Created: latency-svc-vfnq5
May 29 12:58:03.991: INFO: Got endpoints: latency-svc-vfnq5 [733.530885ms]
May 29 12:58:04.027: INFO: Created: latency-svc-ctrkh
May 29 12:58:04.098: INFO: Got endpoints: latency-svc-ctrkh [779.199207ms]
May 29 12:58:04.099: INFO: Created: latency-svc-xtlks
May 29 12:58:04.112: INFO: Got endpoints: latency-svc-xtlks [732.826105ms]
May 29 12:58:04.136: INFO: Created: latency-svc-f5ng4
May 29 12:58:04.153: INFO: Got endpoints: latency-svc-f5ng4 [738.333322ms]
May 29 12:58:04.170: INFO: Created: latency-svc-mv2pm
May 29 12:58:04.247: INFO: Got endpoints: latency-svc-mv2pm [796.090447ms]
May 29 12:58:04.268: INFO: Created: latency-svc-xrprz
May 29 12:58:04.284: INFO: Got endpoints: latency-svc-xrprz [766.981091ms]
May 29 12:58:04.304: INFO: Created: latency-svc-jz55d
May 29 12:58:04.320: INFO: Got endpoints: latency-svc-jz55d [782.098391ms]
May 29 12:58:04.340: INFO: Created: latency-svc-vw7xv
May 29 12:58:04.379: INFO: Got endpoints: latency-svc-vw7xv [810.854929ms]
May 29 12:58:04.405: INFO: Created: latency-svc-5jngh
May 29 12:58:04.459: INFO: Got endpoints: latency-svc-5jngh [798.395738ms]
May 29 12:58:04.553: INFO: Created: latency-svc-nx6dx
May 29 12:58:04.556: INFO: Got endpoints: latency-svc-nx6dx [885.581609ms]
May 29 12:58:04.586: INFO: Created: latency-svc-jpcz6
May 29 12:58:04.597: INFO: Got endpoints: latency-svc-jpcz6 [859.67679ms]
May 29 12:58:04.632: INFO: Created: latency-svc-sh85m
May 29 12:58:04.691: INFO: Got endpoints: latency-svc-sh85m [875.389759ms]
May 29 12:58:04.700: INFO: Created: latency-svc-rfkff
May 29 12:58:04.712: INFO: Got endpoints: latency-svc-rfkff [847.931833ms]
May 29 12:58:04.730: INFO: Created: latency-svc-pzz7k
May 29 12:58:04.742: INFO: Got endpoints: latency-svc-pzz7k [793.640639ms]
May 29 12:58:04.761: INFO: Created: latency-svc-ljmqz
May 29 12:58:04.782: INFO: Got endpoints: latency-svc-ljmqz [821.930352ms]
May 29 12:58:04.840: INFO: Created: latency-svc-79gsj
May 29 12:58:04.851: INFO: Got endpoints: latency-svc-79gsj [860.296482ms]
May 29 12:58:04.902: INFO: Created: latency-svc-rbtcc
May 29 12:58:04.911: INFO: Got endpoints: latency-svc-rbtcc [813.087934ms]
May 29 12:58:05.003: INFO: Created: latency-svc-g2stm
May 29 12:58:05.005: INFO: Got endpoints: latency-svc-g2stm [892.780762ms]
May 29 12:58:05.034: INFO: Created: latency-svc-29wrq
May 29 12:58:05.050: INFO: Got endpoints: latency-svc-29wrq [896.403436ms]
May 29 12:58:05.072: INFO: Created: latency-svc-blc9g
May 29 12:58:05.080: INFO: Got endpoints: latency-svc-blc9g [833.224058ms]
May 29 12:58:05.100: INFO: Created: latency-svc-wl57h
May 29 12:58:05.134: INFO: Got endpoints: latency-svc-wl57h [849.835348ms]
May 29 12:58:05.163: INFO: Created: latency-svc-r6f5t
May 29 12:58:05.176: INFO: Got endpoints: latency-svc-r6f5t [855.967956ms]
May 29 12:58:05.232: INFO: Created: latency-svc-t7plf
May 29 12:58:05.283: INFO: Got endpoints: latency-svc-t7plf [903.988548ms]
May 29 12:58:05.299: INFO: Created: latency-svc-qq5w8
May 29 12:58:05.311: INFO: Got endpoints: latency-svc-qq5w8 [852.193387ms]
May 29 12:58:05.330: INFO: Created: latency-svc-gxhcd
May 29 12:58:05.354: INFO: Got endpoints: latency-svc-gxhcd [797.180511ms]
May 29 12:58:05.415: INFO: Created: latency-svc-smx69
May 29 12:58:05.418: INFO: Got endpoints: latency-svc-smx69 [821.252664ms]
May 29 12:58:05.448: INFO: Created: latency-svc-gsvlr
May 29 12:58:05.462: INFO: Got endpoints: latency-svc-gsvlr [771.173902ms]
May 29 12:58:05.478: INFO: Created: latency-svc-ktr2n
May 29 12:58:05.492: INFO: Got endpoints: latency-svc-ktr2n [780.438475ms]
May 29 12:58:05.510: INFO: Created: latency-svc-89s82
May 29 12:58:05.570: INFO: Got endpoints: latency-svc-89s82 [828.319929ms]
May 29 12:58:05.572: INFO: Created: latency-svc-9zhfv
May 29 12:58:05.583: INFO: Got endpoints: latency-svc-9zhfv [800.702903ms]
May 29 12:58:05.616: INFO: Created: latency-svc-xh9kf
May 29 12:58:05.631: INFO: Got endpoints: latency-svc-xh9kf [779.782576ms]
May 29 12:58:05.652: INFO: Created: latency-svc-fbdlk
May 29 12:58:05.667: INFO: Got endpoints: latency-svc-fbdlk [756.179663ms]
May 29 12:58:05.732: INFO: Created: latency-svc-k8xhw
May 29 12:58:05.739: INFO: Got endpoints: latency-svc-k8xhw [734.219461ms]
May 29 12:58:05.757: INFO: Created: latency-svc-cjmbr
May 29 12:58:05.770: INFO: Got endpoints: latency-svc-cjmbr [720.235932ms]
May 29 12:58:05.826: INFO: Created: latency-svc-7hhll
May 29 12:58:05.906: INFO: Got endpoints: latency-svc-7hhll [825.975862ms]
May 29 12:58:05.930: INFO: Created: latency-svc-pflds
May 29 12:58:05.944: INFO: Got endpoints: latency-svc-pflds [810.118698ms]
May 29 12:58:05.988: INFO: Created: latency-svc-mrgfj
May 29 12:58:06.005: INFO: Got endpoints: latency-svc-mrgfj [828.241376ms]
May 29 12:58:06.051: INFO: Created: latency-svc-w4x9m
May 29 12:58:06.110: INFO: Got endpoints: latency-svc-w4x9m [826.370306ms]
May 29 12:58:06.200: INFO: Created: latency-svc-kvs2p
May 29 12:58:06.218: INFO: Got endpoints: latency-svc-kvs2p [907.01991ms]
May 29 12:58:06.243: INFO: Created: latency-svc-lfg6z
May 29 12:58:06.254: INFO: Got endpoints: latency-svc-lfg6z [900.479321ms]
May 29 12:58:06.272: INFO: Created: latency-svc-mlktt
May 29 12:58:06.284: INFO: Got endpoints: latency-svc-mlktt [865.770665ms]
May 29 12:58:06.332: INFO: Created: latency-svc-lnp6p
May 29 12:58:06.334: INFO: Got endpoints: latency-svc-lnp6p [872.226324ms]
May 29 12:58:06.361: INFO: Created: latency-svc-fxlcs
May 29 12:58:06.375: INFO: Got endpoints: latency-svc-fxlcs [882.634823ms]
May 29 12:58:06.411: INFO: Created: latency-svc-qq6nh
May 29 12:58:06.423: INFO: Got endpoints: latency-svc-qq6nh [852.444548ms]
May 29 12:58:06.471: INFO: Created: latency-svc-dq2pd
May 29 12:58:06.483: INFO: Got endpoints: latency-svc-dq2pd [900.354173ms]
May 29 12:58:06.505: INFO: Created: latency-svc-jnq5b
May 29 12:58:06.520: INFO: Got endpoints: latency-svc-jnq5b [888.765077ms]
May 29 12:58:06.547: INFO: Created: latency-svc-j5phk
May 29 12:58:06.555: INFO: Got endpoints: latency-svc-j5phk [888.407556ms]
May 29 12:58:06.596: INFO: Created: latency-svc-l7q89
May 29 12:58:06.610: INFO: Got endpoints: latency-svc-l7q89 [870.643745ms]
May 29 12:58:06.626: INFO: Created: latency-svc-g4g5m
May 29 12:58:06.641: INFO: Got endpoints: latency-svc-g4g5m [870.833084ms]
May 29 12:58:06.661: INFO: Created: latency-svc-l7ntz
May 29 12:58:06.685: INFO: Got endpoints: latency-svc-l7ntz [778.6985ms]
May 29 12:58:06.751: INFO: Created: latency-svc-phm8f
May 29 12:58:06.763: INFO: Got endpoints: latency-svc-phm8f [819.5304ms]
May 29 12:58:06.795: INFO: Created: latency-svc-qmt8z
May 29 12:58:06.809: INFO: Got endpoints: latency-svc-qmt8z [804.846578ms]
May 29 12:58:06.831: INFO: Created: latency-svc-7xtxf
May 29 12:58:06.846: INFO: Got endpoints: latency-svc-7xtxf [735.957197ms]
May 29 12:58:06.895: INFO: Created: latency-svc-hlf2h
May 29 12:58:06.897: INFO: Got endpoints: latency-svc-hlf2h [679.128814ms]
May 29 12:58:06.950: INFO: Created: latency-svc-s7xdr
May 29 12:58:06.972: INFO: Got endpoints: latency-svc-s7xdr [718.007208ms]
May 29 12:58:07.044: INFO: Created: latency-svc-95zwv
May 29 12:58:07.047: INFO: Got endpoints: latency-svc-95zwv [763.1133ms]
May 29 12:58:07.081: INFO: Created: latency-svc-nrr6c
May 29 12:58:07.093: INFO: Got endpoints: latency-svc-nrr6c [758.414653ms]
May 29 12:58:07.111: INFO: Created: latency-svc-mqckn
May 29 12:58:07.134: INFO: Got endpoints: latency-svc-mqckn [759.341108ms]
May 29 12:58:07.206: INFO: Created: latency-svc-hb2mm
May 29 12:58:07.214: INFO: Got endpoints: latency-svc-hb2mm [791.463383ms]
May 29 12:58:07.238: INFO: Created: latency-svc-cf8pj
May 29 12:58:07.262: INFO: Got endpoints: latency-svc-cf8pj [778.721825ms]
May 29 12:58:07.303: INFO: Created: latency-svc-tbx9b
May 29 12:58:07.373: INFO: Got endpoints: latency-svc-tbx9b [853.604275ms]
May 29 12:58:07.383: INFO: Created: latency-svc-kr9xw
May 29 12:58:07.394: INFO: Got endpoints: latency-svc-kr9xw [838.606603ms]
May 29 12:58:07.412: INFO: Created: latency-svc-n42m9
May 29 12:58:07.425: INFO: Got endpoints: latency-svc-n42m9 [814.813472ms]
May 29 12:58:07.441: INFO: Created: latency-svc-vlb2t
May 29 12:58:07.455: INFO: Got endpoints: latency-svc-vlb2t [813.758368ms]
May 29 12:58:07.471: INFO: Created: latency-svc-znm9b
May 29 12:58:07.517: INFO: Got endpoints: latency-svc-znm9b [832.004832ms]
May 29 12:58:07.530: INFO: Created: latency-svc-8sgxm
May 29 12:58:07.546: INFO: Got endpoints: latency-svc-8sgxm [782.333011ms]
May 29 12:58:07.562: INFO: Created: latency-svc-ndm7g
May 29 12:58:07.576: INFO: Got endpoints: latency-svc-ndm7g [766.269566ms]
May 29 12:58:07.594: INFO: Created: latency-svc-fr2fh
May 29 12:58:07.606: INFO: Got endpoints: latency-svc-fr2fh [760.312267ms]
May 29 12:58:07.680: INFO: Created: latency-svc-mjgwd
May 29 12:58:07.682: INFO: Got endpoints: latency-svc-mjgwd [784.899433ms]
May 29 12:58:07.844: INFO: Created: latency-svc-zxx77
May 29 12:58:07.871: INFO: Got endpoints: latency-svc-zxx77 [898.745121ms]
May 29 12:58:07.902: INFO: Created: latency-svc-rt6pg
May 29 12:58:07.919: INFO: Got endpoints: latency-svc-rt6pg [871.593059ms]
May 29 12:58:07.939: INFO: Created: latency-svc-dbgfd
May 29 12:58:07.978: INFO: Got endpoints: latency-svc-dbgfd [885.263858ms]
May 29 12:58:08.000: INFO: Created: latency-svc-gn2bf
May 29 12:58:08.015: INFO: Got endpoints: latency-svc-gn2bf [881.133803ms]
May 29 12:58:08.036: INFO: Created: latency-svc-g5l8f
May 29 12:58:08.046: INFO: Got endpoints: latency-svc-g5l8f [831.342565ms]
May 29 12:58:08.067: INFO: Created: latency-svc-jq8zp
May 29 12:58:08.076: INFO: Got endpoints: latency-svc-jq8zp [813.890893ms]
May 29 12:58:08.128: INFO: Created: latency-svc-57vrm
May 29 12:58:08.137: INFO: Got endpoints: latency-svc-57vrm [763.255222ms]
May 29 12:58:08.173: INFO: Created: latency-svc-2ss2f
May 29 12:58:08.197: INFO: Got endpoints: latency-svc-2ss2f [803.125869ms]
May 29 12:58:08.216: INFO: Created: latency-svc-dlsc8
May 29 12:58:08.227: INFO: Got endpoints: latency-svc-dlsc8 [801.870713ms]
May 29 12:58:08.278: INFO: Created: latency-svc-wdxhn
May 29 12:58:08.287: INFO: Got endpoints: latency-svc-wdxhn [832.391085ms]
May 29 12:58:08.304: INFO: Created: latency-svc-6g546
May 29 12:58:08.318: INFO: Got endpoints: latency-svc-6g546 [800.856931ms]
May 29 12:58:08.347: INFO: Created: latency-svc-5nf6z
May 29 12:58:08.366: INFO: Got endpoints: latency-svc-5nf6z [820.400235ms]
May 29 12:58:08.421: INFO: Created: latency-svc-dnl8g
May 29 12:58:08.423: INFO: Got endpoints: latency-svc-dnl8g [847.479075ms]
May 29 12:58:08.462: INFO: Created: latency-svc-4l9nx
May 29 12:58:08.474: INFO: Got endpoints: latency-svc-4l9nx [868.257653ms]
May 29 12:58:08.492: INFO: Created: latency-svc-wmmh7
May 29 12:58:08.505: INFO: Got endpoints: latency-svc-wmmh7 [822.670181ms]
May 29 12:58:08.520: INFO: Created: latency-svc-mzj5f
May 29 12:58:08.565: INFO: Got endpoints: latency-svc-mzj5f [694.093495ms]
May 29 12:58:08.580: INFO: Created: latency-svc-gsqb6
May 29 12:58:08.596: INFO: Got endpoints: latency-svc-gsqb6 [676.733923ms]
May 29 12:58:08.622: INFO: Created: latency-svc-7khjr
May 29 12:58:08.632: INFO: Got endpoints: latency-svc-7khjr [653.746784ms]
May 29 12:58:08.648: INFO: Created: latency-svc-4qn6j
May 29 12:58:08.662: INFO: Got endpoints: latency-svc-4qn6j [646.809164ms]
May 29 12:58:08.727: INFO: Created: latency-svc-dl9z8
May 29 12:58:08.742: INFO: Got endpoints: latency-svc-dl9z8 [696.533788ms]
May 29 12:58:08.780: INFO: Created: latency-svc-2bfjv
May 29 12:58:08.795: INFO: Got endpoints: latency-svc-2bfjv [719.026361ms]
May 29 12:58:08.816: INFO: Created: latency-svc-mtckn
May 29 12:58:08.858: INFO: Got endpoints: latency-svc-mtckn [721.65084ms]
May 29 12:58:08.874: INFO: Created: latency-svc-dslzd
May 29 12:58:08.880: INFO: Got endpoints: latency-svc-dslzd [682.217694ms]
May 29 12:58:08.898: INFO: Created: latency-svc-bm59l
May 29 12:58:08.916: INFO: Got endpoints: latency-svc-bm59l [689.384806ms]
May 29 12:58:08.942: INFO: Created: latency-svc-sb24h
May 29 12:58:08.952: INFO: Got endpoints: latency-svc-sb24h [664.75582ms]
May 29 12:58:09.002: INFO: Created: latency-svc-zw2zv
May 29 12:58:09.013: INFO: Got endpoints: latency-svc-zw2zv [694.347695ms]
May 29 12:58:09.032: INFO: Created: latency-svc-8cngc
May 29 12:58:09.044: INFO: Got endpoints: latency-svc-8cngc [677.478366ms]
May 29 12:58:09.067: INFO: Created: latency-svc-t4z6f
May 29 12:58:09.073: INFO: Got endpoints: latency-svc-t4z6f [649.748139ms]
May 29 12:58:09.142: INFO: Created: latency-svc-bq28x
May 29 12:58:09.143: INFO: Got endpoints: latency-svc-bq28x [668.794832ms]
May 29 12:58:09.171: INFO: Created: latency-svc-vgwlh
May 29 12:58:09.183: INFO: Got endpoints: latency-svc-vgwlh [678.480411ms]
May 29 12:58:09.200: INFO: Created: latency-svc-bmwwb
May 29 12:58:09.213: INFO: Got endpoints: latency-svc-bmwwb [648.275591ms]
May 29 12:58:09.228: INFO: Created: latency-svc-jd749
May 29 12:58:09.271: INFO: Got endpoints: latency-svc-jd749 [675.76941ms]
May 29 12:58:09.301: INFO: Created: latency-svc-l6rxd
May 29 12:58:09.344: INFO: Got endpoints: latency-svc-l6rxd [712.047947ms]
May 29 12:58:09.403: INFO: Created: latency-svc-hb52c
May 29 12:58:09.406: INFO: Got endpoints: latency-svc-hb52c [744.075545ms]
May 29 12:58:09.443: INFO: Created: latency-svc-nn87k
May 29 12:58:09.459: INFO: Got endpoints: latency-svc-nn87k [716.842777ms]
May 29 12:58:09.490: INFO: Created: latency-svc-sp659
May 29 12:58:09.495: INFO: Got endpoints: latency-svc-sp659 [700.422534ms]
May 29 12:58:09.547: INFO: Created: latency-svc-mhlbm
May 29 12:58:09.550: INFO: Got endpoints: latency-svc-mhlbm [691.875393ms]
May 29 12:58:09.623: INFO: Created: latency-svc-jjxk4
May 29 12:58:09.691: INFO: Got endpoints: latency-svc-jjxk4 [811.014986ms]
May 29 12:58:09.702: INFO: Created: latency-svc-zdbqn
May 29 12:58:09.718: INFO: Got endpoints: latency-svc-zdbqn [801.914765ms]
May 29 12:58:09.739: INFO: Created: latency-svc-q7bz8
May 29 12:58:09.756: INFO: Got endpoints: latency-svc-q7bz8 [804.428827ms]
May 29 12:58:09.785: INFO: Created: latency-svc-bk9xw
May 29 12:58:09.828: INFO: Got endpoints: latency-svc-bk9xw [815.0156ms]
May 29 12:58:09.857: INFO: Created: latency-svc-m99bt
May 29 12:58:09.870: INFO: Got endpoints: latency-svc-m99bt [826.092275ms]
May 29 12:58:09.912: INFO: Created: latency-svc-clbsd
May 29 12:58:09.990: INFO: Got endpoints: latency-svc-clbsd [917.109244ms]
May 29 12:58:10.031: INFO: Created: latency-svc-jkrz4
May 29 12:58:10.044: INFO: Got endpoints: latency-svc-jkrz4 [900.564133ms]
May 29 12:58:10.061: INFO: Created: latency-svc-g98d4
May 29 12:58:10.110: INFO: Got endpoints: latency-svc-g98d4 [926.486829ms]
May 29 12:58:10.116: INFO: Created: latency-svc-j6jzf
May 29 12:58:10.129: INFO: Got endpoints: latency-svc-j6jzf [915.414513ms]
May 29 12:58:10.159: INFO: Created: latency-svc-dvvws
May 29 12:58:10.183: INFO: Got endpoints: latency-svc-dvvws [911.663319ms]
May 29 12:58:10.198: INFO: Created: latency-svc-68sxv
May 29 12:58:10.247: INFO: Got endpoints: latency-svc-68sxv [903.448569ms]
May 29 12:58:10.258: INFO: Created: latency-svc-bsrd8
May 29 12:58:10.267: INFO: Got endpoints: latency-svc-bsrd8 [861.034901ms]
May 29 12:58:10.290: INFO: Created: latency-svc-dh9sk
May 29 12:58:10.745: INFO: Got endpoints: latency-svc-dh9sk [1.285946212s]
May 29 12:58:11.183: INFO: Created: latency-svc-w94rl
May 29 12:58:11.186: INFO: Got endpoints: latency-svc-w94rl [1.691106242s]
May 29 12:58:11.207: INFO: Created: latency-svc-z6nld
May 29 12:58:11.221: INFO: Got endpoints: latency-svc-z6nld [1.671011905s]
May 29 12:58:11.262: INFO: Created: latency-svc-dtb66
May 29 12:58:11.275: INFO: Got endpoints: latency-svc-dtb66 [1.584601113s]
May 29 12:58:11.320: INFO: Created: latency-svc-6jgcf
May 29 12:58:11.324: INFO: Got endpoints: latency-svc-6jgcf [1.6052131s]
May 29 12:58:11.324: INFO: Latencies: [30.395426ms 78.120642ms 111.884718ms 235.073083ms 250.502106ms 280.175422ms 311.224461ms 382.31229ms 452.128101ms 506.800893ms 560.770268ms 646.809164ms 648.275591ms 649.748139ms 653.746784ms 660.098577ms 664.75582ms 668.794832ms 675.76941ms 676.733923ms 677.478366ms 678.480411ms 679.128814ms 682.217694ms 689.384806ms 691.875393ms 694.093495ms 694.347695ms 696.533788ms 700.422534ms 708.346709ms 712.047947ms 716.842777ms 717.624712ms 718.007208ms 719.026361ms 720.235932ms 721.65084ms 723.289455ms 724.617071ms 732.826105ms 733.084929ms 733.530885ms 734.219461ms 735.957197ms 738.333322ms 744.075545ms 747.838009ms 756.179663ms 758.414653ms 759.341108ms 760.312267ms 763.1133ms 763.255222ms 766.269566ms 766.981091ms 771.048858ms 771.173902ms 778.6985ms 778.721825ms 778.958136ms 779.199207ms 779.782576ms 779.970047ms 780.438475ms 782.098391ms 782.333011ms 784.899433ms 791.463383ms 793.640639ms 796.090447ms 797.180511ms 798.395738ms 800.702903ms 800.856931ms 801.119966ms 801.630443ms 801.870713ms 801.914765ms 803.125869ms 803.274011ms 804.428827ms 804.846578ms 806.508512ms 809.934576ms 810.118698ms 810.854929ms 811.014986ms 813.087934ms 813.758368ms 813.890893ms 814.813472ms 815.0156ms 819.5304ms 820.400235ms 820.415706ms 821.252664ms 821.930352ms 821.967484ms 822.670181ms 824.535156ms 825.518003ms 825.957597ms 825.975862ms 826.092275ms 826.370306ms 827.145326ms 828.241376ms 828.319929ms 831.342565ms 831.795066ms 832.004832ms 832.181698ms 832.391085ms 833.224058ms 835.0185ms 836.863126ms 838.606603ms 839.009108ms 847.479075ms 847.931833ms 849.835348ms 849.960574ms 851.966891ms 852.193387ms 852.444548ms 853.604275ms 855.844814ms 855.967956ms 856.909386ms 858.611162ms 859.67679ms 859.797307ms 860.296482ms 861.034901ms 865.189329ms 865.770665ms 868.257653ms 869.745044ms 870.643745ms 870.833084ms 871.593059ms 872.226324ms 875.389759ms 877.478546ms 879.98111ms 881.133803ms 881.861297ms 882.634823ms 885.263858ms 885.581609ms 885.771995ms 888.407556ms 888.65022ms 888.765077ms 892.469974ms 892.780762ms 896.403436ms 898.745121ms 900.354173ms 900.479321ms 900.564133ms 903.283951ms 903.448569ms 903.988548ms 907.01991ms 911.663319ms 912.450326ms 915.414513ms 917.109244ms 923.265369ms 926.486829ms 927.383006ms 933.015495ms 936.959174ms 938.263275ms 938.662744ms 958.263504ms 978.894351ms 991.579684ms 1.221648434s 1.249766902s 1.285946212s 1.584601113s 1.6052131s 1.663666877s 1.668820318s 1.671011905s 1.681514497s 1.691106242s 1.703640608s 1.706777543s 1.727912295s 1.735964118s 1.736985795s 1.742786716s 1.754309161s 1.775703664s 1.786128628s 1.792596277s]
May 29 12:58:11.324: INFO: 50 %ile: 824.535156ms
May 29 12:58:11.324: INFO: 90 %ile: 1.221648434s
May 29 12:58:11.324: INFO: 99 %ile: 1.786128628s
May 29 12:58:11.324: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 12:58:11.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8694" for this suite.
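The 50/90/99 %ile lines above are order statistics over the 200 endpoint-availability samples in the Latencies list. A minimal Go sketch of that computation, sorting the samples and indexing by percentile; the index convention here is an assumption, and the e2e framework may round differently:

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile of an ascending-sorted slice.
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := (len(sorted) * p) / 100 // simple nearest-rank style index
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A small stand-in for the 200 samples reported above (nanoseconds).
	latencies := []time.Duration{
		560770268, 660098577, 809934576, 821967484, 859797307,
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(latencies, p))
	}
}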
May 29 12:58:35.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 12:58:35.400: INFO: namespace svc-latency-8694 deletion completed in 24.070379184s
• [SLOW TEST:39.815 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 12:58:35.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 12:58:40.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4377" for this suite.
May 29 12:58:47.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 12:58:47.191: INFO: namespace watch-4377 deletion completed in 6.182555151s
• [SLOW TEST:11.790 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
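The invariant this Watchers test verifies can be pictured without any Kubernetes machinery: every event is fanned out to each watcher in production order, and every watcher must observe the identical resourceVersion sequence. A pure-Go sketch of that invariant follows; the real test drives concurrent watches through the API server rather than in-memory channels:

package main

import (
	"fmt"
	"reflect"
)

func main() {
	const numWatchers = 3
	const numEvents = 10
	chans := make([]chan int, numWatchers)
	for i := range chans {
		chans[i] = make(chan int, numEvents)
	}
	// Producer: every event (here just a resourceVersion counter) is
	// delivered to every watcher in the same order.
	for rv := 1; rv <= numEvents; rv++ {
		for _, c := range chans {
			c <- rv
		}
	}
	for _, c := range chans {
		close(c)
	}
	// Each watcher drains its channel; all observed sequences must match.
	var first []int
	for i, c := range chans {
		var got []int
		for rv := range c {
			got = append(got, rv)
		}
		if i == 0 {
			first = got
		} else if !reflect.DeepEqual(first, got) {
			fmt.Println("order mismatch:", first, "vs", got)
			return
		}
	}
	fmt.Println("all watchers observed the same order:", first)
}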
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3058' May 29 12:59:00.343: INFO: stderr: "" May 29 12:59:00.343: INFO: stdout: "" May 29 12:59:00.343: INFO: update-demo-nautilus-47f4f is created but not running May 29 12:59:05.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3058' May 29 12:59:05.447: INFO: stderr: "" May 29 12:59:05.447: INFO: stdout: "update-demo-nautilus-47f4f update-demo-nautilus-hzvdd " May 29 12:59:05.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47f4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3058' May 29 12:59:05.542: INFO: stderr: "" May 29 12:59:05.542: INFO: stdout: "true" May 29 12:59:05.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47f4f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3058' May 29 12:59:05.637: INFO: stderr: "" May 29 12:59:05.637: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 29 12:59:05.637: INFO: validating pod update-demo-nautilus-47f4f May 29 12:59:05.643: INFO: got data: { "image": "nautilus.jpg" } May 29 12:59:05.643: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 29 12:59:05.643: INFO: update-demo-nautilus-47f4f is verified up and running May 29 12:59:05.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hzvdd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3058' May 29 12:59:05.727: INFO: stderr: "" May 29 12:59:05.727: INFO: stdout: "true" May 29 12:59:05.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hzvdd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3058' May 29 12:59:05.824: INFO: stderr: "" May 29 12:59:05.825: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 29 12:59:05.825: INFO: validating pod update-demo-nautilus-hzvdd May 29 12:59:05.836: INFO: got data: { "image": "nautilus.jpg" } May 29 12:59:05.836: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 29 12:59:05.836: INFO: update-demo-nautilus-hzvdd is verified up and running STEP: scaling down the replication controller May 29 12:59:05.837: INFO: scanned /root for discovery docs: May 29 12:59:05.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3058' May 29 12:59:07.017: INFO: stderr: "" May 29 12:59:07.018: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 12:58:57.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
May 29 12:58:57.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3058'
May 29 12:59:00.103: INFO: stderr: ""
May 29 12:59:00.103: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 29 12:59:00.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3058'
May 29 12:59:00.240: INFO: stderr: ""
May 29 12:59:00.240: INFO: stdout: "update-demo-nautilus-47f4f update-demo-nautilus-hzvdd "
May 29 12:59:00.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47f4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3058'
May 29 12:59:00.343: INFO: stderr: ""
May 29 12:59:00.343: INFO: stdout: ""
May 29 12:59:00.343: INFO: update-demo-nautilus-47f4f is created but not running
May 29 12:59:05.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3058'
May 29 12:59:05.447: INFO: stderr: ""
May 29 12:59:05.447: INFO: stdout: "update-demo-nautilus-47f4f update-demo-nautilus-hzvdd "
May 29 12:59:05.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47f4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3058'
May 29 12:59:05.542: INFO: stderr: ""
May 29 12:59:05.542: INFO: stdout: "true"
May 29 12:59:05.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47f4f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3058'
May 29 12:59:05.637: INFO: stderr: ""
May 29 12:59:05.637: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 29 12:59:05.637: INFO: validating pod update-demo-nautilus-47f4f
May 29 12:59:05.643: INFO: got data: { "image": "nautilus.jpg" }
May 29 12:59:05.643: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 29 12:59:05.643: INFO: update-demo-nautilus-47f4f is verified up and running
May 29 12:59:05.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hzvdd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3058'
May 29 12:59:05.727: INFO: stderr: ""
May 29 12:59:05.727: INFO: stdout: "true"
May 29 12:59:05.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hzvdd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3058'
May 29 12:59:05.824: INFO: stderr: ""
May 29 12:59:05.825: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 29 12:59:05.825: INFO: validating pod update-demo-nautilus-hzvdd
May 29 12:59:05.836: INFO: got data: { "image": "nautilus.jpg" }
May 29 12:59:05.836: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 29 12:59:05.836: INFO: update-demo-nautilus-hzvdd is verified up and running
STEP: scaling down the replication controller
May 29 12:59:05.837: INFO: scanned /root for discovery docs:
May 29 12:59:05.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3058'
May 29 12:59:07.017: INFO: stderr: ""
May 29 12:59:07.018: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 29 12:59:07.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3058'
May 29 12:59:07.115: INFO: stderr: ""
May 29 12:59:07.115: INFO: stdout: "update-demo-nautilus-47f4f update-demo-nautilus-hzvdd "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 29 12:59:12.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3058'
May 29 12:59:12.245: INFO: stderr: ""
May 29 12:59:12.245: INFO: stdout: "update-demo-nautilus-47f4f "
May 29 12:59:12.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47f4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3058'
May 29 12:59:12.345: INFO: stderr: ""
May 29 12:59:12.345: INFO: stdout: "true"
May 29 12:59:12.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47f4f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3058'
May 29 12:59:12.428: INFO: stderr: ""
May 29 12:59:12.428: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 29 12:59:12.428: INFO: validating pod update-demo-nautilus-47f4f
May 29 12:59:12.435: INFO: got data: { "image": "nautilus.jpg" }
May 29 12:59:12.435: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 29 12:59:12.435: INFO: update-demo-nautilus-47f4f is verified up and running
STEP: scaling up the replication controller
May 29 12:59:12.438: INFO: scanned /root for discovery docs:
May 29 12:59:12.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3058'
May 29 12:59:13.554: INFO: stderr: ""
May 29 12:59:13.555: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 29 12:59:13.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3058'
May 29 12:59:13.654: INFO: stderr: ""
May 29 12:59:13.654: INFO: stdout: "update-demo-nautilus-47f4f update-demo-nautilus-l7clw "
May 29 12:59:13.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47f4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3058'
May 29 12:59:13.812: INFO: stderr: ""
May 29 12:59:13.812: INFO: stdout: "true"
May 29 12:59:13.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47f4f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3058'
May 29 12:59:14.363: INFO: stderr: ""
May 29 12:59:14.363: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 29 12:59:14.363: INFO: validating pod update-demo-nautilus-47f4f
May 29 12:59:14.742: INFO: got data: { "image": "nautilus.jpg" }
May 29 12:59:14.742: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 29 12:59:14.742: INFO: update-demo-nautilus-47f4f is verified up and running
May 29 12:59:14.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l7clw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3058'
May 29 12:59:14.877: INFO: stderr: ""
May 29 12:59:14.878: INFO: stdout: ""
May 29 12:59:14.878: INFO: update-demo-nautilus-l7clw is created but not running
May 29 12:59:19.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3058'
May 29 12:59:19.973: INFO: stderr: ""
May 29 12:59:19.973: INFO: stdout: "update-demo-nautilus-47f4f update-demo-nautilus-l7clw "
May 29 12:59:19.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47f4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3058'
May 29 12:59:20.077: INFO: stderr: ""
May 29 12:59:20.077: INFO: stdout: "true"
May 29 12:59:20.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47f4f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3058'
May 29 12:59:20.170: INFO: stderr: ""
May 29 12:59:20.170: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 29 12:59:20.170: INFO: validating pod update-demo-nautilus-47f4f
May 29 12:59:20.173: INFO: got data: { "image": "nautilus.jpg" }
May 29 12:59:20.173: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 29 12:59:20.173: INFO: update-demo-nautilus-47f4f is verified up and running
May 29 12:59:20.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l7clw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3058'
May 29 12:59:20.259: INFO: stderr: ""
May 29 12:59:20.260: INFO: stdout: "true"
May 29 12:59:20.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l7clw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3058'
May 29 12:59:20.358: INFO: stderr: ""
May 29 12:59:20.358: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 29 12:59:20.358: INFO: validating pod update-demo-nautilus-l7clw
May 29 12:59:20.362: INFO: got data: { "image": "nautilus.jpg" }
May 29 12:59:20.362: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 29 12:59:20.362: INFO: update-demo-nautilus-l7clw is verified up and running
STEP: using delete to clean up resources
May 29 12:59:20.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3058'
May 29 12:59:20.472: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 29 12:59:20.472: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 29 12:59:20.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3058'
May 29 12:59:20.571: INFO: stderr: "No resources found.\n"
May 29 12:59:20.571: INFO: stdout: ""
May 29 12:59:20.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3058 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 29 12:59:20.666: INFO: stderr: ""
May 29 12:59:20.666: INFO: stdout: "update-demo-nautilus-47f4f\nupdate-demo-nautilus-l7clw\n"
May 29 12:59:21.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3058'
May 29 12:59:21.255: INFO: stderr: "No resources found.\n"
May 29 12:59:21.255: INFO: stdout: ""
May 29 12:59:21.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3058 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 29 12:59:21.342: INFO: stderr: ""
May 29 12:59:21.342: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 12:59:21.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3058" for this suite.
May 29 12:59:43.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 12:59:43.510: INFO: namespace kubectl-3058 deletion completed in 22.164494569s
• [SLOW TEST:46.045 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
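The --template expressions in the commands above decide whether a given container is running. The "exists" function is kubectl's own template helper, not part of Go's text/template; the sketch below re-creates a plausible version of it so the same expression can be evaluated against a stripped-down pod object. The helper's exact semantics inside kubectl are an assumption here:

package main

import (
	"fmt"
	"os"
	"text/template"
)

// exists walks nested map keys and reports whether the full path is present,
// mimicking (as a guess) kubectl's "exists" template helper.
func exists(v interface{}, keys ...string) bool {
	for _, k := range keys {
		m, ok := v.(map[string]interface{})
		if !ok {
			return false
		}
		if v, ok = m[k]; !ok {
			return false
		}
	}
	return true
}

func main() {
	// The same expression the test passes to kubectl via --template.
	const tmpl = `{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`
	// A minimal stand-in for the JSON a pod object would decode into.
	pod := map[string]interface{}{
		"status": map[string]interface{}{
			"containerStatuses": []interface{}{
				map[string]interface{}{
					"name":  "update-demo",
					"state": map[string]interface{}{"running": map[string]interface{}{}},
				},
			},
		},
	}
	t := template.Must(template.New("check").Funcs(template.FuncMap{"exists": exists}).Parse(tmpl))
	if err := t.Execute(os.Stdout, pod); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println() // prints "true" when the container is running
}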
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 12:59:43.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 29 12:59:43.681: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:43.683: INFO: Number of nodes with available pods: 0
May 29 12:59:43.683: INFO: Node iruya-worker is running more than one daemon pod
May 29 12:59:44.688: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:44.692: INFO: Number of nodes with available pods: 0
May 29 12:59:44.692: INFO: Node iruya-worker is running more than one daemon pod
May 29 12:59:45.689: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:45.693: INFO: Number of nodes with available pods: 0
May 29 12:59:45.693: INFO: Node iruya-worker is running more than one daemon pod
May 29 12:59:46.689: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:46.692: INFO: Number of nodes with available pods: 0
May 29 12:59:46.692: INFO: Node iruya-worker is running more than one daemon pod
May 29 12:59:47.688: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:47.691: INFO: Number of nodes with available pods: 1
May 29 12:59:47.691: INFO: Node iruya-worker2 is running more than one daemon pod
May 29 12:59:48.689: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:48.692: INFO: Number of nodes with available pods: 2
May 29 12:59:48.692: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 29 12:59:48.750: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:48.753: INFO: Number of nodes with available pods: 1
May 29 12:59:48.753: INFO: Node iruya-worker is running more than one daemon pod
May 29 12:59:49.758: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:49.762: INFO: Number of nodes with available pods: 1
May 29 12:59:49.762: INFO: Node iruya-worker is running more than one daemon pod
May 29 12:59:50.762: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:50.764: INFO: Number of nodes with available pods: 1
May 29 12:59:50.764: INFO: Node iruya-worker is running more than one daemon pod
May 29 12:59:51.759: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:51.762: INFO: Number of nodes with available pods: 1
May 29 12:59:51.762: INFO: Node iruya-worker is running more than one daemon pod
May 29 12:59:52.759: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:52.763: INFO: Number of nodes with available pods: 1
May 29 12:59:52.763: INFO: Node iruya-worker is running more than one daemon pod
May 29 12:59:53.758: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:53.762: INFO: Number of nodes with available pods: 1
May 29 12:59:53.762: INFO: Node iruya-worker is running more than one daemon pod
May 29 12:59:54.759: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:54.763: INFO: Number of nodes with available pods: 1
May 29 12:59:54.763: INFO: Node iruya-worker is running more than one daemon pod
May 29 12:59:55.759: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:55.763: INFO: Number of nodes with available pods: 1
May 29 12:59:55.763: INFO: Node iruya-worker is running more than one daemon pod
May 29 12:59:56.759: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:56.763: INFO: Number of nodes with available pods: 1
May 29 12:59:56.763: INFO: Node iruya-worker is running more than one daemon pod
May 29 12:59:57.759: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:57.763: INFO: Number of nodes with available pods: 1
May 29 12:59:57.763: INFO: Node iruya-worker is running more than one daemon pod
May 29 12:59:58.758: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:58.762: INFO: Number of nodes with available pods: 1
May 29 12:59:58.762: INFO: Node iruya-worker is running more than one daemon pod
May 29 12:59:59.757: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 12:59:59.760: INFO: Number of nodes with available pods: 1
May 29 12:59:59.760: INFO: Node iruya-worker is running more than one daemon pod
May 29 13:00:00.758: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 13:00:00.762: INFO: Number of nodes with available pods: 1
May 29 13:00:00.762: INFO: Node iruya-worker is running more than one daemon pod
May 29 13:00:01.759: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 13:00:01.763: INFO: Number of nodes with available pods: 1
May 29 13:00:01.763: INFO: Node iruya-worker is running more than one daemon pod
May 29 13:00:02.758: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 13:00:02.762: INFO: Number of nodes with available pods: 1
May 29 13:00:02.762: INFO: Node iruya-worker is running more than one daemon pod
May 29 13:00:03.758: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 13:00:03.762: INFO: Number of nodes with available pods: 1
May 29 13:00:03.762: INFO: Node iruya-worker is running more than one daemon pod
May 29 13:00:04.766: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 13:00:04.770: INFO: Number of nodes with available pods: 1
May 29 13:00:04.770: INFO: Node iruya-worker is running more than one daemon pod
May 29 13:00:05.758: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 13:00:05.762: INFO: Number of nodes with available pods: 2
May 29 13:00:05.762: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1506, will wait for the garbage collector to delete the pods
May 29 13:00:05.826: INFO: Deleting DaemonSet.extensions daemon-set took: 7.492063ms
May 29 13:00:07.826: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.000410992s
May 29 13:00:21.938: INFO: Number of nodes with available pods: 0
May 29 13:00:21.938: INFO: Number of running nodes: 0, number of available pods: 0
May 29 13:00:21.946: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1506/daemonsets","resourceVersion":"13542487"},"items":null}
May 29 13:00:21.949: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1506/pods","resourceVersion":"13542487"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:00:21.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1506" for this suite.
May 29 13:00:27.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:00:28.075: INFO: namespace daemonsets-1506 deletion completed in 6.112642022s
• [SLOW TEST:44.564 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May 29 13:00:32.335: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-376 STEP: Removing pod with conflicting port in namespace statefulset-376 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-376 and reaches the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 29 13:00:42.505: INFO: Deleting all statefulsets in ns statefulset-376 May 29 13:00:42.508: INFO: Scaling statefulset ss to 0 May 29 13:00:52.529: INFO: Waiting for statefulset status.replicas updated to 0 May 29 13:00:52.532: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:00:52.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-376" for this suite. May 29 13:00:58.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:00:58.657: INFO: namespace statefulset-376 deletion completed in 6.109522991s • [SLOW TEST:30.581 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:00:58.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 29 13:00:58.737: INFO: Waiting up to 5m0s for pod "pod-09c6ec35-b1e6-4533-b6fc-41b9b8160bfe" in namespace "emptydir-4609" to be "success or failure" May 29 13:00:58.741: INFO: Pod "pod-09c6ec35-b1e6-4533-b6fc-41b9b8160bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.555488ms May 29 13:01:00.745: INFO: Pod "pod-09c6ec35-b1e6-4533-b6fc-41b9b8160bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008076309s May 29 13:01:02.750: INFO: Pod "pod-09c6ec35-b1e6-4533-b6fc-41b9b8160bfe": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.012348633s STEP: Saw pod success May 29 13:01:02.750: INFO: Pod "pod-09c6ec35-b1e6-4533-b6fc-41b9b8160bfe" satisfied condition "success or failure" May 29 13:01:02.753: INFO: Trying to get logs from node iruya-worker2 pod pod-09c6ec35-b1e6-4533-b6fc-41b9b8160bfe container test-container: STEP: delete the pod May 29 13:01:02.784: INFO: Waiting for pod pod-09c6ec35-b1e6-4533-b6fc-41b9b8160bfe to disappear May 29 13:01:02.792: INFO: Pod pod-09c6ec35-b1e6-4533-b6fc-41b9b8160bfe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:01:02.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4609" for this suite. May 29 13:01:08.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:01:08.890: INFO: namespace emptydir-4609 deletion completed in 6.094612114s • [SLOW TEST:10.231 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:01:08.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command May 29 13:01:08.936: INFO: Waiting up to 5m0s for pod "client-containers-4a921f59-8b89-49cb-a0b5-4d9f50156bbe" in namespace "containers-1842" to be "success or failure" May 29 13:01:08.953: INFO: Pod "client-containers-4a921f59-8b89-49cb-a0b5-4d9f50156bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 16.670656ms May 29 13:01:10.957: INFO: Pod "client-containers-4a921f59-8b89-49cb-a0b5-4d9f50156bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02070311s May 29 13:01:12.961: INFO: Pod "client-containers-4a921f59-8b89-49cb-a0b5-4d9f50156bbe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024767392s STEP: Saw pod success May 29 13:01:12.961: INFO: Pod "client-containers-4a921f59-8b89-49cb-a0b5-4d9f50156bbe" satisfied condition "success or failure" May 29 13:01:12.965: INFO: Trying to get logs from node iruya-worker2 pod client-containers-4a921f59-8b89-49cb-a0b5-4d9f50156bbe container test-container: STEP: delete the pod May 29 13:01:12.990: INFO: Waiting for pod client-containers-4a921f59-8b89-49cb-a0b5-4d9f50156bbe to disappear May 29 13:01:12.994: INFO: Pod client-containers-4a921f59-8b89-49cb-a0b5-4d9f50156bbe no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:01:12.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1842" for this suite. May 29 13:01:19.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:01:19.107: INFO: namespace containers-1842 deletion completed in 6.108933277s • [SLOW TEST:10.217 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:01:19.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 29 13:01:19.167: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:01:25.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7369" for this suite. 
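The InitContainer spec that just tore down above turns on one detail: with RestartPolicy=Never, a failing init container is not retried, the pod is marked Failed, and the app containers never start. A minimal sketch, assuming a hypothetical busybox image and illustrative names:

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var pod = corev1.Pod{
    	ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
    	Spec: corev1.PodSpec{
    		RestartPolicy: corev1.RestartPolicyNever, // no retries for the init container
    		InitContainers: []corev1.Container{{
    			Name:    "init-fail",
    			Image:   "busybox",
    			Command: []string{"/bin/false"}, // exits non-zero, failing the pod
    		}},
    		Containers: []corev1.Container{{
    			Name:    "app",
    			Image:   "busybox",
    			Command: []string{"/bin/true"}, // never reached
    		}},
    	},
    }

    func main() {}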
May 29 13:01:31.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:01:31.326: INFO: namespace init-container-7369 deletion completed in 6.088510715s • [SLOW TEST:12.218 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:01:31.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 29 13:01:31.445: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d44b1674-8737-4e1b-b3c1-ac40a3b79550" in namespace "projected-2745" to be "success or failure" May 29 13:01:31.455: INFO: Pod "downwardapi-volume-d44b1674-8737-4e1b-b3c1-ac40a3b79550": Phase="Pending", Reason="", readiness=false. Elapsed: 9.100866ms May 29 13:01:33.508: INFO: Pod "downwardapi-volume-d44b1674-8737-4e1b-b3c1-ac40a3b79550": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062700422s May 29 13:01:35.512: INFO: Pod "downwardapi-volume-d44b1674-8737-4e1b-b3c1-ac40a3b79550": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066354332s STEP: Saw pod success May 29 13:01:35.512: INFO: Pod "downwardapi-volume-d44b1674-8737-4e1b-b3c1-ac40a3b79550" satisfied condition "success or failure" May 29 13:01:35.514: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d44b1674-8737-4e1b-b3c1-ac40a3b79550 container client-container: STEP: delete the pod May 29 13:01:35.564: INFO: Waiting for pod downwardapi-volume-d44b1674-8737-4e1b-b3c1-ac40a3b79550 to disappear May 29 13:01:35.586: INFO: Pod downwardapi-volume-d44b1674-8737-4e1b-b3c1-ac40a3b79550 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:01:35.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2745" for this suite. 
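The downward API spec above checks a fallback: when the container declares no memory limit, a projected downward API file for limits.memory reports the node's allocatable memory instead. A sketch with illustrative names and paths:

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var pod = corev1.Pod{
    	ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
    	Spec: corev1.PodSpec{
    		Containers: []corev1.Container{{
    			Name:    "client-container",
    			Image:   "busybox",
    			Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
    			// no Resources.Limits set, so limits.memory resolves to
    			// the node's allocatable memory
    			VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
    		}},
    		Volumes: []corev1.Volume{{
    			Name: "podinfo",
    			VolumeSource: corev1.VolumeSource{
    				Projected: &corev1.ProjectedVolumeSource{
    					Sources: []corev1.VolumeProjection{{
    						DownwardAPI: &corev1.DownwardAPIProjection{
    							Items: []corev1.DownwardAPIVolumeFile{{
    								Path: "memory_limit",
    								ResourceFieldRef: &corev1.ResourceFieldSelector{
    									ContainerName: "client-container",
    									Resource:      "limits.memory",
    								},
    							}},
    						},
    					}},
    				},
    			},
    		}},
    	},
    }

    func main() {}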
May 29 13:01:41.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:01:41.683: INFO: namespace projected-2745 deletion completed in 6.092616181s • [SLOW TEST:10.357 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:01:41.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 29 13:01:46.294: INFO: Successfully updated pod "pod-update-395ed9cc-bd71-496b-8fbd-8331986b7d19" STEP: verifying the updated pod is in kubernetes May 29 13:01:46.306: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:01:46.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6551" for this suite. 
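The Pods update spec above is a plain read-modify-write of pod metadata. Only the mutation is sketched here; the actual Update/Patch call is omitted because its client-go signature varies across versions, and the label key and values are hypothetical:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // mutate applies the kind of in-place change the test verifies survives
    // a round trip through the apiserver.
    func mutate(pod *corev1.Pod) {
    	if pod.Labels == nil {
    		pod.Labels = map[string]string{}
    	}
    	pod.Labels["time"] = "updated" // hypothetical new value
    }

    func main() {
    	p := corev1.Pod{ObjectMeta: metav1.ObjectMeta{
    		Name:   "pod-update-demo",
    		Labels: map[string]string{"time": "initial"},
    	}}
    	mutate(&p)
    	fmt.Println(p.Labels)
    }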
May 29 13:02:08.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:02:08.411: INFO: namespace pods-6551 deletion completed in 22.101329882s • [SLOW TEST:26.728 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:02:08.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 29 13:02:08.490: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b635a00-e9f5-4ea3-b884-0d320f5e4e8c" in namespace "projected-3335" to be "success or failure" May 29 13:02:08.492: INFO: Pod "downwardapi-volume-9b635a00-e9f5-4ea3-b884-0d320f5e4e8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181148ms May 29 13:02:10.497: INFO: Pod "downwardapi-volume-9b635a00-e9f5-4ea3-b884-0d320f5e4e8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00652989s May 29 13:02:12.501: INFO: Pod "downwardapi-volume-9b635a00-e9f5-4ea3-b884-0d320f5e4e8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011126215s STEP: Saw pod success May 29 13:02:12.501: INFO: Pod "downwardapi-volume-9b635a00-e9f5-4ea3-b884-0d320f5e4e8c" satisfied condition "success or failure" May 29 13:02:12.505: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9b635a00-e9f5-4ea3-b884-0d320f5e4e8c container client-container: STEP: delete the pod May 29 13:02:12.542: INFO: Waiting for pod downwardapi-volume-9b635a00-e9f5-4ea3-b884-0d320f5e4e8c to disappear May 29 13:02:12.551: INFO: Pod downwardapi-volume-9b635a00-e9f5-4ea3-b884-0d320f5e4e8c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:02:12.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3335" for this suite. 
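In the cpu-limit spec above, the container's declared limit is fed back to it as a file. A sketch of the two pieces that pair up; the 500m value and container name are illustrative:

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    )

    // The downward API item that exposes the limit as a file in the volume.
    var cpuLimitFile = corev1.DownwardAPIVolumeFile{
    	Path: "cpu_limit",
    	ResourceFieldRef: &corev1.ResourceFieldSelector{
    		ContainerName: "client-container",
    		Resource:      "limits.cpu",
    	},
    }

    // The limit the container declares; this is the value the file reports.
    var limits = corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")}

    func main() {}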
May 29 13:02:18.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:02:18.664: INFO: namespace projected-3335 deletion completed in 6.110171817s • [SLOW TEST:10.253 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:02:18.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0529 13:02:28.762996 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 29 13:02:28.763: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:02:28.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7766" for this suite. 
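The garbage collection above works through ownerReferences: every pod the replication controller creates points back at it, so deleting the RC without orphaning leaves the pods with a dangling owner and the GC removes them. A sketch of such a pod; the names and UID are placeholders:

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    )

    var ownedPod = corev1.Pod{
    	ObjectMeta: metav1.ObjectMeta{
    		Name: "simpletest.rc-abcde",
    		// The link the garbage collector follows: once the named RC is
    		// gone (and orphaning was not requested), this pod is collected.
    		OwnerReferences: []metav1.OwnerReference{{
    			APIVersion: "v1",
    			Kind:       "ReplicationController",
    			Name:       "simpletest.rc",
    			UID:        types.UID("00000000-0000-0000-0000-000000000000"),
    		}},
    	},
    	Spec: corev1.PodSpec{Containers: []corev1.Container{{Name: "nginx", Image: "nginx"}}},
    }

    func main() {}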
May 29 13:02:34.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:02:34.853: INFO: namespace gc-7766 deletion completed in 6.08683583s • [SLOW TEST:16.189 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:02:34.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-bef636ab-2368-4a39-bfc4-3c0661a1030a STEP: Creating a pod to test consume configMaps May 29 13:02:34.944: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aee16242-b823-464a-929c-cfe3e9229c3a" in namespace "projected-5451" to be "success or failure" May 29 13:02:34.975: INFO: Pod "pod-projected-configmaps-aee16242-b823-464a-929c-cfe3e9229c3a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.762188ms May 29 13:02:36.978: INFO: Pod "pod-projected-configmaps-aee16242-b823-464a-929c-cfe3e9229c3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034276071s May 29 13:02:38.983: INFO: Pod "pod-projected-configmaps-aee16242-b823-464a-929c-cfe3e9229c3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038918121s STEP: Saw pod success May 29 13:02:38.983: INFO: Pod "pod-projected-configmaps-aee16242-b823-464a-929c-cfe3e9229c3a" satisfied condition "success or failure" May 29 13:02:38.987: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-aee16242-b823-464a-929c-cfe3e9229c3a container projected-configmap-volume-test: STEP: delete the pod May 29 13:02:39.025: INFO: Waiting for pod pod-projected-configmaps-aee16242-b823-464a-929c-cfe3e9229c3a to disappear May 29 13:02:39.038: INFO: Pod pod-projected-configmaps-aee16242-b823-464a-929c-cfe3e9229c3a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:02:39.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5451" for this suite. 
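The "mappings and Item mode" wording above means the ConfigMap key is remapped to a new path and given an explicit per-file mode. A sketch, with illustrative key, path, and mode values:

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    )

    var mode int32 = 0400 // per-item mode overrides the volume's default

    var cmProjection = corev1.VolumeProjection{
    	ConfigMap: &corev1.ConfigMapProjection{
    		LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
    		Items: []corev1.KeyToPath{{
    			Key:  "data-1",          // key in the ConfigMap
    			Path: "path/to/data-2",  // remapped file path in the volume
    			Mode: &mode,
    		}},
    	},
    }

    func main() {}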
May 29 13:02:45.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:02:45.141: INFO: namespace projected-5451 deletion completed in 6.098562001s • [SLOW TEST:10.287 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:02:45.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 29 13:02:45.202: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 29 13:02:54.257: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:02:54.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5309" for this suite. 
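The graceful deletion the spec above exercises is driven by a grace period on the delete call plus a watch that waits until the termination is observed. A sketch of the options involved; the 30s value and the pod name in the field selector are hypothetical, and the client calls that consume these options are omitted since their signatures vary across client-go versions:

    package main

    import (
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var gracePeriod int64 = 30

    // Passed to the pod delete call: the kubelet gets this long to stop
    // the containers before the pod object is removed.
    var deleteOpts = metav1.DeleteOptions{GracePeriodSeconds: &gracePeriod}

    // Passed to the watch, so only events for the one pod are observed.
    var watchOpts = metav1.ListOptions{FieldSelector: "metadata.name=pod-submit-remove-demo"}

    func main() { fmt.Println(*deleteOpts.GracePeriodSeconds, watchOpts.FieldSelector) }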
May 29 13:03:00.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:03:00.395: INFO: namespace pods-5309 deletion completed in 6.127182046s • [SLOW TEST:15.254 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:03:00.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 29 13:03:00.506: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 29 13:03:00.512: INFO: Waiting for terminating namespaces to be deleted... May 29 13:03:00.514: INFO: Logging pods the kubelet thinks are on node iruya-worker before test May 29 13:03:00.519: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 29 13:03:00.519: INFO: Container kube-proxy ready: true, restart count 0 May 29 13:03:00.519: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 29 13:03:00.519: INFO: Container kindnet-cni ready: true, restart count 2 May 29 13:03:00.519: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test May 29 13:03:00.526: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container status recorded) May 29 13:03:00.526: INFO: Container coredns ready: true, restart count 0 May 29 13:03:00.526: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container status recorded) May 29 13:03:00.526: INFO: Container coredns ready: true, restart count 0 May 29 13:03:00.526: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container status recorded) May 29 13:03:00.526: INFO: Container kube-proxy ready: true, restart count 0 May 29 13:03:00.526: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container status recorded) May 29 13:03:00.526: INFO: Container kindnet-cni ready: true, restart count 2 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 May 29 13:03:00.688: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 May 29 13:03:00.688: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 May 29 13:03:00.688: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node
iruya-worker May 29 13:03:00.688: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 May 29 13:03:00.688: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker May 29 13:03:00.688: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-3ccf86bf-4f00-4568-9248-0ec9229ffee9.16138168dac3939d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5441/filler-pod-3ccf86bf-4f00-4568-9248-0ec9229ffee9 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-3ccf86bf-4f00-4568-9248-0ec9229ffee9.16138169510da285], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-3ccf86bf-4f00-4568-9248-0ec9229ffee9.16138169aede85bd], Reason = [Created], Message = [Created container filler-pod-3ccf86bf-4f00-4568-9248-0ec9229ffee9] STEP: Considering event: Type = [Normal], Name = [filler-pod-3ccf86bf-4f00-4568-9248-0ec9229ffee9.16138169bd8f61f9], Reason = [Started], Message = [Started container filler-pod-3ccf86bf-4f00-4568-9248-0ec9229ffee9] STEP: Considering event: Type = [Normal], Name = [filler-pod-8e06c342-a523-4ce4-9f97-5734307ba40f.16138168d8fcf742], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5441/filler-pod-8e06c342-a523-4ce4-9f97-5734307ba40f to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-8e06c342-a523-4ce4-9f97-5734307ba40f.16138169276dceda], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-8e06c342-a523-4ce4-9f97-5734307ba40f.16138169945858fb], Reason = [Created], Message = [Created container filler-pod-8e06c342-a523-4ce4-9f97-5734307ba40f] STEP: Considering event: Type = [Normal], Name = [filler-pod-8e06c342-a523-4ce4-9f97-5734307ba40f.16138169a9820a94], Reason = [Started], Message = [Started container filler-pod-8e06c342-a523-4ce4-9f97-5734307ba40f] STEP: Considering event: Type = [Warning], Name = [additional-pod.1613816a4185e237], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:03:07.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5441" for this suite. 
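The FailedScheduling event above is the point of the spec: the filler pods' CPU requests consume most of each node's allocatable CPU, so one more pod requesting more CPU than remains cannot be placed anywhere. A sketch of a filler pod; the request value is illustrative, since the test computes it from each node's actual allocatable CPU:

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var fillerPod = corev1.Pod{
    	ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
    	Spec: corev1.PodSpec{
    		Containers: []corev1.Container{{
    			Name:  "pause",
    			Image: "k8s.gcr.io/pause:3.1",
    			Resources: corev1.ResourceRequirements{
    				// Requests, not limits, are what the scheduler sums
    				// against node allocatable.
    				Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1600m")},
    			},
    		}},
    	},
    }

    func main() {}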
May 29 13:03:13.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:03:14.029: INFO: namespace sched-pred-5441 deletion completed in 6.150164947s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:13.634 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:03:14.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-715b10c2-8973-40f5-9248-ef504e5cd1a8 STEP: Creating a pod to test consume configMaps May 29 13:03:14.120: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8513bab2-d893-4b26-8a7a-e9158b6edd87" in namespace "projected-2720" to be "success or failure" May 29 13:03:14.143: INFO: Pod "pod-projected-configmaps-8513bab2-d893-4b26-8a7a-e9158b6edd87": Phase="Pending", Reason="", readiness=false. Elapsed: 23.246938ms May 29 13:03:16.148: INFO: Pod "pod-projected-configmaps-8513bab2-d893-4b26-8a7a-e9158b6edd87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027324766s May 29 13:03:18.240: INFO: Pod "pod-projected-configmaps-8513bab2-d893-4b26-8a7a-e9158b6edd87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119861405s STEP: Saw pod success May 29 13:03:18.240: INFO: Pod "pod-projected-configmaps-8513bab2-d893-4b26-8a7a-e9158b6edd87" satisfied condition "success or failure" May 29 13:03:18.243: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-8513bab2-d893-4b26-8a7a-e9158b6edd87 container projected-configmap-volume-test: STEP: delete the pod May 29 13:03:18.279: INFO: Waiting for pod pod-projected-configmaps-8513bab2-d893-4b26-8a7a-e9158b6edd87 to disappear May 29 13:03:18.289: INFO: Pod pod-projected-configmaps-8513bab2-d893-4b26-8a7a-e9158b6edd87 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:03:18.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2720" for this suite. 
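The "multiple volumes in the same pod" case above mounts the same ConfigMap through two separate projected volumes in one pod spec. A sketch, with an illustrative ConfigMap name and a small helper:

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    )

    const cmName = "projected-configmap-test-volume"

    // projectedVolume builds one projected volume over the shared ConfigMap.
    func projectedVolume(volName string) corev1.Volume {
    	return corev1.Volume{
    		Name: volName,
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
    					},
    				}},
    			},
    		},
    	}
    }

    // Both volumes go into the same PodSpec, mounted at different paths.
    var volumes = []corev1.Volume{
    	projectedVolume("projected-configmap-volume"),
    	projectedVolume("projected-configmap-volume-2"),
    }

    func main() {}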
May 29 13:03:24.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:03:24.446: INFO: namespace projected-2720 deletion completed in 6.154139583s • [SLOW TEST:10.416 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:03:24.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 29 13:03:24.531: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44acbb72-9b6e-4080-8b3e-085b86cc7552" in namespace "projected-5602" to be "success or failure" May 29 13:03:24.535: INFO: Pod "downwardapi-volume-44acbb72-9b6e-4080-8b3e-085b86cc7552": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490294ms May 29 13:03:26.538: INFO: Pod "downwardapi-volume-44acbb72-9b6e-4080-8b3e-085b86cc7552": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00755151s May 29 13:03:28.543: INFO: Pod "downwardapi-volume-44acbb72-9b6e-4080-8b3e-085b86cc7552": Phase="Running", Reason="", readiness=true. Elapsed: 4.011963646s May 29 13:03:30.547: INFO: Pod "downwardapi-volume-44acbb72-9b6e-4080-8b3e-085b86cc7552": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016428111s STEP: Saw pod success May 29 13:03:30.547: INFO: Pod "downwardapi-volume-44acbb72-9b6e-4080-8b3e-085b86cc7552" satisfied condition "success or failure" May 29 13:03:30.551: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-44acbb72-9b6e-4080-8b3e-085b86cc7552 container client-container: STEP: delete the pod May 29 13:03:30.595: INFO: Waiting for pod downwardapi-volume-44acbb72-9b6e-4080-8b3e-085b86cc7552 to disappear May 29 13:03:30.601: INFO: Pod downwardapi-volume-44acbb72-9b6e-4080-8b3e-085b86cc7552 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:03:30.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5602" for this suite. 
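The "podname only" file above is produced by a single downward API item with a fieldRef on metadata.name. A sketch of that item; the path is illustrative:

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    )

    // One file in the projected volume whose content is the pod's own name.
    var podnameFile = corev1.DownwardAPIVolumeFile{
    	Path:     "podname",
    	FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
    }

    func main() {}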
May 29 13:03:36.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:03:36.718: INFO: namespace projected-5602 deletion completed in 6.113056866s • [SLOW TEST:12.272 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:03:36.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-rd5pp in namespace proxy-7536 I0529 13:03:36.859831 7 runners.go:180] Created replication controller with name: proxy-service-rd5pp, namespace: proxy-7536, replica count: 1 I0529 13:03:37.910259 7 runners.go:180] proxy-service-rd5pp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0529 13:03:38.910517 7 runners.go:180] proxy-service-rd5pp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0529 13:03:39.910771 7 runners.go:180] proxy-service-rd5pp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0529 13:03:40.911018 7 runners.go:180] proxy-service-rd5pp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0529 13:03:41.911257 7 runners.go:180] proxy-service-rd5pp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0529 13:03:42.911527 7 runners.go:180] proxy-service-rd5pp Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 29 13:03:42.915: INFO: setup took 6.114730204s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 29 13:03:42.920: INFO: (0) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 5.269432ms) May 29 13:03:42.921: INFO: (0) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 5.918751ms) May 29 13:03:42.921: INFO: (0) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 6.042576ms) May 29 13:03:42.926: INFO: (0) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:1080/proxy/: ... (200; 10.82488ms) May 29 13:03:42.936: INFO: (0) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:462/proxy/: tls qux (200; 21.551389ms) May 29 13:03:42.948: INFO: (0) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:1080/proxy/: test<... 
(200; 32.500547ms) May 29 13:03:42.948: INFO: (0) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname2/proxy/: tls qux (200; 32.653202ms) May 29 13:03:42.948: INFO: (0) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname2/proxy/: bar (200; 32.489931ms) May 29 13:03:42.948: INFO: (0) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 32.672901ms) May 29 13:03:42.949: INFO: (0) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 34.414037ms) May 29 13:03:42.949: INFO: (0) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 34.440917ms) May 29 13:03:42.950: INFO: (0) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname1/proxy/: foo (200; 34.598766ms) May 29 13:03:42.950: INFO: (0) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 34.57051ms) May 29 13:03:42.953: INFO: (0) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: test (200; 4.174229ms) May 29 13:03:42.960: INFO: (1) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: test<... (200; 4.880944ms) May 29 13:03:42.961: INFO: (1) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:462/proxy/: tls qux (200; 4.98428ms) May 29 13:03:42.961: INFO: (1) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname1/proxy/: foo (200; 4.966582ms) May 29 13:03:42.961: INFO: (1) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname1/proxy/: tls baz (200; 5.068616ms) May 29 13:03:42.961: INFO: (1) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:460/proxy/: tls baz (200; 4.952764ms) May 29 13:03:42.961: INFO: (1) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:1080/proxy/: ... (200; 4.895973ms) May 29 13:03:42.961: INFO: (1) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname2/proxy/: tls qux (200; 5.387165ms) May 29 13:03:42.961: INFO: (1) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 5.471279ms) May 29 13:03:42.961: INFO: (1) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 5.697111ms) May 29 13:03:42.961: INFO: (1) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 5.702943ms) May 29 13:03:42.961: INFO: (1) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname2/proxy/: bar (200; 5.718649ms) May 29 13:03:42.961: INFO: (1) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 5.870052ms) May 29 13:03:42.965: INFO: (2) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 3.80408ms) May 29 13:03:42.965: INFO: (2) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 3.935686ms) May 29 13:03:42.966: INFO: (2) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 4.677695ms) May 29 13:03:42.966: INFO: (2) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:462/proxy/: tls qux (200; 4.832446ms) May 29 13:03:42.966: INFO: (2) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:1080/proxy/: ... 
(200; 4.920505ms) May 29 13:03:42.966: INFO: (2) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname1/proxy/: tls baz (200; 4.887635ms) May 29 13:03:42.966: INFO: (2) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:460/proxy/: tls baz (200; 4.925949ms) May 29 13:03:42.966: INFO: (2) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:1080/proxy/: test<... (200; 4.926762ms) May 29 13:03:42.966: INFO: (2) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 4.939637ms) May 29 13:03:42.966: INFO: (2) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 4.959525ms) May 29 13:03:42.967: INFO: (2) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 5.128724ms) May 29 13:03:42.967: INFO: (2) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname1/proxy/: foo (200; 5.961478ms) May 29 13:03:42.967: INFO: (2) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: ... (200; 3.281715ms) May 29 13:03:42.971: INFO: (3) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 3.286859ms) May 29 13:03:42.971: INFO: (3) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 3.356578ms) May 29 13:03:42.971: INFO: (3) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 3.405155ms) May 29 13:03:42.971: INFO: (3) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 3.39095ms) May 29 13:03:42.971: INFO: (3) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:462/proxy/: tls qux (200; 3.437606ms) May 29 13:03:42.971: INFO: (3) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:460/proxy/: tls baz (200; 3.495619ms) May 29 13:03:42.973: INFO: (3) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname2/proxy/: tls qux (200; 5.12679ms) May 29 13:03:42.973: INFO: (3) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 5.441932ms) May 29 13:03:42.973: INFO: (3) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname1/proxy/: foo (200; 5.444626ms) May 29 13:03:42.973: INFO: (3) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname2/proxy/: bar (200; 5.390711ms) May 29 13:03:42.973: INFO: (3) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 5.524367ms) May 29 13:03:42.973: INFO: (3) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname1/proxy/: tls baz (200; 5.47723ms) May 29 13:03:42.975: INFO: (3) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:1080/proxy/: test<... (200; 7.46572ms) May 29 13:03:42.977: INFO: (4) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:460/proxy/: tls baz (200; 2.031664ms) May 29 13:03:42.979: INFO: (4) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:1080/proxy/: ... 
(200; 3.800349ms) May 29 13:03:42.979: INFO: (4) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:462/proxy/: tls qux (200; 3.828023ms) May 29 13:03:42.979: INFO: (4) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 3.823761ms) May 29 13:03:42.979: INFO: (4) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 3.917888ms) May 29 13:03:42.979: INFO: (4) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname1/proxy/: foo (200; 3.953553ms) May 29 13:03:42.979: INFO: (4) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:1080/proxy/: test<... (200; 3.935739ms) May 29 13:03:42.979: INFO: (4) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 4.033555ms) May 29 13:03:42.979: INFO: (4) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 4.003009ms) May 29 13:03:42.979: INFO: (4) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 3.997184ms) May 29 13:03:42.979: INFO: (4) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: test<... (200; 5.575397ms) May 29 13:03:42.986: INFO: (5) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname2/proxy/: tls qux (200; 5.714065ms) May 29 13:03:42.986: INFO: (5) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: ... (200; 6.388133ms) May 29 13:03:42.987: INFO: (5) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 6.577653ms) May 29 13:03:42.987: INFO: (5) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 6.642909ms) May 29 13:03:42.987: INFO: (5) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 7.01818ms) May 29 13:03:42.988: INFO: (5) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname2/proxy/: bar (200; 7.441778ms) May 29 13:03:42.988: INFO: (5) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname1/proxy/: foo (200; 7.453166ms) May 29 13:03:42.988: INFO: (5) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname1/proxy/: tls baz (200; 7.809999ms) May 29 13:03:42.992: INFO: (6) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:460/proxy/: tls baz (200; 4.366094ms) May 29 13:03:42.993: INFO: (6) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:1080/proxy/: test<... (200; 4.774983ms) May 29 13:03:42.993: INFO: (6) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: ... 
(200; 5.312072ms) May 29 13:03:42.993: INFO: (6) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 5.257632ms) May 29 13:03:42.993: INFO: (6) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 5.204891ms) May 29 13:03:42.993: INFO: (6) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 5.259712ms) May 29 13:03:42.993: INFO: (6) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 5.344982ms) May 29 13:03:42.993: INFO: (6) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 5.361287ms) May 29 13:03:42.993: INFO: (6) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 5.412461ms) May 29 13:03:42.993: INFO: (6) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:462/proxy/: tls qux (200; 5.308256ms) May 29 13:03:42.993: INFO: (6) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 5.386982ms) May 29 13:03:42.993: INFO: (6) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname1/proxy/: tls baz (200; 5.371009ms) May 29 13:03:42.993: INFO: (6) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname1/proxy/: foo (200; 5.430651ms) May 29 13:03:42.993: INFO: (6) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname2/proxy/: tls qux (200; 5.434208ms) May 29 13:03:42.994: INFO: (6) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname2/proxy/: bar (200; 5.539973ms) May 29 13:03:42.998: INFO: (7) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:1080/proxy/: test<... (200; 3.777392ms) May 29 13:03:42.998: INFO: (7) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 3.817415ms) May 29 13:03:42.998: INFO: (7) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:1080/proxy/: ... (200; 3.802109ms) May 29 13:03:42.998: INFO: (7) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:460/proxy/: tls baz (200; 3.874878ms) May 29 13:03:42.998: INFO: (7) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 3.996984ms) May 29 13:03:42.998: INFO: (7) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 4.598075ms) May 29 13:03:42.998: INFO: (7) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 4.566267ms) May 29 13:03:42.999: INFO: (7) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 4.811778ms) May 29 13:03:42.999: INFO: (7) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 4.8216ms) May 29 13:03:42.999: INFO: (7) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname2/proxy/: bar (200; 4.810611ms) May 29 13:03:42.999: INFO: (7) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: ... (200; 2.629049ms) May 29 13:03:43.001: INFO: (8) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:462/proxy/: tls qux (200; 2.711769ms) May 29 13:03:43.002: INFO: (8) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 3.262331ms) May 29 13:03:43.002: INFO: (8) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: test<... 
(200; 4.580197ms) May 29 13:03:43.003: INFO: (8) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 4.52441ms) May 29 13:03:43.004: INFO: (8) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 4.67249ms) May 29 13:03:43.004: INFO: (8) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname2/proxy/: bar (200; 4.713991ms) May 29 13:03:43.004: INFO: (8) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:460/proxy/: tls baz (200; 4.838591ms) May 29 13:03:43.004: INFO: (8) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 4.990653ms) May 29 13:03:43.004: INFO: (8) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 4.984316ms) May 29 13:03:43.004: INFO: (8) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname1/proxy/: tls baz (200; 4.971944ms) May 29 13:03:43.004: INFO: (8) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 5.214961ms) May 29 13:03:43.013: INFO: (9) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 8.854134ms) May 29 13:03:43.013: INFO: (9) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 8.867895ms) May 29 13:03:43.013: INFO: (9) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:1080/proxy/: ... (200; 8.921795ms) May 29 13:03:43.013: INFO: (9) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname2/proxy/: bar (200; 9.021616ms) May 29 13:03:43.013: INFO: (9) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 8.965876ms) May 29 13:03:43.013: INFO: (9) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:460/proxy/: tls baz (200; 8.990969ms) May 29 13:03:43.013: INFO: (9) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: test<... (200; 9.862152ms) May 29 13:03:43.014: INFO: (9) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 9.909575ms) May 29 13:03:43.014: INFO: (9) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 9.831212ms) May 29 13:03:43.018: INFO: (10) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 3.952341ms) May 29 13:03:43.018: INFO: (10) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 4.402812ms) May 29 13:03:43.018: INFO: (10) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname2/proxy/: tls qux (200; 4.409298ms) May 29 13:03:43.019: INFO: (10) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 4.687644ms) May 29 13:03:43.019: INFO: (10) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: ... (200; 6.805513ms) May 29 13:03:43.021: INFO: (10) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:1080/proxy/: test<... (200; 6.920029ms) May 29 13:03:43.025: INFO: (11) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:1080/proxy/: test<... 
(200; 3.83141ms) May 29 13:03:43.025: INFO: (11) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 3.801744ms) May 29 13:03:43.026: INFO: (11) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:460/proxy/: tls baz (200; 4.641073ms) May 29 13:03:43.026: INFO: (11) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 4.659212ms) May 29 13:03:43.026: INFO: (11) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:1080/proxy/: ... (200; 4.631875ms) May 29 13:03:43.026: INFO: (11) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 4.630844ms) May 29 13:03:43.026: INFO: (11) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 4.747532ms) May 29 13:03:43.026: INFO: (11) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 4.716105ms) May 29 13:03:43.026: INFO: (11) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 4.775624ms) May 29 13:03:43.026: INFO: (11) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 4.805516ms) May 29 13:03:43.026: INFO: (11) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:462/proxy/: tls qux (200; 5.254766ms) May 29 13:03:43.026: INFO: (11) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname1/proxy/: tls baz (200; 5.283715ms) May 29 13:03:43.026: INFO: (11) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname1/proxy/: foo (200; 5.326768ms) May 29 13:03:43.026: INFO: (11) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: test<... (200; 5.49668ms) May 29 13:03:43.032: INFO: (12) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:1080/proxy/: ... 
(200; 5.613826ms) May 29 13:03:43.032: INFO: (12) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname1/proxy/: tls baz (200; 5.805843ms) May 29 13:03:43.032: INFO: (12) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:460/proxy/: tls baz (200; 5.861898ms) May 29 13:03:43.033: INFO: (12) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 5.858206ms) May 29 13:03:43.033: INFO: (12) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 6.2788ms) May 29 13:03:43.033: INFO: (12) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 6.315562ms) May 29 13:03:43.033: INFO: (12) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname2/proxy/: tls qux (200; 6.389928ms) May 29 13:03:43.033: INFO: (12) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:462/proxy/: tls qux (200; 6.61554ms) May 29 13:03:43.033: INFO: (12) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 6.764672ms) May 29 13:03:43.034: INFO: (12) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 6.954942ms) May 29 13:03:43.037: INFO: (13) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 3.57302ms) May 29 13:03:43.038: INFO: (13) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:462/proxy/: tls qux (200; 3.928518ms) May 29 13:03:43.038: INFO: (13) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 3.944799ms) May 29 13:03:43.038: INFO: (13) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:1080/proxy/: test<... (200; 4.016969ms) May 29 13:03:43.039: INFO: (13) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname1/proxy/: foo (200; 4.972716ms) May 29 13:03:43.039: INFO: (13) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: ... (200; 5.036437ms) May 29 13:03:43.039: INFO: (13) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 5.042467ms) May 29 13:03:43.039: INFO: (13) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname2/proxy/: bar (200; 5.147893ms) May 29 13:03:43.039: INFO: (13) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 5.14389ms) May 29 13:03:43.039: INFO: (13) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname1/proxy/: tls baz (200; 5.175326ms) May 29 13:03:43.039: INFO: (13) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 5.118773ms) May 29 13:03:43.039: INFO: (13) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 5.187821ms) May 29 13:03:43.041: INFO: (14) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 2.446711ms) May 29 13:03:43.041: INFO: (14) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: ... (200; 4.361623ms) May 29 13:03:43.043: INFO: (14) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname1/proxy/: tls baz (200; 4.370799ms) May 29 13:03:43.043: INFO: (14) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:1080/proxy/: test<... 
(200; 4.415886ms) May 29 13:03:43.043: INFO: (14) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 4.348405ms) May 29 13:03:43.044: INFO: (14) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:462/proxy/: tls qux (200; 4.618658ms) May 29 13:03:43.044: INFO: (14) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname2/proxy/: tls qux (200; 5.286764ms) May 29 13:03:43.045: INFO: (14) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 5.834701ms) May 29 13:03:43.045: INFO: (14) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname1/proxy/: foo (200; 5.836873ms) May 29 13:03:43.045: INFO: (14) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname2/proxy/: bar (200; 5.863197ms) May 29 13:03:43.045: INFO: (14) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 5.850597ms) May 29 13:03:43.045: INFO: (14) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 6.162229ms) May 29 13:03:43.048: INFO: (15) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 2.35756ms) May 29 13:03:43.048: INFO: (15) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: ... (200; 3.816887ms) May 29 13:03:43.049: INFO: (15) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 3.776919ms) May 29 13:03:43.049: INFO: (15) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 4.097074ms) May 29 13:03:43.049: INFO: (15) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 4.148107ms) May 29 13:03:43.050: INFO: (15) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:462/proxy/: tls qux (200; 4.334074ms) May 29 13:03:43.050: INFO: (15) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:460/proxy/: tls baz (200; 4.406118ms) May 29 13:03:43.050: INFO: (15) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 4.501048ms) May 29 13:03:43.050: INFO: (15) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:1080/proxy/: test<... (200; 4.455259ms) May 29 13:03:43.051: INFO: (15) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname2/proxy/: bar (200; 5.850797ms) May 29 13:03:43.051: INFO: (15) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname1/proxy/: foo (200; 5.893834ms) May 29 13:03:43.051: INFO: (15) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname2/proxy/: tls qux (200; 5.888622ms) May 29 13:03:43.051: INFO: (15) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 5.90175ms) May 29 13:03:43.051: INFO: (15) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 6.003006ms) May 29 13:03:43.051: INFO: (15) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname1/proxy/: tls baz (200; 6.20776ms) May 29 13:03:43.056: INFO: (16) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 4.15397ms) May 29 13:03:43.056: INFO: (16) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:1080/proxy/: ... (200; 4.204955ms) May 29 13:03:43.056: INFO: (16) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:1080/proxy/: test<... 
(200; 4.272132ms) May 29 13:03:43.056: INFO: (16) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: test (200; 4.538389ms) May 29 13:03:43.057: INFO: (16) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 5.363531ms) May 29 13:03:43.057: INFO: (16) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:462/proxy/: tls qux (200; 5.492422ms) May 29 13:03:43.057: INFO: (16) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname1/proxy/: tls baz (200; 5.662236ms) May 29 13:03:43.057: INFO: (16) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname1/proxy/: foo (200; 5.821699ms) May 29 13:03:43.057: INFO: (16) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 5.924282ms) May 29 13:03:43.057: INFO: (16) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 5.805487ms) May 29 13:03:43.058: INFO: (16) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 6.164989ms) May 29 13:03:43.058: INFO: (16) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname2/proxy/: bar (200; 6.221478ms) May 29 13:03:43.058: INFO: (16) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname2/proxy/: tls qux (200; 6.349647ms) May 29 13:03:43.058: INFO: (16) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:460/proxy/: tls baz (200; 6.317281ms) May 29 13:03:43.058: INFO: (16) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 6.41473ms) May 29 13:03:43.061: INFO: (17) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:1080/proxy/: ... (200; 2.944589ms) May 29 13:03:43.061: INFO: (17) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 3.09457ms) May 29 13:03:43.061: INFO: (17) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:1080/proxy/: test<... 
(200; 3.112893ms) May 29 13:03:43.061: INFO: (17) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: test (200; 3.77499ms) May 29 13:03:43.062: INFO: (17) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname2/proxy/: bar (200; 3.921521ms) May 29 13:03:43.063: INFO: (17) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 4.447765ms) May 29 13:03:43.063: INFO: (17) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname1/proxy/: foo (200; 4.394972ms) May 29 13:03:43.063: INFO: (17) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname1/proxy/: tls baz (200; 4.446272ms) May 29 13:03:43.063: INFO: (17) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname2/proxy/: tls qux (200; 4.419524ms) May 29 13:03:43.063: INFO: (17) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 4.443659ms) May 29 13:03:43.065: INFO: (18) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 1.829005ms) May 29 13:03:43.065: INFO: (18) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 1.885556ms) May 29 13:03:43.065: INFO: (18) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 1.838932ms) May 29 13:03:43.066: INFO: (18) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname2/proxy/: bar (200; 3.297779ms) May 29 13:03:43.066: INFO: (18) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 3.368306ms) May 29 13:03:43.066: INFO: (18) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: ... (200; 3.62264ms) May 29 13:03:43.066: INFO: (18) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:462/proxy/: tls qux (200; 3.602922ms) May 29 13:03:43.066: INFO: (18) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:1080/proxy/: test<... 
(200; 3.56564ms) May 29 13:03:43.066: INFO: (18) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname1/proxy/: foo (200; 3.676876ms) May 29 13:03:43.066: INFO: (18) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 3.605716ms) May 29 13:03:43.066: INFO: (18) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname1/proxy/: tls baz (200; 3.800076ms) May 29 13:03:43.066: INFO: (18) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 3.791848ms) May 29 13:03:43.066: INFO: (18) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:460/proxy/: tls baz (200; 3.774325ms) May 29 13:03:43.066: INFO: (18) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 3.74166ms) May 29 13:03:43.069: INFO: (19) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:162/proxy/: bar (200; 2.274628ms) May 29 13:03:43.069: INFO: (19) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 2.253632ms) May 29 13:03:43.070: INFO: (19) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname2/proxy/: bar (200; 3.65112ms) May 29 13:03:43.070: INFO: (19) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w:160/proxy/: foo (200; 3.876973ms) May 29 13:03:43.070: INFO: (19) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname2/proxy/: bar (200; 3.888715ms) May 29 13:03:43.071: INFO: (19) /api/v1/namespaces/proxy-7536/services/http:proxy-service-rd5pp:portname1/proxy/: foo (200; 4.067676ms) May 29 13:03:43.071: INFO: (19) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:460/proxy/: tls baz (200; 4.091383ms) May 29 13:03:43.071: INFO: (19) /api/v1/namespaces/proxy-7536/services/proxy-service-rd5pp:portname1/proxy/: foo (200; 4.138599ms) May 29 13:03:43.071: INFO: (19) /api/v1/namespaces/proxy-7536/pods/proxy-service-rd5pp-6hg6w/proxy/: test (200; 4.072295ms) May 29 13:03:43.071: INFO: (19) /api/v1/namespaces/proxy-7536/pods/https:proxy-service-rd5pp-6hg6w:443/proxy/: test<... (200; 4.083935ms) May 29 13:03:43.071: INFO: (19) /api/v1/namespaces/proxy-7536/services/https:proxy-service-rd5pp:tlsportname1/proxy/: tls baz (200; 4.143823ms) May 29 13:03:43.071: INFO: (19) /api/v1/namespaces/proxy-7536/pods/http:proxy-service-rd5pp-6hg6w:1080/proxy/: ... (200; 4.170939ms) STEP: deleting ReplicationController proxy-service-rd5pp in namespace proxy-7536, will wait for the garbage collector to delete the pods May 29 13:03:43.128: INFO: Deleting ReplicationController proxy-service-rd5pp took: 5.670132ms May 29 13:03:43.428: INFO: Terminating ReplicationController proxy-service-rd5pp pods took: 300.25058ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:03:51.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7536" for this suite. 
May 29 13:03:57.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:03:58.034: INFO: namespace proxy-7536 deletion completed in 6.099701632s • [SLOW TEST:21.316 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:03:58.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 13:03:58.115: INFO: Creating ReplicaSet my-hostname-basic-ea6492f5-5cd4-4446-a13a-3872258bb228 May 29 13:03:58.130: INFO: Pod name my-hostname-basic-ea6492f5-5cd4-4446-a13a-3872258bb228: Found 0 pods out of 1 May 29 13:04:03.134: INFO: Pod name my-hostname-basic-ea6492f5-5cd4-4446-a13a-3872258bb228: Found 1 pods out of 1 May 29 13:04:03.134: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ea6492f5-5cd4-4446-a13a-3872258bb228" is running May 29 13:04:03.136: INFO: Pod "my-hostname-basic-ea6492f5-5cd4-4446-a13a-3872258bb228-2q9zr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-29 13:03:58 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-29 13:04:01 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-29 13:04:01 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-29 13:03:58 +0000 UTC Reason: Message:}]) May 29 13:04:03.136: INFO: Trying to dial the pod May 29 13:04:08.153: INFO: Controller my-hostname-basic-ea6492f5-5cd4-4446-a13a-3872258bb228: Got expected result from replica 1 [my-hostname-basic-ea6492f5-5cd4-4446-a13a-3872258bb228-2q9zr]: "my-hostname-basic-ea6492f5-5cd4-4446-a13a-3872258bb228-2q9zr", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:04:08.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8163" for this suite. 
May 29 13:04:14.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:04:14.329: INFO: namespace replicaset-8163 deletion completed in 6.157393487s • [SLOW TEST:16.294 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:04:14.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-40af1423-4366-4426-abd7-3b1e95e7bc7c STEP: Creating secret with name s-test-opt-upd-e68d7f05-839c-4fb7-9395-8e384d7da8f6 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-40af1423-4366-4426-abd7-3b1e95e7bc7c STEP: Updating secret s-test-opt-upd-e68d7f05-839c-4fb7-9395-8e384d7da8f6 STEP: Creating secret with name s-test-opt-create-25756f86-1061-422e-ba37-746388c122fa STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:05:47.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7419" for this suite. 
May 29 13:06:11.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:06:11.533: INFO: namespace projected-7419 deletion completed in 24.109841925s • [SLOW TEST:117.204 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:06:11.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 29 13:06:11.605: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d86dabb9-fa70-4b59-9e9c-1485e2913098" in namespace "projected-2329" to be "success or failure" May 29 13:06:11.611: INFO: Pod "downwardapi-volume-d86dabb9-fa70-4b59-9e9c-1485e2913098": Phase="Pending", Reason="", readiness=false. Elapsed: 5.440189ms May 29 13:06:13.656: INFO: Pod "downwardapi-volume-d86dabb9-fa70-4b59-9e9c-1485e2913098": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050637585s May 29 13:06:15.660: INFO: Pod "downwardapi-volume-d86dabb9-fa70-4b59-9e9c-1485e2913098": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054797265s STEP: Saw pod success May 29 13:06:15.660: INFO: Pod "downwardapi-volume-d86dabb9-fa70-4b59-9e9c-1485e2913098" satisfied condition "success or failure" May 29 13:06:15.663: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d86dabb9-fa70-4b59-9e9c-1485e2913098 container client-container: STEP: delete the pod May 29 13:06:15.678: INFO: Waiting for pod downwardapi-volume-d86dabb9-fa70-4b59-9e9c-1485e2913098 to disappear May 29 13:06:15.696: INFO: Pod downwardapi-volume-d86dabb9-fa70-4b59-9e9c-1485e2913098 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:06:15.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2329" for this suite. 
May 29 13:06:21.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:06:21.816: INFO: namespace projected-2329 deletion completed in 6.093679196s • [SLOW TEST:10.283 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:06:21.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9804, will wait for the garbage collector to delete the pods May 29 13:06:27.946: INFO: Deleting Job.batch foo took: 6.661145ms May 29 13:06:28.246: INFO: Terminating Job.batch foo pods took: 300.27092ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:07:12.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9804" for this suite. 
May 29 13:07:18.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:07:18.347: INFO: namespace job-9804 deletion completed in 6.092881029s • [SLOW TEST:56.531 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:07:18.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 13:07:18.520: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"208b3bea-1dbe-4766-ba76-76a13c15c760", Controller:(*bool)(0xc000c8540a), BlockOwnerDeletion:(*bool)(0xc000c8540b)}} May 29 13:07:18.534: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"17deee1b-a994-4324-8888-0a3c1f264c9f", Controller:(*bool)(0xc000c855c2), BlockOwnerDeletion:(*bool)(0xc000c855c3)}} May 29 13:07:18.592: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"11e7bc25-ecf9-4071-a7c5-a526c7379c4e", Controller:(*bool)(0xc0030e558a), BlockOwnerDeletion:(*bool)(0xc0030e558b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:07:23.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9650" for this suite. 
May 29 13:07:29.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:07:29.738: INFO: namespace gc-9650 deletion completed in 6.108016282s • [SLOW TEST:11.390 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:07:29.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 13:07:29.846: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 29 13:07:34.851: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 29 13:07:34.851: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 29 13:07:34.876: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-443,SelfLink:/apis/apps/v1/namespaces/deployment-443/deployments/test-cleanup-deployment,UID:77f4e194-ad30-4ca7-9599-2e7d517e1f17,ResourceVersion:13544090,Generation:1,CreationTimestamp:2020-05-29 13:07:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 29 13:07:34.882: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-443,SelfLink:/apis/apps/v1/namespaces/deployment-443/replicasets/test-cleanup-deployment-55bbcbc84c,UID:88b39425-600c-4133-877a-1aa9dbac2c9a,ResourceVersion:13544092,Generation:1,CreationTimestamp:2020-05-29 13:07:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 77f4e194-ad30-4ca7-9599-2e7d517e1f17 0xc001570fb7 0xc001570fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 29 13:07:34.882: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 29 13:07:34.882: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-443,SelfLink:/apis/apps/v1/namespaces/deployment-443/replicasets/test-cleanup-controller,UID:fee64668-07fe-4781-96b1-ebc83d2d60a5,ResourceVersion:13544091,Generation:1,CreationTimestamp:2020-05-29 13:07:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 77f4e194-ad30-4ca7-9599-2e7d517e1f17 0xc001570ed7 0xc001570ed8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 29 13:07:34.911: INFO: Pod "test-cleanup-controller-7h8j5" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-7h8j5,GenerateName:test-cleanup-controller-,Namespace:deployment-443,SelfLink:/api/v1/namespaces/deployment-443/pods/test-cleanup-controller-7h8j5,UID:6b174b5d-9139-4fd6-b2c4-7d5d24936435,ResourceVersion:13544084,Generation:0,CreationTimestamp:2020-05-29 13:07:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller fee64668-07fe-4781-96b1-ebc83d2d60a5 0xc001b4e497 0xc001b4e498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zmxb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zmxb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zmxb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b4e520} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b4e540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:07:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:07:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:07:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:07:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.105,StartTime:2020-05-29 13:07:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-29 13:07:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0c9364bfb51d78e8a8f96e9810179cfa9676779a54e54b2f6e8d9fabeda1a3ab}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 29 13:07:34.911: INFO: Pod "test-cleanup-deployment-55bbcbc84c-tct5k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-tct5k,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-443,SelfLink:/api/v1/namespaces/deployment-443/pods/test-cleanup-deployment-55bbcbc84c-tct5k,UID:3b319374-7042-4fb7-89dc-fe5eb60b75ec,ResourceVersion:13544097,Generation:0,CreationTimestamp:2020-05-29 13:07:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 88b39425-600c-4133-877a-1aa9dbac2c9a 0xc001b4e627 0xc001b4e628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zmxb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zmxb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-zmxb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b4e6a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b4e6c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:07:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:07:34.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-443" for this suite. 
May 29 13:07:40.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:07:41.063: INFO: namespace deployment-443 deletion completed in 6.083859127s • [SLOW TEST:11.325 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:07:41.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:08:11.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6140" for this suite. 
May 29 13:08:17.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:08:18.028: INFO: namespace container-runtime-6140 deletion completed in 6.113487404s • [SLOW TEST:36.965 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:08:18.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-2b5e781c-39d3-4d13-8152-3f4316bd9a59 STEP: Creating a pod to test consume secrets May 29 13:08:18.124: INFO: Waiting up to 5m0s for pod "pod-secrets-8ef877c9-4126-4884-b7b7-d9b83d4e614e" in namespace "secrets-2415" to be "success or failure" May 29 13:08:18.128: INFO: Pod "pod-secrets-8ef877c9-4126-4884-b7b7-d9b83d4e614e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.582021ms May 29 13:08:20.131: INFO: Pod "pod-secrets-8ef877c9-4126-4884-b7b7-d9b83d4e614e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007058389s May 29 13:08:22.135: INFO: Pod "pod-secrets-8ef877c9-4126-4884-b7b7-d9b83d4e614e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011148348s STEP: Saw pod success May 29 13:08:22.136: INFO: Pod "pod-secrets-8ef877c9-4126-4884-b7b7-d9b83d4e614e" satisfied condition "success or failure" May 29 13:08:22.139: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-8ef877c9-4126-4884-b7b7-d9b83d4e614e container secret-volume-test: STEP: delete the pod May 29 13:08:22.202: INFO: Waiting for pod pod-secrets-8ef877c9-4126-4884-b7b7-d9b83d4e614e to disappear May 29 13:08:22.205: INFO: Pod pod-secrets-8ef877c9-4126-4884-b7b7-d9b83d4e614e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:08:22.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2415" for this suite. 
May 29 13:08:28.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:08:28.305: INFO: namespace secrets-2415 deletion completed in 6.097517668s

• [SLOW TEST:10.277 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:08:28.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
May 29 13:08:28.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-235'
May 29 13:08:28.677: INFO: stderr: ""
May 29 13:08:28.677: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 29 13:08:28.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-235'
May 29 13:08:28.801: INFO: stderr: ""
May 29 13:08:28.801: INFO: stdout: "update-demo-nautilus-47j9j update-demo-nautilus-tzqt8 "
May 29 13:08:28.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47j9j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-235'
May 29 13:08:28.895: INFO: stderr: ""
May 29 13:08:28.895: INFO: stdout: ""
May 29 13:08:28.895: INFO: update-demo-nautilus-47j9j is created but not running
May 29 13:08:33.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-235'
May 29 13:08:33.994: INFO: stderr: ""
May 29 13:08:33.994: INFO: stdout: "update-demo-nautilus-47j9j update-demo-nautilus-tzqt8 "
May 29 13:08:33.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47j9j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-235'
May 29 13:08:34.079: INFO: stderr: ""
May 29 13:08:34.079: INFO: stdout: "true"
May 29 13:08:34.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-47j9j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-235'
May 29 13:08:34.167: INFO: stderr: ""
May 29 13:08:34.167: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 29 13:08:34.167: INFO: validating pod update-demo-nautilus-47j9j
May 29 13:08:34.171: INFO: got data: { "image": "nautilus.jpg" }
May 29 13:08:34.171: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 29 13:08:34.171: INFO: update-demo-nautilus-47j9j is verified up and running
May 29 13:08:34.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tzqt8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-235'
May 29 13:08:34.265: INFO: stderr: ""
May 29 13:08:34.265: INFO: stdout: "true"
May 29 13:08:34.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tzqt8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-235'
May 29 13:08:34.361: INFO: stderr: ""
May 29 13:08:34.361: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 29 13:08:34.361: INFO: validating pod update-demo-nautilus-tzqt8
May 29 13:08:34.365: INFO: got data: { "image": "nautilus.jpg" }
May 29 13:08:34.365: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 29 13:08:34.365: INFO: update-demo-nautilus-tzqt8 is verified up and running
STEP: rolling-update to new replication controller
May 29 13:08:34.367: INFO: scanned /root for discovery docs:
May 29 13:08:34.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-235'
May 29 13:08:57.258: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 29 13:08:57.258: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 29 13:08:57.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-235'
May 29 13:08:57.363: INFO: stderr: ""
May 29 13:08:57.363: INFO: stdout: "update-demo-kitten-brchx update-demo-kitten-p8f4r "
May 29 13:08:57.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-brchx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-235'
May 29 13:08:57.453: INFO: stderr: ""
May 29 13:08:57.453: INFO: stdout: "true"
May 29 13:08:57.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-brchx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-235'
May 29 13:08:57.540: INFO: stderr: ""
May 29 13:08:57.540: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 29 13:08:57.540: INFO: validating pod update-demo-kitten-brchx
May 29 13:08:57.551: INFO: got data: { "image": "kitten.jpg" }
May 29 13:08:57.552: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 29 13:08:57.552: INFO: update-demo-kitten-brchx is verified up and running
May 29 13:08:57.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-p8f4r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-235'
May 29 13:08:57.645: INFO: stderr: ""
May 29 13:08:57.645: INFO: stdout: "true"
May 29 13:08:57.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-p8f4r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-235'
May 29 13:08:57.739: INFO: stderr: ""
May 29 13:08:57.739: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 29 13:08:57.739: INFO: validating pod update-demo-kitten-p8f4r
May 29 13:08:57.754: INFO: got data: { "image": "kitten.jpg" }
May 29 13:08:57.754: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 29 13:08:57.754: INFO: update-demo-kitten-p8f4r is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:08:57.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-235" for this suite.
May 29 13:09:21.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:09:21.866: INFO: namespace kubectl-235 deletion completed in 24.108191007s

• [SLOW TEST:53.560 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:09:21.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-506
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 29 13:09:21.945: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 29 13:09:42.091: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.112:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-506 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 13:09:42.091: INFO: >>> kubeConfig: /root/.kube/config
I0529 13:09:42.119582 7 log.go:172] (0xc002422580) (0xc0020adb80) Create stream
I0529 13:09:42.119616 7 log.go:172] (0xc002422580) (0xc0020adb80) Stream added, broadcasting: 1
I0529 13:09:42.121937 7 log.go:172] (0xc002422580) Reply frame received for 1
I0529 13:09:42.121988 7 log.go:172] (0xc002422580) (0xc0020adc20) Create stream
I0529 13:09:42.122016 7 log.go:172] (0xc002422580) (0xc0020adc20) Stream added, broadcasting: 3
I0529 13:09:42.122801 7 log.go:172] (0xc002422580) Reply frame received for 3
I0529 13:09:42.122827 7 log.go:172] (0xc002422580) (0xc0023683c0) Create stream
I0529 13:09:42.122835 7 log.go:172] (0xc002422580) (0xc0023683c0) Stream added, broadcasting: 5
I0529 13:09:42.123353 7 log.go:172] (0xc002422580) Reply frame received for 5
I0529 13:09:42.220232 7 log.go:172] (0xc002422580) Data frame received for 3
I0529 13:09:42.220258 7 log.go:172] (0xc0020adc20) (3) Data frame handling
I0529 13:09:42.220266 7 log.go:172] (0xc0020adc20) (3) Data frame sent
I0529 13:09:42.220271 7 log.go:172] (0xc002422580) Data frame received for 3
I0529 13:09:42.220274 7 log.go:172] (0xc0020adc20) (3) Data frame handling
I0529 13:09:42.220293 7 log.go:172] (0xc002422580) Data frame received for 5
I0529 13:09:42.220299 7 log.go:172] (0xc0023683c0) (5) Data frame handling
I0529 13:09:42.222013 7 log.go:172] (0xc002422580) Data frame received for 1
I0529 13:09:42.222033 7 log.go:172] (0xc0020adb80) (1) Data frame handling
I0529 13:09:42.222044 7 log.go:172] (0xc0020adb80) (1) Data frame sent
I0529 13:09:42.222059 7 log.go:172] (0xc002422580) (0xc0020adb80) Stream removed, broadcasting: 1
I0529 13:09:42.222083 7 log.go:172] (0xc002422580) Go away received
I0529 13:09:42.222202 7 log.go:172] (0xc002422580) (0xc0020adb80) Stream removed, broadcasting: 1
I0529 13:09:42.222236 7 log.go:172] (0xc002422580) (0xc0020adc20) Stream removed, broadcasting: 3
I0529 13:09:42.222245 7 log.go:172] (0xc002422580) (0xc0023683c0) Stream removed, broadcasting: 5
May 29 13:09:42.222: INFO: Found all expected endpoints: [netserver-0]
May 29 13:09:42.224: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.207:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-506 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 13:09:42.224: INFO: >>> kubeConfig: /root/.kube/config
I0529 13:09:42.248620 7 log.go:172] (0xc0023c6a50) (0xc001eb8460) Create stream
I0529 13:09:42.248639 7 log.go:172] (0xc0023c6a50) (0xc001eb8460) Stream added, broadcasting: 1
I0529 13:09:42.251000 7 log.go:172] (0xc0023c6a50) Reply frame received for 1
I0529 13:09:42.251047 7 log.go:172] (0xc0023c6a50) (0xc0009f48c0) Create stream
I0529 13:09:42.251058 7 log.go:172] (0xc0023c6a50) (0xc0009f48c0) Stream added, broadcasting: 3
I0529 13:09:42.251885 7 log.go:172] (0xc0023c6a50) Reply frame received for 3
I0529 13:09:42.251924 7 log.go:172] (0xc0023c6a50) (0xc002368460) Create stream
I0529 13:09:42.251935 7 log.go:172] (0xc0023c6a50) (0xc002368460) Stream added, broadcasting: 5
I0529 13:09:42.252799 7 log.go:172] (0xc0023c6a50) Reply frame received for 5
I0529 13:09:42.333317 7 log.go:172] (0xc0023c6a50) Data frame received for 3
I0529 13:09:42.333351 7 log.go:172] (0xc0009f48c0) (3) Data frame handling
I0529 13:09:42.333364 7 log.go:172] (0xc0009f48c0) (3) Data frame sent
I0529 13:09:42.333396 7 log.go:172] (0xc0023c6a50) Data frame received for 5
I0529 13:09:42.333447 7 log.go:172] (0xc002368460) (5) Data frame handling
I0529 13:09:42.333495 7 log.go:172] (0xc0023c6a50) Data frame received for 3
I0529 13:09:42.333517 7 log.go:172] (0xc0009f48c0) (3) Data frame handling
I0529 13:09:42.334605 7 log.go:172] (0xc0023c6a50) Data frame received for 1
I0529 13:09:42.334628 7 log.go:172] (0xc001eb8460) (1) Data frame handling
I0529 13:09:42.334651 7 log.go:172] (0xc001eb8460) (1) Data frame sent
I0529 13:09:42.334778 7 log.go:172] (0xc0023c6a50) (0xc001eb8460) Stream removed, broadcasting: 1
I0529 13:09:42.334864 7 log.go:172] (0xc0023c6a50) Go away received
I0529 13:09:42.334940 7 log.go:172] (0xc0023c6a50) (0xc001eb8460) Stream removed, broadcasting: 1
I0529 13:09:42.334960 7 log.go:172] (0xc0023c6a50) (0xc0009f48c0) Stream removed, broadcasting: 3
I0529 13:09:42.334970 7 log.go:172] (0xc0023c6a50) (0xc002368460) Stream removed, broadcasting: 5
May 29 13:09:42.334: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:09:42.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-506" for this suite.
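Each endpoint check above is just a curl exec'd inside the host-network helper pod; an equivalent manual probe, assuming the ephemeral pod IP 10.244.1.112 from this run were still live:

  # Ask the netserver pod for its hostname from the hostexec container, as ExecWithOptions does
  kubectl exec host-test-container-pod -c hostexec --namespace=pod-network-test-506 -- \
    /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.112:8080/hostName | grep -v '^\s*$'"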
May 29 13:10:04.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:10:04.430: INFO: namespace pod-network-test-506 deletion completed in 22.091636154s

• [SLOW TEST:42.564 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:10:04.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 29 13:10:09.045: INFO: Successfully updated pod "annotationupdate6d69d615-1e31-411e-a601-d6cb15493974"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:10:13.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7029" for this suite.
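The pod under test surfaces its own annotations through a projected downwardAPI volume, and the suite then mutates them in place; a hand-rolled sketch of the same setup (pod name, annotation, and the busybox loop are illustrative, not the suite's own image):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: annotationupdate-demo        # hypothetical name
    annotations:
      builder: alice
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: annotations
              fieldRef:
                fieldPath: metadata.annotations
  EOF
  # The kubelet rewrites /etc/podinfo/annotations shortly after this change
  kubectl annotate pod annotationupdate-demo builder=bob --overwrite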
May 29 13:10:35.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:10:35.236: INFO: namespace projected-7029 deletion completed in 22.101500356s

• [SLOW TEST:30.806 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:10:35.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 29 13:10:35.303: INFO: Waiting up to 5m0s for pod "pod-33ba9ecb-2aa9-4d86-9a98-47ecc12fab04" in namespace "emptydir-118" to be "success or failure"
May 29 13:10:35.318: INFO: Pod "pod-33ba9ecb-2aa9-4d86-9a98-47ecc12fab04": Phase="Pending", Reason="", readiness=false. Elapsed: 14.769653ms
May 29 13:10:37.323: INFO: Pod "pod-33ba9ecb-2aa9-4d86-9a98-47ecc12fab04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019577801s
May 29 13:10:39.326: INFO: Pod "pod-33ba9ecb-2aa9-4d86-9a98-47ecc12fab04": Phase="Running", Reason="", readiness=true. Elapsed: 4.022719943s
May 29 13:10:41.330: INFO: Pod "pod-33ba9ecb-2aa9-4d86-9a98-47ecc12fab04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026642354s
STEP: Saw pod success
May 29 13:10:41.330: INFO: Pod "pod-33ba9ecb-2aa9-4d86-9a98-47ecc12fab04" satisfied condition "success or failure"
May 29 13:10:41.333: INFO: Trying to get logs from node iruya-worker2 pod pod-33ba9ecb-2aa9-4d86-9a98-47ecc12fab04 container test-container:
STEP: delete the pod
May 29 13:10:41.390: INFO: Waiting for pod pod-33ba9ecb-2aa9-4d86-9a98-47ecc12fab04 to disappear
May 29 13:10:41.392: INFO: Pod pod-33ba9ecb-2aa9-4d86-9a98-47ecc12fab04 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:10:41.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-118" for this suite.
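What "emptydir 0666 on tmpfs" boils down to is a memory-backed emptyDir whose files carry the expected mode bits; a rough busybox stand-in for the suite's test container (all names hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0666-demo           # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "mount | grep /test-volume && touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory                 # tmpfs backing, the "tmpfs" in the test name
  EOF
  kubectl logs emptydir-0666-demo      # inspect once the pod reaches Succeeded, as the "success or failure" wait does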
May 29 13:10:47.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:10:47.474: INFO: namespace emptydir-118 deletion completed in 6.078257316s

• [SLOW TEST:12.238 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:10:47.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 29 13:10:47.523: INFO: Waiting up to 5m0s for pod "downwardapi-volume-888ee456-f9b9-42d0-a10c-4146fdc4f0f3" in namespace "projected-2898" to be "success or failure"
May 29 13:10:47.547: INFO: Pod "downwardapi-volume-888ee456-f9b9-42d0-a10c-4146fdc4f0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 24.210559ms
May 29 13:10:49.551: INFO: Pod "downwardapi-volume-888ee456-f9b9-42d0-a10c-4146fdc4f0f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028540468s
May 29 13:10:51.555: INFO: Pod "downwardapi-volume-888ee456-f9b9-42d0-a10c-4146fdc4f0f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032736825s
STEP: Saw pod success
May 29 13:10:51.555: INFO: Pod "downwardapi-volume-888ee456-f9b9-42d0-a10c-4146fdc4f0f3" satisfied condition "success or failure"
May 29 13:10:51.559: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-888ee456-f9b9-42d0-a10c-4146fdc4f0f3 container client-container:
STEP: delete the pod
May 29 13:10:51.612: INFO: Waiting for pod downwardapi-volume-888ee456-f9b9-42d0-a10c-4146fdc4f0f3 to disappear
May 29 13:10:51.615: INFO: Pod downwardapi-volume-888ee456-f9b9-42d0-a10c-4146fdc4f0f3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:10:51.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2898" for this suite.
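The downward API file read here resolves limits.cpu even though the container sets no limit, falling back to node allocatable; a minimal sketch (pod name and file path are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-cpulimit-demo    # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]   # no resources.limits set on purpose
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
  EOF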
May 29 13:10:57.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:10:57.723: INFO: namespace projected-2898 deletion completed in 6.103971561s

• [SLOW TEST:10.249 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:10:57.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0529 13:11:28.422338 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 29 13:11:28.422: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:11:28.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5715" for this suite.
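The orphaning behavior checked here is driven purely by deleteOptions on the delete call; a sketch against the raw API via kubectl proxy, assuming a Deployment named mydeploy in namespace default:

  kubectl proxy --port=8001 &
  curl -X DELETE \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
    http://127.0.0.1:8001/apis/apps/v1/namespaces/default/deployments/mydeploy
  # The ReplicaSet and its pods survive; only the Deployment object is removed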
May 29 13:11:34.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:11:34.514: INFO: namespace gc-5715 deletion completed in 6.088262267s

• [SLOW TEST:36.791 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:11:34.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-e4b44818-364f-4425-8790-8270e53472e6
STEP: Creating a pod to test consume secrets
May 29 13:11:34.722: INFO: Waiting up to 5m0s for pod "pod-secrets-596a334d-1b50-4301-9dcb-7e7c4589a05b" in namespace "secrets-1271" to be "success or failure"
May 29 13:11:34.751: INFO: Pod "pod-secrets-596a334d-1b50-4301-9dcb-7e7c4589a05b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.083807ms
May 29 13:11:36.755: INFO: Pod "pod-secrets-596a334d-1b50-4301-9dcb-7e7c4589a05b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033471611s
May 29 13:11:38.760: INFO: Pod "pod-secrets-596a334d-1b50-4301-9dcb-7e7c4589a05b": Phase="Running", Reason="", readiness=true. Elapsed: 4.037700077s
May 29 13:11:40.764: INFO: Pod "pod-secrets-596a334d-1b50-4301-9dcb-7e7c4589a05b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042397418s
STEP: Saw pod success
May 29 13:11:40.764: INFO: Pod "pod-secrets-596a334d-1b50-4301-9dcb-7e7c4589a05b" satisfied condition "success or failure"
May 29 13:11:40.768: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-596a334d-1b50-4301-9dcb-7e7c4589a05b container secret-env-test:
STEP: delete the pod
May 29 13:11:40.804: INFO: Waiting for pod pod-secrets-596a334d-1b50-4301-9dcb-7e7c4589a05b to disappear
May 29 13:11:40.808: INFO: Pod pod-secrets-596a334d-1b50-4301-9dcb-7e7c4589a05b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:11:40.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1271" for this suite.
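Consuming a secret through an environment variable, the pattern this test verifies, reduces to a secretKeyRef; a minimal sketch with hypothetical names:

  kubectl create secret generic demo-secret --from-literal=data-1=value-1   # hypothetical name/key
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-env-demo                # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: secret-env-test
      image: busybox
      command: ["sh", "-c", "echo $SECRET_DATA"]
      env:
      - name: SECRET_DATA
        valueFrom:
          secretKeyRef:
            name: demo-secret
            key: data-1
  EOF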
May 29 13:11:46.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:11:46.908: INFO: namespace secrets-1271 deletion completed in 6.0965604s

• [SLOW TEST:12.394 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:11:46.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-jqfp
STEP: Creating a pod to test atomic-volume-subpath
May 29 13:11:46.995: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jqfp" in namespace "subpath-8503" to be "success or failure"
May 29 13:11:47.006: INFO: Pod "pod-subpath-test-configmap-jqfp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.465165ms
May 29 13:11:49.010: INFO: Pod "pod-subpath-test-configmap-jqfp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014873671s
May 29 13:11:51.014: INFO: Pod "pod-subpath-test-configmap-jqfp": Phase="Running", Reason="", readiness=true. Elapsed: 4.019139452s
May 29 13:11:53.018: INFO: Pod "pod-subpath-test-configmap-jqfp": Phase="Running", Reason="", readiness=true. Elapsed: 6.023024392s
May 29 13:11:55.022: INFO: Pod "pod-subpath-test-configmap-jqfp": Phase="Running", Reason="", readiness=true. Elapsed: 8.026868656s
May 29 13:11:57.027: INFO: Pod "pod-subpath-test-configmap-jqfp": Phase="Running", Reason="", readiness=true. Elapsed: 10.031403905s
May 29 13:11:59.032: INFO: Pod "pod-subpath-test-configmap-jqfp": Phase="Running", Reason="", readiness=true. Elapsed: 12.036615815s
May 29 13:12:01.037: INFO: Pod "pod-subpath-test-configmap-jqfp": Phase="Running", Reason="", readiness=true. Elapsed: 14.041770901s
May 29 13:12:03.042: INFO: Pod "pod-subpath-test-configmap-jqfp": Phase="Running", Reason="", readiness=true. Elapsed: 16.046461128s
May 29 13:12:05.046: INFO: Pod "pod-subpath-test-configmap-jqfp": Phase="Running", Reason="", readiness=true. Elapsed: 18.050655211s
May 29 13:12:07.051: INFO: Pod "pod-subpath-test-configmap-jqfp": Phase="Running", Reason="", readiness=true. Elapsed: 20.055216523s
May 29 13:12:09.055: INFO: Pod "pod-subpath-test-configmap-jqfp": Phase="Running", Reason="", readiness=true. Elapsed: 22.060057321s
May 29 13:12:11.059: INFO: Pod "pod-subpath-test-configmap-jqfp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.064095636s
STEP: Saw pod success
May 29 13:12:11.059: INFO: Pod "pod-subpath-test-configmap-jqfp" satisfied condition "success or failure"
May 29 13:12:11.063: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-jqfp container test-container-subpath-configmap-jqfp:
STEP: delete the pod
May 29 13:12:11.089: INFO: Waiting for pod pod-subpath-test-configmap-jqfp to disappear
May 29 13:12:11.158: INFO: Pod pod-subpath-test-configmap-jqfp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jqfp
May 29 13:12:11.158: INFO: Deleting pod "pod-subpath-test-configmap-jqfp" in namespace "subpath-8503"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:12:11.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8503" for this suite.
May 29 13:12:17.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:12:17.385: INFO: namespace subpath-8503 deletion completed in 6.220047065s

• [SLOW TEST:30.476 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:12:17.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
May 29 13:12:17.453: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:12:17.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8367" for this suite.
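Passing port 0 delegates port selection to the kernel, so the test has to parse the bound port from the proxy's startup output; the same dance by hand (the sleep and the port value are illustrative):

  kubectl proxy -p 0 --disable-filter &     # prints e.g. "Starting to serve on 127.0.0.1:34567"
  sleep 1
  curl http://127.0.0.1:34567/api/          # substitute the port the proxy actually reported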
May 29 13:12:23.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:12:23.659: INFO: namespace kubectl-8367 deletion completed in 6.113298202s

• [SLOW TEST:6.274 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:12:23.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-00dcb919-cc31-4304-b2ba-631916386a32
STEP: Creating a pod to test consume configMaps
May 29 13:12:23.742: INFO: Waiting up to 5m0s for pod "pod-configmaps-532c706a-6811-4c2d-8536-aaa00f4b2b6c" in namespace "configmap-8244" to be "success or failure"
May 29 13:12:23.751: INFO: Pod "pod-configmaps-532c706a-6811-4c2d-8536-aaa00f4b2b6c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.111639ms
May 29 13:12:25.871: INFO: Pod "pod-configmaps-532c706a-6811-4c2d-8536-aaa00f4b2b6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128477908s
May 29 13:12:27.878: INFO: Pod "pod-configmaps-532c706a-6811-4c2d-8536-aaa00f4b2b6c": Phase="Running", Reason="", readiness=true. Elapsed: 4.135436289s
May 29 13:12:29.882: INFO: Pod "pod-configmaps-532c706a-6811-4c2d-8536-aaa00f4b2b6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.139798176s
STEP: Saw pod success
May 29 13:12:29.882: INFO: Pod "pod-configmaps-532c706a-6811-4c2d-8536-aaa00f4b2b6c" satisfied condition "success or failure"
May 29 13:12:29.885: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-532c706a-6811-4c2d-8536-aaa00f4b2b6c container configmap-volume-test:
STEP: delete the pod
May 29 13:12:29.917: INFO: Waiting for pod pod-configmaps-532c706a-6811-4c2d-8536-aaa00f4b2b6c to disappear
May 29 13:12:29.931: INFO: Pod pod-configmaps-532c706a-6811-4c2d-8536-aaa00f4b2b6c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:12:29.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8244" for this suite.
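A defaultMode on a configMap volume sets the mode bits of every file it projects; a compact sketch with hypothetical names:

  kubectl create configmap demo-cm --from-literal=data-1=value-1     # hypothetical name/key
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-mode-demo            # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "stat -c %a /etc/cm/data-1 && cat /etc/cm/data-1"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/cm
    volumes:
    - name: configmap-volume
      configMap:
        name: demo-cm
        defaultMode: 0400                # the knob under test
  EOF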
May 29 13:12:35.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:12:36.044: INFO: namespace configmap-8244 deletion completed in 6.109190437s

• [SLOW TEST:12.384 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:12:36.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 29 13:12:36.124: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f2d6a36-9838-49ad-a7cd-33ba918cc9f7" in namespace "downward-api-8684" to be "success or failure"
May 29 13:12:36.139: INFO: Pod "downwardapi-volume-1f2d6a36-9838-49ad-a7cd-33ba918cc9f7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.473317ms
May 29 13:12:38.231: INFO: Pod "downwardapi-volume-1f2d6a36-9838-49ad-a7cd-33ba918cc9f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106826461s
May 29 13:12:40.235: INFO: Pod "downwardapi-volume-1f2d6a36-9838-49ad-a7cd-33ba918cc9f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111077131s
STEP: Saw pod success
May 29 13:12:40.235: INFO: Pod "downwardapi-volume-1f2d6a36-9838-49ad-a7cd-33ba918cc9f7" satisfied condition "success or failure"
May 29 13:12:40.238: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1f2d6a36-9838-49ad-a7cd-33ba918cc9f7 container client-container:
STEP: delete the pod
May 29 13:12:40.269: INFO: Waiting for pod downwardapi-volume-1f2d6a36-9838-49ad-a7cd-33ba918cc9f7 to disappear
May 29 13:12:40.386: INFO: Pod downwardapi-volume-1f2d6a36-9838-49ad-a7cd-33ba918cc9f7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:12:40.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8684" for this suite.
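Unlike defaultMode, this test pins the mode of a single downward API item; the distinguishing stanza, in a minimal hypothetical pod:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-itemmode-demo      # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "stat -c %a /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          mode: 0400                     # per-item mode, the knob under test
          fieldRef:
            fieldPath: metadata.name
  EOF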
May 29 13:12:46.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:12:46.483: INFO: namespace downward-api-8684 deletion completed in 6.093164548s

• [SLOW TEST:10.439 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:12:46.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 29 13:12:51.103: INFO: Successfully updated pod "labelsupdatec7497a85-3890-4c7c-9d93-6440bd973f67"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:12:53.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2585" for this suite.
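This is the same live-update mechanism as the annotations sketch earlier, only keyed to fieldPath: metadata.labels; given such a pod, the trigger is just:

  # Hypothetical pod name; the projected file under /etc/podinfo follows the label change
  kubectl label pod labelsupdate-demo mylabel=new-value --overwrite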
May 29 13:13:15.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:13:15.268: INFO: namespace projected-2585 deletion completed in 22.094094042s

• [SLOW TEST:28.785 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:13:15.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-1a7b64c7-0c4e-4cfc-9581-8dc092fb626a
STEP: Creating a pod to test consume configMaps
May 29 13:13:15.336: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-85cbdf5c-f2f9-498a-943b-5a81fc84ec32" in namespace "projected-6698" to be "success or failure"
May 29 13:13:15.362: INFO: Pod "pod-projected-configmaps-85cbdf5c-f2f9-498a-943b-5a81fc84ec32": Phase="Pending", Reason="", readiness=false. Elapsed: 26.874061ms
May 29 13:13:17.367: INFO: Pod "pod-projected-configmaps-85cbdf5c-f2f9-498a-943b-5a81fc84ec32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031019604s
May 29 13:13:19.371: INFO: Pod "pod-projected-configmaps-85cbdf5c-f2f9-498a-943b-5a81fc84ec32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034931603s
STEP: Saw pod success
May 29 13:13:19.371: INFO: Pod "pod-projected-configmaps-85cbdf5c-f2f9-498a-943b-5a81fc84ec32" satisfied condition "success or failure"
May 29 13:13:19.373: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-85cbdf5c-f2f9-498a-943b-5a81fc84ec32 container projected-configmap-volume-test:
STEP: delete the pod
May 29 13:13:19.440: INFO: Waiting for pod pod-projected-configmaps-85cbdf5c-f2f9-498a-943b-5a81fc84ec32 to disappear
May 29 13:13:19.500: INFO: Pod pod-projected-configmaps-85cbdf5c-f2f9-498a-943b-5a81fc84ec32 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:13:19.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6698" for this suite.
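The "mappings" in this test name are items entries that rename configMap keys on disk; the relevant stanza, in a minimal hypothetical pod:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-configmap-map-demo   # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/projected/path/to/data-2"]
      volumeMounts:
      - name: projected-configmap-volume
        mountPath: /etc/projected
    volumes:
    - name: projected-configmap-volume
      projected:
        sources:
        - configMap:
            name: demo-cm                # hypothetical; create with a data-1 key beforehand
            items:
            - key: data-1
              path: path/to/data-2       # key renamed at mount time
  EOF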
May 29 13:13:25.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:13:25.606: INFO: namespace projected-6698 deletion completed in 6.101128557s

• [SLOW TEST:10.337 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:13:25.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-76ef0a1e-2a46-41bf-9aa2-5a6d15e72b1f
STEP: Creating a pod to test consume secrets
May 29 13:13:25.715: INFO: Waiting up to 5m0s for pod "pod-secrets-f7887b5c-f14b-4d1a-b53b-48955e257359" in namespace "secrets-3552" to be "success or failure"
May 29 13:13:25.742: INFO: Pod "pod-secrets-f7887b5c-f14b-4d1a-b53b-48955e257359": Phase="Pending", Reason="", readiness=false. Elapsed: 26.41714ms
May 29 13:13:27.824: INFO: Pod "pod-secrets-f7887b5c-f14b-4d1a-b53b-48955e257359": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1085155s
May 29 13:13:29.828: INFO: Pod "pod-secrets-f7887b5c-f14b-4d1a-b53b-48955e257359": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.112357714s
STEP: Saw pod success
May 29 13:13:29.828: INFO: Pod "pod-secrets-f7887b5c-f14b-4d1a-b53b-48955e257359" satisfied condition "success or failure"
May 29 13:13:29.831: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-f7887b5c-f14b-4d1a-b53b-48955e257359 container secret-volume-test:
STEP: delete the pod
May 29 13:13:29.853: INFO: Waiting for pod pod-secrets-f7887b5c-f14b-4d1a-b53b-48955e257359 to disappear
May 29 13:13:29.867: INFO: Pod pod-secrets-f7887b5c-f14b-4d1a-b53b-48955e257359 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:13:29.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3552" for this suite.
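Three knobs combine in this test: a non-root UID, an fsGroup, and a secret defaultMode; a minimal hypothetical pod wiring them together:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-nonroot-demo            # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                    # non-root reader
      fsGroup: 1000                      # group ownership applied to the volume
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -ln /etc/secret && cat /etc/secret/*"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret
    volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret          # hypothetical; create beforehand
        defaultMode: 0440                # group-readable, so UID 1000 can read via fsGroup
  EOF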
May 29 13:13:35.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:13:35.962: INFO: namespace secrets-3552 deletion completed in 6.091573468s

• [SLOW TEST:10.356 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:13:35.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:13:40.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7526" for this suite.
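A container that always exits non-zero under restartPolicy Never lands in a terminated state whose reason the test asserts; a quick reproduction with hypothetical names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: bin-false-demo                 # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: bin-false
      image: busybox
      command: ["/bin/false"]            # always fails
  EOF
  # The terminated state carries a reason (typically "Error") and the non-zero exit code
  kubectl get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'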
May 29 13:13:46.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:13:46.138: INFO: namespace kubelet-test-7526 deletion completed in 6.090986636s

• [SLOW TEST:10.176 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:13:46.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 29 13:13:46.266: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6478,SelfLink:/api/v1/namespaces/watch-6478/configmaps/e2e-watch-test-resource-version,UID:40668801-9e11-4d4b-819b-159c27e70683,ResourceVersion:13545488,Generation:0,CreationTimestamp:2020-05-29 13:13:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 29 13:13:46.267: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6478,SelfLink:/api/v1/namespaces/watch-6478/configmaps/e2e-watch-test-resource-version,UID:40668801-9e11-4d4b-819b-159c27e70683,ResourceVersion:13545489,Generation:0,CreationTimestamp:2020-05-29 13:13:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:13:46.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6478" for this suite.
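Starting a watch at a saved resourceVersion replays only what happened after that version, which is exactly what the MODIFIED and DELETED events above demonstrate; a sketch against the raw API (names hypothetical):

  kubectl create configmap watch-demo --from-literal=mutation=1
  kubectl patch configmap watch-demo -p '{"data":{"mutation":"2"}}'
  RV=$(kubectl get configmap watch-demo -o jsonpath='{.metadata.resourceVersion}')   # version after the update
  kubectl proxy --port=8001 &
  kubectl delete configmap watch-demo
  # Only events newer than RV stream back: the DELETED event, not the earlier history
  curl "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=$RV"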
May 29 13:13:52.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:13:52.364: INFO: namespace watch-6478 deletion completed in 6.093815793s

• [SLOW TEST:6.225 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:13:52.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5652
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
May 29 13:13:52.466: INFO: Found 0 stateful pods, waiting for 3
May 29 13:14:02.472: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 29 13:14:02.472: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 29 13:14:02.472: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 29 13:14:12.472: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 29 13:14:12.472: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 29 13:14:12.472: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
May 29 13:14:12.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5652 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 29 13:14:15.389: INFO: stderr: "I0529 13:14:15.243286 920 log.go:172] (0xc000b70420) (0xc000956780) Create stream\nI0529 13:14:15.243348 920 log.go:172] (0xc000b70420) (0xc000956780) Stream added, broadcasting: 1\nI0529 13:14:15.246385 920 log.go:172] (0xc000b70420) Reply frame received for 1\nI0529 13:14:15.246413 920 log.go:172] (0xc000b70420) (0xc000956820) Create stream\nI0529 13:14:15.246425 920 log.go:172] (0xc000b70420) (0xc000956820) Stream added, broadcasting: 3\nI0529 13:14:15.247312 920 log.go:172] (0xc000b70420) Reply frame received for 3\nI0529 13:14:15.247341 920 log.go:172] (0xc000b70420) (0xc0009568c0) Create stream\nI0529 13:14:15.247349 920 log.go:172] (0xc000b70420) (0xc0009568c0) Stream added, broadcasting: 5\nI0529 13:14:15.248223 920 log.go:172] (0xc000b70420) Reply frame received for 5\nI0529 13:14:15.345830 920 log.go:172] (0xc000b70420) Data frame received for 5\nI0529 13:14:15.345868 920 log.go:172] (0xc0009568c0) (5) Data frame handling\nI0529 13:14:15.345887 920 log.go:172] (0xc0009568c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0529 13:14:15.378636 920 log.go:172] (0xc000b70420) Data frame received for 3\nI0529 13:14:15.378680 920 log.go:172] (0xc000956820) (3) Data frame handling\nI0529 13:14:15.378751 920 log.go:172] (0xc000956820) (3) Data frame sent\nI0529 13:14:15.378782 920 log.go:172] (0xc000b70420) Data frame received for 3\nI0529 13:14:15.378809 920 log.go:172] (0xc000956820) (3) Data frame handling\nI0529 13:14:15.379097 920 log.go:172] (0xc000b70420) Data frame received for 5\nI0529 13:14:15.379138 920 log.go:172] (0xc0009568c0) (5) Data frame handling\nI0529 13:14:15.380982 920 log.go:172] (0xc000b70420) Data frame received for 1\nI0529 13:14:15.381006 920 log.go:172] (0xc000956780) (1) Data frame handling\nI0529 13:14:15.381031 920 log.go:172] (0xc000956780) (1) Data frame sent\nI0529 13:14:15.381043 920 log.go:172] (0xc000b70420) (0xc000956780) Stream removed, broadcasting: 1\nI0529 13:14:15.381056 920 log.go:172] (0xc000b70420) Go away received\nI0529 13:14:15.381683 920 log.go:172] (0xc000b70420) (0xc000956780) Stream removed, broadcasting: 1\nI0529 13:14:15.381704 920 log.go:172] (0xc000b70420) (0xc000956820) Stream removed, broadcasting: 3\nI0529 13:14:15.381715 920 log.go:172] (0xc000b70420) (0xc0009568c0) Stream removed, broadcasting: 5\n"
May 29 13:14:15.389: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 29 13:14:15.389: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
May 29 13:14:25.422: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
May 29 13:14:35.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5652 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 29 13:14:35.736: INFO: stderr: "I0529 13:14:35.668562 953 log.go:172] (0xc00097a420) (0xc0003346e0) Create stream\nI0529 13:14:35.668628 953 log.go:172] (0xc00097a420) (0xc0003346e0) Stream added, broadcasting: 1\nI0529 13:14:35.672788 953 log.go:172] (0xc00097a420) Reply frame received for 1\nI0529 13:14:35.672845 953 log.go:172] (0xc00097a420) (0xc00072c1e0) Create stream\nI0529 13:14:35.672869 953 log.go:172] (0xc00097a420) (0xc00072c1e0) Stream added, broadcasting: 3\nI0529 13:14:35.674124 953 log.go:172] (0xc00097a420) Reply frame received for 3\nI0529 13:14:35.674163 953 log.go:172] (0xc00097a420) (0xc00072c280) Create stream\nI0529 13:14:35.674176 953 log.go:172] (0xc00097a420) (0xc00072c280) Stream added, broadcasting: 5\nI0529 13:14:35.675122 953 log.go:172] (0xc00097a420) Reply frame received for 5\nI0529 13:14:35.728497 953 log.go:172] (0xc00097a420) Data frame received for 3\nI0529 13:14:35.728536 953 log.go:172] (0xc00072c1e0) (3) Data frame handling\nI0529 13:14:35.728548 953 log.go:172] (0xc00072c1e0) (3) Data frame sent\nI0529 13:14:35.728571 953 log.go:172] (0xc00097a420) Data frame received for 5\nI0529 13:14:35.728588 953 log.go:172] (0xc00072c280) (5) Data frame handling\nI0529 13:14:35.728605 953 log.go:172] (0xc00072c280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0529 13:14:35.728632 953 log.go:172] (0xc00097a420) Data frame received for 5\nI0529 13:14:35.728659 953 log.go:172] (0xc00072c280) (5) Data frame handling\nI0529 13:14:35.728906 953 log.go:172] (0xc00097a420) Data frame received for 3\nI0529 13:14:35.728949 953 log.go:172] (0xc00072c1e0) (3) Data frame handling\nI0529 13:14:35.730844 953 log.go:172] (0xc00097a420) Data frame received for 1\nI0529 13:14:35.730869 953 log.go:172] (0xc0003346e0) (1) Data frame handling\nI0529 13:14:35.730884 953 log.go:172] (0xc0003346e0) (1) Data frame sent\nI0529 13:14:35.730904 953 log.go:172] (0xc00097a420) (0xc0003346e0) Stream removed, broadcasting: 1\nI0529 13:14:35.730926 953 log.go:172] (0xc00097a420) Go away received\nI0529 13:14:35.731323 953 log.go:172] (0xc00097a420) (0xc0003346e0) Stream removed, broadcasting: 1\nI0529 13:14:35.731349 953 log.go:172] (0xc00097a420) (0xc00072c1e0) Stream removed, broadcasting: 3\nI0529 13:14:35.731362 953 log.go:172] (0xc00097a420) (0xc00072c280) Stream removed, broadcasting: 5\n"
May 29 13:14:35.736: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 29 13:14:35.736: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 29 13:14:45.756: INFO: Waiting for StatefulSet statefulset-5652/ss2 to complete update
May 29 13:14:45.756: INFO: Waiting for Pod statefulset-5652/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 29 13:14:45.756: INFO: Waiting for Pod statefulset-5652/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 29 13:14:55.764: INFO: Waiting for StatefulSet statefulset-5652/ss2 to complete update
May 29 13:14:55.764: INFO: Waiting for Pod statefulset-5652/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
May 29 13:15:05.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5652 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 29 13:15:06.054: INFO: stderr: "I0529 13:15:05.895933 973 log.go:172] (0xc000ae2580) (0xc0005f8aa0) Create stream\nI0529 13:15:05.896005 973 log.go:172] (0xc000ae2580) (0xc0005f8aa0) Stream added, broadcasting: 1\nI0529 13:15:05.899908 973 log.go:172] (0xc000ae2580) Reply frame received for 1\nI0529 13:15:05.899965 973 log.go:172] (0xc000ae2580) (0xc00058e000) Create stream\nI0529 13:15:05.899979 973 log.go:172] (0xc000ae2580) (0xc00058e000) Stream added, broadcasting: 3\nI0529 13:15:05.901007 973 log.go:172] (0xc000ae2580) Reply frame received for 3\nI0529 13:15:05.901065 973 log.go:172] (0xc000ae2580) (0xc0005f8320) Create stream\nI0529 13:15:05.901092 973 log.go:172] (0xc000ae2580) (0xc0005f8320) Stream added, broadcasting: 5\nI0529 13:15:05.902153 973 log.go:172] (0xc000ae2580) Reply frame received for 5\nI0529 13:15:06.000282 973 log.go:172] (0xc000ae2580) Data frame received for 5\nI0529 13:15:06.000306 973 log.go:172] (0xc0005f8320) (5) Data frame handling\nI0529 13:15:06.000318 973 log.go:172] (0xc0005f8320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0529 13:15:06.046374 973 log.go:172] (0xc000ae2580) Data frame received for 5\nI0529 13:15:06.046559 973 log.go:172] (0xc0005f8320) (5) Data frame handling\nI0529 13:15:06.046586 973 log.go:172] (0xc000ae2580) Data frame received for 3\nI0529 13:15:06.046597 973 log.go:172] (0xc00058e000) (3) Data frame
handling\nI0529 13:15:06.046607 973 log.go:172] (0xc00058e000) (3) Data frame sent\nI0529 13:15:06.046778 973 log.go:172] (0xc000ae2580) Data frame received for 3\nI0529 13:15:06.046799 973 log.go:172] (0xc00058e000) (3) Data frame handling\nI0529 13:15:06.049454 973 log.go:172] (0xc000ae2580) Data frame received for 1\nI0529 13:15:06.049494 973 log.go:172] (0xc0005f8aa0) (1) Data frame handling\nI0529 13:15:06.049538 973 log.go:172] (0xc0005f8aa0) (1) Data frame sent\nI0529 13:15:06.049567 973 log.go:172] (0xc000ae2580) (0xc0005f8aa0) Stream removed, broadcasting: 1\nI0529 13:15:06.049602 973 log.go:172] (0xc000ae2580) Go away received\nI0529 13:15:06.049931 973 log.go:172] (0xc000ae2580) (0xc0005f8aa0) Stream removed, broadcasting: 1\nI0529 13:15:06.049949 973 log.go:172] (0xc000ae2580) (0xc00058e000) Stream removed, broadcasting: 3\nI0529 13:15:06.049955 973 log.go:172] (0xc000ae2580) (0xc0005f8320) Stream removed, broadcasting: 5\n" May 29 13:15:06.054: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 29 13:15:06.054: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 29 13:15:16.085: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 29 13:15:26.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5652 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 13:15:26.362: INFO: stderr: "I0529 13:15:26.251864 994 log.go:172] (0xc0008c4420) (0xc00022c820) Create stream\nI0529 13:15:26.251933 994 log.go:172] (0xc0008c4420) (0xc00022c820) Stream added, broadcasting: 1\nI0529 13:15:26.254353 994 log.go:172] (0xc0008c4420) Reply frame received for 1\nI0529 13:15:26.254406 994 log.go:172] (0xc0008c4420) (0xc0007ca000) Create stream\nI0529 13:15:26.254427 994 log.go:172] (0xc0008c4420) (0xc0007ca000) Stream added, broadcasting: 3\nI0529 13:15:26.255514 994 log.go:172] (0xc0008c4420) Reply frame received for 3\nI0529 13:15:26.255554 994 log.go:172] (0xc0008c4420) (0xc0007ca0a0) Create stream\nI0529 13:15:26.255587 994 log.go:172] (0xc0008c4420) (0xc0007ca0a0) Stream added, broadcasting: 5\nI0529 13:15:26.256717 994 log.go:172] (0xc0008c4420) Reply frame received for 5\nI0529 13:15:26.355854 994 log.go:172] (0xc0008c4420) Data frame received for 3\nI0529 13:15:26.355894 994 log.go:172] (0xc0007ca000) (3) Data frame handling\nI0529 13:15:26.355910 994 log.go:172] (0xc0007ca000) (3) Data frame sent\nI0529 13:15:26.355921 994 log.go:172] (0xc0008c4420) Data frame received for 3\nI0529 13:15:26.355931 994 log.go:172] (0xc0007ca000) (3) Data frame handling\nI0529 13:15:26.355964 994 log.go:172] (0xc0008c4420) Data frame received for 5\nI0529 13:15:26.355975 994 log.go:172] (0xc0007ca0a0) (5) Data frame handling\nI0529 13:15:26.355986 994 log.go:172] (0xc0007ca0a0) (5) Data frame sent\nI0529 13:15:26.355995 994 log.go:172] (0xc0008c4420) Data frame received for 5\nI0529 13:15:26.356016 994 log.go:172] (0xc0007ca0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0529 13:15:26.357881 994 log.go:172] (0xc0008c4420) Data frame received for 1\nI0529 13:15:26.357907 994 log.go:172] (0xc00022c820) (1) Data frame handling\nI0529 13:15:26.357917 994 log.go:172] (0xc00022c820) (1) Data frame sent\nI0529 13:15:26.357940 994 log.go:172] (0xc0008c4420) (0xc00022c820) Stream removed, broadcasting: 1\nI0529 13:15:26.358029 994 log.go:172] (0xc0008c4420) Go away 
received\nI0529 13:15:26.358306 994 log.go:172] (0xc0008c4420) (0xc00022c820) Stream removed, broadcasting: 1\nI0529 13:15:26.358326 994 log.go:172] (0xc0008c4420) (0xc0007ca000) Stream removed, broadcasting: 3\nI0529 13:15:26.358336 994 log.go:172] (0xc0008c4420) (0xc0007ca0a0) Stream removed, broadcasting: 5\n" May 29 13:15:26.362: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 29 13:15:26.362: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 29 13:15:36.385: INFO: Waiting for StatefulSet statefulset-5652/ss2 to complete update May 29 13:15:36.385: INFO: Waiting for Pod statefulset-5652/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 29 13:15:36.385: INFO: Waiting for Pod statefulset-5652/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 29 13:15:46.393: INFO: Waiting for StatefulSet statefulset-5652/ss2 to complete update May 29 13:15:46.393: INFO: Waiting for Pod statefulset-5652/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 29 13:15:56.393: INFO: Deleting all statefulset in ns statefulset-5652 May 29 13:15:56.395: INFO: Scaling statefulset ss2 to 0 May 29 13:16:26.448: INFO: Waiting for statefulset status.replicas updated to 0 May 29 13:16:26.451: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:16:26.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5652" for this suite. 
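The StatefulSet exercised above has roughly the following shape (a minimal sketch: the name ss2, the governing service test, the replica count, and the nginx image versions are taken from the log; the label and container name are illustrative):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test          # headless service created in namespace statefulset-5652
  replicas: 3
  selector:
    matchLabels:
      app: ss2               # illustrative label
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver      # illustrative container name
        image: docker.io/library/nginx:1.14-alpine
  updateStrategy:
    type: RollingUpdate      # pods are replaced in reverse ordinal order, as logged

Updating spec.template.spec.containers[0].image to docker.io/library/nginx:1.15-alpine produces the new controller revision seen above (ss2-7c9b54fd4c); restoring the old image rolls the set back to revision ss2-6c5cd755cd.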
May 29 13:16:34.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:16:34.612: INFO: namespace statefulset-5652 deletion completed in 8.121055536s • [SLOW TEST:162.247 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:16:34.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-54f596c9-d854-45d0-8fdc-d0da95bec78e STEP: Creating secret with name secret-projected-all-test-volume-5a460251-5829-4801-9a66-e2751f17966b STEP: Creating a pod to test Check all projections for projected volume plugin May 29 13:16:34.737: INFO: Waiting up to 5m0s for pod "projected-volume-8d8528d0-3997-404c-9707-5dd7a08478fa" in namespace "projected-4453" to be "success or failure" May 29 13:16:34.741: INFO: Pod "projected-volume-8d8528d0-3997-404c-9707-5dd7a08478fa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.782874ms May 29 13:16:36.828: INFO: Pod "projected-volume-8d8528d0-3997-404c-9707-5dd7a08478fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090930896s May 29 13:16:38.834: INFO: Pod "projected-volume-8d8528d0-3997-404c-9707-5dd7a08478fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096410558s STEP: Saw pod success May 29 13:16:38.834: INFO: Pod "projected-volume-8d8528d0-3997-404c-9707-5dd7a08478fa" satisfied condition "success or failure" May 29 13:16:38.837: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-8d8528d0-3997-404c-9707-5dd7a08478fa container projected-all-volume-test: STEP: delete the pod May 29 13:16:38.868: INFO: Waiting for pod projected-volume-8d8528d0-3997-404c-9707-5dd7a08478fa to disappear May 29 13:16:38.872: INFO: Pod projected-volume-8d8528d0-3997-404c-9707-5dd7a08478fa no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:16:38.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4453" for this suite. 
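The "projected combined" pod above mounts a configMap, a secret, and downward API data through a single projected volume; a minimal sketch of that shape (names, image, keys, and paths here are illustrative, not the suite's generated ones):

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo        # illustrative; the test generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox                   # assumption; the suite uses its own test image
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: my-configmap         # the test references its generated configmap
          items:
          - key: data
            path: cm-data
      - secret:
          name: my-secret            # the test references its generated secret
          items:
          - key: data
            path: secret-data
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name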
May 29 13:16:44.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:16:44.961: INFO: namespace projected-4453 deletion completed in 6.082948741s • [SLOW TEST:10.349 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:16:44.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:16:49.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1122" for this suite. 
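The read-only-root-filesystem check above comes down to a pod like the following sketch (name, image, and command are illustrative; the essential part is securityContext.readOnlyRootFilesystem):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo hello > /file; sleep 30"]   # the write to / is expected to fail
    securityContext:
      readOnlyRootFilesystem: true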
May 29 13:17:35.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:17:35.192: INFO: namespace kubelet-test-1122 deletion completed in 46.095586569s • [SLOW TEST:50.230 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:17:35.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium May 29 13:17:35.276: INFO: Waiting up to 5m0s for pod "pod-efef9f5b-5ee0-4463-8bb9-467e3bced4cd" in namespace "emptydir-3318" to be "success or failure" May 29 13:17:35.280: INFO: Pod "pod-efef9f5b-5ee0-4463-8bb9-467e3bced4cd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.560643ms May 29 13:17:37.283: INFO: Pod "pod-efef9f5b-5ee0-4463-8bb9-467e3bced4cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007151418s May 29 13:17:39.286: INFO: Pod "pod-efef9f5b-5ee0-4463-8bb9-467e3bced4cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010462365s STEP: Saw pod success May 29 13:17:39.286: INFO: Pod "pod-efef9f5b-5ee0-4463-8bb9-467e3bced4cd" satisfied condition "success or failure" May 29 13:17:39.288: INFO: Trying to get logs from node iruya-worker pod pod-efef9f5b-5ee0-4463-8bb9-467e3bced4cd container test-container: STEP: delete the pod May 29 13:17:39.343: INFO: Waiting for pod pod-efef9f5b-5ee0-4463-8bb9-467e3bced4cd to disappear May 29 13:17:39.351: INFO: Pod pod-efef9f5b-5ee0-4463-8bb9-467e3bced4cd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:17:39.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3318" for this suite. 
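The emptyDir test above mounts a volume on the default (node-disk) medium and asserts on its mode; a minimal sketch (pod name, image, and mount path are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # assumption; the suite uses its own mount-test image
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # default medium; medium: Memory would back it with tmpfs instead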
May 29 13:17:45.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:17:45.477: INFO: namespace emptydir-3318 deletion completed in 6.119219449s • [SLOW TEST:10.285 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:17:45.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 29 13:17:45.541: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a0da9f3-229a-4b98-aa00-f1b885592844" in namespace "downward-api-4940" to be "success or failure" May 29 13:17:45.567: INFO: Pod "downwardapi-volume-8a0da9f3-229a-4b98-aa00-f1b885592844": Phase="Pending", Reason="", readiness=false. Elapsed: 26.343973ms May 29 13:17:47.571: INFO: Pod "downwardapi-volume-8a0da9f3-229a-4b98-aa00-f1b885592844": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030612193s May 29 13:17:49.576: INFO: Pod "downwardapi-volume-8a0da9f3-229a-4b98-aa00-f1b885592844": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03498632s STEP: Saw pod success May 29 13:17:49.576: INFO: Pod "downwardapi-volume-8a0da9f3-229a-4b98-aa00-f1b885592844" satisfied condition "success or failure" May 29 13:17:49.578: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8a0da9f3-229a-4b98-aa00-f1b885592844 container client-container: STEP: delete the pod May 29 13:17:49.644: INFO: Waiting for pod downwardapi-volume-8a0da9f3-229a-4b98-aa00-f1b885592844 to disappear May 29 13:17:49.658: INFO: Pod downwardapi-volume-8a0da9f3-229a-4b98-aa00-f1b885592844 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:17:49.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4940" for this suite. 
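The downward API volume test above exposes the container's memory limit as a file in the pod; roughly (names, image, and the concrete limit are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                 # assumption; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi               # surfaced in the file below, in bytes
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory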
May 29 13:17:55.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:17:55.776: INFO: namespace downward-api-4940 deletion completed in 6.11517124s
• [SLOW TEST:10.299 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:17:55.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
May 29 13:17:55.847: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
May 29 13:17:55.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6095'
May 29 13:17:56.183: INFO: stderr: ""
May 29 13:17:56.183: INFO: stdout: "service/redis-slave created\n"
May 29 13:17:56.183: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
May 29 13:17:56.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6095'
May 29 13:17:56.486: INFO: stderr: ""
May 29 13:17:56.486: INFO: stdout: "service/redis-master created\n"
May 29 13:17:56.486: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 29 13:17:56.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6095'
May 29 13:17:56.777: INFO: stderr: ""
May 29 13:17:56.777: INFO: stdout: "service/frontend created\n"
May 29 13:17:56.778: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
May 29 13:17:56.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6095'
May 29 13:17:57.058: INFO: stderr: ""
May 29 13:17:57.058: INFO: stdout: "deployment.apps/frontend created\n"
May 29 13:17:57.058: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 29 13:17:57.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6095'
May 29 13:17:57.363: INFO: stderr: ""
May 29 13:17:57.363: INFO: stdout: "deployment.apps/redis-master created\n"
May 29 13:17:57.364: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
May 29 13:17:57.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6095'
May 29 13:17:57.667: INFO: stderr: ""
May 29 13:17:57.667: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
May 29 13:17:57.668: INFO: Waiting for all frontend pods to be Running.
May 29 13:18:07.718: INFO: Waiting for frontend to serve content.
May 29 13:18:07.736: INFO: Trying to add a new entry to the guestbook.
May 29 13:18:07.749: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 29 13:18:07.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6095'
May 29 13:18:07.963: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" May 29 13:18:07.963: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 29 13:18:07.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6095' May 29 13:18:08.585: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 29 13:18:08.585: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 29 13:18:08.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6095' May 29 13:18:08.724: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 29 13:18:08.724: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 29 13:18:08.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6095' May 29 13:18:08.825: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 29 13:18:08.825: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 29 13:18:08.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6095' May 29 13:18:08.939: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 29 13:18:08.939: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 29 13:18:08.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6095' May 29 13:18:09.084: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 29 13:18:09.085: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:18:09.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6095" for this suite. 
May 29 13:18:55.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:18:55.310: INFO: namespace kubectl-6095 deletion completed in 46.162859378s • [SLOW TEST:59.533 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:18:55.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 29 13:18:55.369: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97413c7d-b8d5-44df-8285-b7ca0f187b2b" in namespace "downward-api-6430" to be "success or failure" May 29 13:18:55.373: INFO: Pod "downwardapi-volume-97413c7d-b8d5-44df-8285-b7ca0f187b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.987148ms May 29 13:18:57.481: INFO: Pod "downwardapi-volume-97413c7d-b8d5-44df-8285-b7ca0f187b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112147387s May 29 13:18:59.486: INFO: Pod "downwardapi-volume-97413c7d-b8d5-44df-8285-b7ca0f187b2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116373185s STEP: Saw pod success May 29 13:18:59.486: INFO: Pod "downwardapi-volume-97413c7d-b8d5-44df-8285-b7ca0f187b2b" satisfied condition "success or failure" May 29 13:18:59.489: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-97413c7d-b8d5-44df-8285-b7ca0f187b2b container client-container: STEP: delete the pod May 29 13:18:59.526: INFO: Waiting for pod downwardapi-volume-97413c7d-b8d5-44df-8285-b7ca0f187b2b to disappear May 29 13:18:59.529: INFO: Pod downwardapi-volume-97413c7d-b8d5-44df-8285-b7ca0f187b2b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:18:59.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6430" for this suite. 
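The DefaultMode variant above differs from the previous downward API pod mainly in pinning the mode of the projected files; a sketch (0400 is a plausible restrictive mode for illustration, not necessarily the suite's exact value):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                 # assumption; the suite uses its own test image
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400            # every projected file gets this mode unless overridden per item
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name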
May 29 13:19:05.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:19:05.621: INFO: namespace downward-api-6430 deletion completed in 6.08906712s • [SLOW TEST:10.311 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:19:05.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 29 13:19:05.719: INFO: Pod name pod-release: Found 0 pods out of 1 May 29 13:19:10.723: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:19:11.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5242" for this suite. 
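The ReplicationController scenario above hinges on label selection; a minimal sketch (the name pod-release comes from the log, the image is illustrative):

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: nginx               # assumption; any long-running image works here

Relabeling the managed pod out of the selector (for example, kubectl label pod <pod> name=released --overwrite) makes the controller release it (the pod loses its controller ownerReference) and create a replacement to restore the replica count.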
May 29 13:19:17.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:19:17.916: INFO: namespace replication-controller-5242 deletion completed in 6.173963123s • [SLOW TEST:12.294 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:19:17.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 29 13:19:18.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3238' May 29 13:19:18.846: INFO: stderr: "" May 29 13:19:18.846: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 29 13:19:18.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3238' May 29 13:19:18.979: INFO: stderr: "" May 29 13:19:18.979: INFO: stdout: "update-demo-nautilus-kgp7x update-demo-nautilus-tdvpd " May 29 13:19:18.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kgp7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3238' May 29 13:19:19.086: INFO: stderr: "" May 29 13:19:19.086: INFO: stdout: "" May 29 13:19:19.086: INFO: update-demo-nautilus-kgp7x is created but not running May 29 13:19:24.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3238' May 29 13:19:24.190: INFO: stderr: "" May 29 13:19:24.190: INFO: stdout: "update-demo-nautilus-kgp7x update-demo-nautilus-tdvpd " May 29 13:19:24.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kgp7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3238' May 29 13:19:24.286: INFO: stderr: "" May 29 13:19:24.286: INFO: stdout: "true" May 29 13:19:24.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kgp7x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3238' May 29 13:19:24.394: INFO: stderr: "" May 29 13:19:24.394: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 29 13:19:24.394: INFO: validating pod update-demo-nautilus-kgp7x May 29 13:19:24.399: INFO: got data: { "image": "nautilus.jpg" } May 29 13:19:24.399: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 29 13:19:24.399: INFO: update-demo-nautilus-kgp7x is verified up and running May 29 13:19:24.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tdvpd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3238' May 29 13:19:24.492: INFO: stderr: "" May 29 13:19:24.492: INFO: stdout: "true" May 29 13:19:24.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tdvpd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3238' May 29 13:19:24.591: INFO: stderr: "" May 29 13:19:24.591: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 29 13:19:24.591: INFO: validating pod update-demo-nautilus-tdvpd May 29 13:19:24.595: INFO: got data: { "image": "nautilus.jpg" } May 29 13:19:24.595: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 29 13:19:24.596: INFO: update-demo-nautilus-tdvpd is verified up and running STEP: using delete to clean up resources May 29 13:19:24.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3238' May 29 13:19:24.703: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 29 13:19:24.703: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 29 13:19:24.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3238' May 29 13:19:24.813: INFO: stderr: "No resources found.\n" May 29 13:19:24.813: INFO: stdout: "" May 29 13:19:24.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3238 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 29 13:19:24.903: INFO: stderr: "" May 29 13:19:24.903: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:19:24.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3238" for this suite. 
May 29 13:19:46.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:19:46.996: INFO: namespace kubectl-3238 deletion completed in 22.090174248s • [SLOW TEST:29.079 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:19:46.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:19:47.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-570" for this suite. 
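The "always fails" kubelet case above comes down to a pod whose command exits non-zero on every start; a sketch (the pod name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]        # exits 1 immediately, so the container restarts and never goes Ready

The assertion is only that such a perpetually crashing pod can still be deleted cleanly.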
May 29 13:19:53.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:19:53.264: INFO: namespace kubelet-test-570 deletion completed in 6.091884969s • [SLOW TEST:6.267 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:19:53.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 29 13:19:53.330: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 29 13:19:53.337: INFO: Waiting for terminating namespaces to be deleted... May 29 13:19:53.340: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 29 13:19:53.344: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 29 13:19:53.344: INFO: Container kube-proxy ready: true, restart count 0 May 29 13:19:53.344: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 29 13:19:53.344: INFO: Container kindnet-cni ready: true, restart count 2 May 29 13:19:53.344: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 29 13:19:53.349: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 29 13:19:53.349: INFO: Container kindnet-cni ready: true, restart count 2 May 29 13:19:53.349: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 29 13:19:53.349: INFO: Container kube-proxy ready: true, restart count 0 May 29 13:19:53.349: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 29 13:19:53.349: INFO: Container coredns ready: true, restart count 0 May 29 13:19:53.349: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 29 13:19:53.349: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16138254a003f9c7], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
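The FailedScheduling event above is produced by a pod whose nodeSelector matches no node; a sketch (the pod name restricted-pod comes from the event, while the selector key/value and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label: nonempty                # no node carries this label, so scheduling fails on all 3 nodes
  containers:
  - name: restricted-pod
    image: k8s.gcr.io/pause:3.1    # assumption; any image works since the pod never schedules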
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:19:54.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6248" for this suite. May 29 13:20:00.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:20:00.471: INFO: namespace sched-pred-6248 deletion completed in 6.098034475s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.207 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:20:00.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-91a509e1-4987-4fe2-b500-f5f9363202d9 STEP: Creating a pod to test consume secrets May 29 13:20:00.608: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-44d725fd-ed1d-409d-87f0-245fa600f7aa" in namespace "projected-9729" to be "success or failure" May 29 13:20:00.630: INFO: Pod "pod-projected-secrets-44d725fd-ed1d-409d-87f0-245fa600f7aa": Phase="Pending", Reason="", readiness=false. Elapsed: 21.180122ms May 29 13:20:02.634: INFO: Pod "pod-projected-secrets-44d725fd-ed1d-409d-87f0-245fa600f7aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025457099s May 29 13:20:04.638: INFO: Pod "pod-projected-secrets-44d725fd-ed1d-409d-87f0-245fa600f7aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029728988s STEP: Saw pod success May 29 13:20:04.638: INFO: Pod "pod-projected-secrets-44d725fd-ed1d-409d-87f0-245fa600f7aa" satisfied condition "success or failure" May 29 13:20:04.641: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-44d725fd-ed1d-409d-87f0-245fa600f7aa container projected-secret-volume-test: STEP: delete the pod May 29 13:20:04.658: INFO: Waiting for pod pod-projected-secrets-44d725fd-ed1d-409d-87f0-245fa600f7aa to disappear May 29 13:20:04.662: INFO: Pod pod-projected-secrets-44d725fd-ed1d-409d-87f0-245fa600f7aa no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:20:04.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9729" for this suite. May 29 13:20:10.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:20:10.848: INFO: namespace projected-9729 deletion completed in 6.182301934s • [SLOW TEST:10.375 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:20:10.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9361 STEP: creating a selector STEP: Creating the service pods in kubernetes May 29 13:20:10.885: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 29 13:20:37.698: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.131:8080/dial?request=hostName&protocol=udp&host=10.244.2.230&port=8081&tries=1'] Namespace:pod-network-test-9361 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 29 13:20:37.698: INFO: >>> kubeConfig: /root/.kube/config I0529 13:20:37.727168 7 log.go:172] (0xc00091c210) (0xc0020f86e0) Create stream I0529 13:20:37.727214 7 log.go:172] (0xc00091c210) (0xc0020f86e0) Stream added, broadcasting: 1 I0529 13:20:37.729979 7 log.go:172] (0xc00091c210) Reply frame received for 1 I0529 13:20:37.730039 7 log.go:172] (0xc00091c210) (0xc000582000) Create stream I0529 13:20:37.730057 7 log.go:172] (0xc00091c210) (0xc000582000) Stream added, broadcasting: 3 I0529 13:20:37.731004 7 log.go:172] (0xc00091c210) Reply frame received for 3 I0529 13:20:37.731038 7 log.go:172] (0xc00091c210) (0xc0020f8780) Create stream I0529 13:20:37.731047 
7 log.go:172] (0xc00091c210) (0xc0020f8780) Stream added, broadcasting: 5 I0529 13:20:37.731867 7 log.go:172] (0xc00091c210) Reply frame received for 5 I0529 13:20:37.898522 7 log.go:172] (0xc00091c210) Data frame received for 3 I0529 13:20:37.898561 7 log.go:172] (0xc000582000) (3) Data frame handling I0529 13:20:37.898583 7 log.go:172] (0xc000582000) (3) Data frame sent I0529 13:20:37.899078 7 log.go:172] (0xc00091c210) Data frame received for 3 I0529 13:20:37.899111 7 log.go:172] (0xc000582000) (3) Data frame handling I0529 13:20:37.899411 7 log.go:172] (0xc00091c210) Data frame received for 5 I0529 13:20:37.899426 7 log.go:172] (0xc0020f8780) (5) Data frame handling I0529 13:20:37.901611 7 log.go:172] (0xc00091c210) Data frame received for 1 I0529 13:20:37.901650 7 log.go:172] (0xc0020f86e0) (1) Data frame handling I0529 13:20:37.901663 7 log.go:172] (0xc0020f86e0) (1) Data frame sent I0529 13:20:37.901672 7 log.go:172] (0xc00091c210) (0xc0020f86e0) Stream removed, broadcasting: 1 I0529 13:20:37.901681 7 log.go:172] (0xc00091c210) Go away received I0529 13:20:37.901859 7 log.go:172] (0xc00091c210) (0xc0020f86e0) Stream removed, broadcasting: 1 I0529 13:20:37.901885 7 log.go:172] (0xc00091c210) (0xc000582000) Stream removed, broadcasting: 3 I0529 13:20:37.901901 7 log.go:172] (0xc00091c210) (0xc0020f8780) Stream removed, broadcasting: 5 May 29 13:20:37.901: INFO: Waiting for endpoints: map[] May 29 13:20:37.915: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.131:8080/dial?request=hostName&protocol=udp&host=10.244.1.130&port=8081&tries=1'] Namespace:pod-network-test-9361 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 29 13:20:37.915: INFO: >>> kubeConfig: /root/.kube/config I0529 13:20:37.947034 7 log.go:172] (0xc000d66420) (0xc0017a05a0) Create stream I0529 13:20:37.947061 7 log.go:172] (0xc000d66420) (0xc0017a05a0) Stream added, broadcasting: 1 I0529 13:20:37.949470 7 log.go:172] (0xc000d66420) Reply frame received for 1 I0529 13:20:37.949543 7 log.go:172] (0xc000d66420) (0xc002596460) Create stream I0529 13:20:37.949581 7 log.go:172] (0xc000d66420) (0xc002596460) Stream added, broadcasting: 3 I0529 13:20:37.950489 7 log.go:172] (0xc000d66420) Reply frame received for 3 I0529 13:20:37.950533 7 log.go:172] (0xc000d66420) (0xc0005821e0) Create stream I0529 13:20:37.950540 7 log.go:172] (0xc000d66420) (0xc0005821e0) Stream added, broadcasting: 5 I0529 13:20:37.951454 7 log.go:172] (0xc000d66420) Reply frame received for 5 I0529 13:20:38.021026 7 log.go:172] (0xc000d66420) Data frame received for 3 I0529 13:20:38.021054 7 log.go:172] (0xc002596460) (3) Data frame handling I0529 13:20:38.021072 7 log.go:172] (0xc002596460) (3) Data frame sent I0529 13:20:38.022271 7 log.go:172] (0xc000d66420) Data frame received for 5 I0529 13:20:38.022301 7 log.go:172] (0xc0005821e0) (5) Data frame handling I0529 13:20:38.022341 7 log.go:172] (0xc000d66420) Data frame received for 3 I0529 13:20:38.022366 7 log.go:172] (0xc002596460) (3) Data frame handling I0529 13:20:38.024347 7 log.go:172] (0xc000d66420) Data frame received for 1 I0529 13:20:38.024381 7 log.go:172] (0xc0017a05a0) (1) Data frame handling I0529 13:20:38.024461 7 log.go:172] (0xc0017a05a0) (1) Data frame sent I0529 13:20:38.024497 7 log.go:172] (0xc000d66420) (0xc0017a05a0) Stream removed, broadcasting: 1 I0529 13:20:38.024525 7 log.go:172] (0xc000d66420) Go away received I0529 13:20:38.024713 7 log.go:172] (0xc000d66420) 
(0xc0017a05a0) Stream removed, broadcasting: 1 I0529 13:20:38.024746 7 log.go:172] (0xc000d66420) (0xc002596460) Stream removed, broadcasting: 3 I0529 13:20:38.024759 7 log.go:172] (0xc000d66420) (0xc0005821e0) Stream removed, broadcasting: 5 May 29 13:20:38.024: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:20:38.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9361" for this suite. May 29 13:21:02.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:21:02.157: INFO: namespace pod-network-test-9361 deletion completed in 24.128777552s • [SLOW TEST:51.309 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:21:02.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-794dfd25-f292-46e8-a2c4-f9d408e11352 STEP: Creating a pod to test consume configMaps May 29 13:21:02.247: INFO: Waiting up to 5m0s for pod "pod-configmaps-bbabfcd9-4ee1-4360-9e34-816041f72957" in namespace "configmap-4772" to be "success or failure" May 29 13:21:02.260: INFO: Pod "pod-configmaps-bbabfcd9-4ee1-4360-9e34-816041f72957": Phase="Pending", Reason="", readiness=false. Elapsed: 13.623351ms May 29 13:21:04.279: INFO: Pod "pod-configmaps-bbabfcd9-4ee1-4360-9e34-816041f72957": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032569582s May 29 13:21:06.283: INFO: Pod "pod-configmaps-bbabfcd9-4ee1-4360-9e34-816041f72957": Phase="Succeeded", Reason="", readiness=false. 
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:21:02.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-794dfd25-f292-46e8-a2c4-f9d408e11352
STEP: Creating a pod to test consume configMaps
May 29 13:21:02.247: INFO: Waiting up to 5m0s for pod "pod-configmaps-bbabfcd9-4ee1-4360-9e34-816041f72957" in namespace "configmap-4772" to be "success or failure"
May 29 13:21:02.260: INFO: Pod "pod-configmaps-bbabfcd9-4ee1-4360-9e34-816041f72957": Phase="Pending", Reason="", readiness=false. Elapsed: 13.623351ms
May 29 13:21:04.279: INFO: Pod "pod-configmaps-bbabfcd9-4ee1-4360-9e34-816041f72957": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032569582s
May 29 13:21:06.283: INFO: Pod "pod-configmaps-bbabfcd9-4ee1-4360-9e34-816041f72957": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035908399s
STEP: Saw pod success
May 29 13:21:06.283: INFO: Pod "pod-configmaps-bbabfcd9-4ee1-4360-9e34-816041f72957" satisfied condition "success or failure"
May 29 13:21:06.285: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-bbabfcd9-4ee1-4360-9e34-816041f72957 container configmap-volume-test:
STEP: delete the pod
May 29 13:21:06.414: INFO: Waiting for pod pod-configmaps-bbabfcd9-4ee1-4360-9e34-816041f72957 to disappear
May 29 13:21:06.436: INFO: Pod pod-configmaps-bbabfcd9-4ee1-4360-9e34-816041f72957 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:21:06.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4772" for this suite.
May 29 13:21:12.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:21:12.621: INFO: namespace configmap-4772 deletion completed in 6.181591042s
• [SLOW TEST:10.464 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
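The pod this test generates mounts the ConfigMap as a volume and cats a key back out. A minimal hand-rolled sketch of the same pattern (all names here are hypothetical, not the generated ones above):

kubectl create configmap example-cm --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: example-cm
EOF
kubectl logs cm-volume-demo   # expect: value-1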
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:21:12.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-490feddb-0129-4bcc-a6c6-cda27ca87751
STEP: Creating a pod to test consume secrets
May 29 13:21:12.798: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c749ef08-d2af-4dfb-a97d-f3b6829c61a8" in namespace "projected-9859" to be "success or failure"
May 29 13:21:12.826: INFO: Pod "pod-projected-secrets-c749ef08-d2af-4dfb-a97d-f3b6829c61a8": Phase="Pending", Reason="", readiness=false. Elapsed: 27.88366ms
May 29 13:21:14.832: INFO: Pod "pod-projected-secrets-c749ef08-d2af-4dfb-a97d-f3b6829c61a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033586529s
May 29 13:21:16.837: INFO: Pod "pod-projected-secrets-c749ef08-d2af-4dfb-a97d-f3b6829c61a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03815419s
STEP: Saw pod success
May 29 13:21:16.837: INFO: Pod "pod-projected-secrets-c749ef08-d2af-4dfb-a97d-f3b6829c61a8" satisfied condition "success or failure"
May 29 13:21:16.840: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-c749ef08-d2af-4dfb-a97d-f3b6829c61a8 container projected-secret-volume-test:
STEP: delete the pod
May 29 13:21:16.857: INFO: Waiting for pod pod-projected-secrets-c749ef08-d2af-4dfb-a97d-f3b6829c61a8 to disappear
May 29 13:21:16.876: INFO: Pod pod-projected-secrets-c749ef08-d2af-4dfb-a97d-f3b6829c61a8 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:21:16.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9859" for this suite.
May 29 13:21:22.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:21:22.991: INFO: namespace projected-9859 deletion completed in 6.112019731s
• [SLOW TEST:10.370 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
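The "with mappings" variant projects a secret key under a renamed path via items. A sketch with hypothetical names:

kubectl create secret generic example-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: example-secret
          items:
          - key: data-1
            path: new-path-data-1
EOF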
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:21:22.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4637
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 29 13:21:23.049: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 29 13:21:47.236: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.133:8080/dial?request=hostName&protocol=http&host=10.244.2.233&port=8080&tries=1'] Namespace:pod-network-test-4637 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 13:21:47.237: INFO: >>> kubeConfig: /root/.kube/config
I0529 13:21:47.270329 7 log.go:172] (0xc00318b340) (0xc0003819a0) Create stream
I0529 13:21:47.270383 7 log.go:172] (0xc00318b340) (0xc0003819a0) Stream added, broadcasting: 1
I0529 13:21:47.272394 7 log.go:172] (0xc00318b340) Reply frame received for 1
I0529 13:21:47.272453 7 log.go:172] (0xc00318b340) (0xc000583f40) Create stream
I0529 13:21:47.272482 7 log.go:172] (0xc00318b340) (0xc000583f40) Stream added, broadcasting: 3
I0529 13:21:47.273893 7 log.go:172] (0xc00318b340) Reply frame received for 3
I0529 13:21:47.273938 7 log.go:172] (0xc00318b340) (0xc000381a40) Create stream
I0529 13:21:47.273952 7 log.go:172] (0xc00318b340) (0xc000381a40) Stream added, broadcasting: 5
I0529 13:21:47.275198 7 log.go:172] (0xc00318b340) Reply frame received for 5
I0529 13:21:47.346882 7 log.go:172] (0xc00318b340) Data frame received for 3
I0529 13:21:47.346912 7 log.go:172] (0xc000583f40) (3) Data frame handling
I0529 13:21:47.346929 7 log.go:172] (0xc000583f40) (3) Data frame sent
I0529 13:21:47.347203 7 log.go:172] (0xc00318b340) Data frame received for 3
I0529 13:21:47.347228 7 log.go:172] (0xc000583f40) (3) Data frame handling
I0529 13:21:47.347249 7 log.go:172] (0xc00318b340) Data frame received for 5
I0529 13:21:47.347266 7 log.go:172] (0xc000381a40) (5) Data frame handling
I0529 13:21:47.349105 7 log.go:172] (0xc00318b340) Data frame received for 1
I0529 13:21:47.349318 7 log.go:172] (0xc0003819a0) (1) Data frame handling
I0529 13:21:47.349326 7 log.go:172] (0xc0003819a0) (1) Data frame sent
I0529 13:21:47.349334 7 log.go:172] (0xc00318b340) (0xc0003819a0) Stream removed, broadcasting: 1
I0529 13:21:47.349343 7 log.go:172] (0xc00318b340) Go away received
I0529 13:21:47.349463 7 log.go:172] (0xc00318b340) (0xc0003819a0) Stream removed, broadcasting: 1
I0529 13:21:47.349475 7 log.go:172] (0xc00318b340) (0xc000583f40) Stream removed, broadcasting: 3
I0529 13:21:47.349481 7 log.go:172] (0xc00318b340) (0xc000381a40) Stream removed, broadcasting: 5
May 29 13:21:47.349: INFO: Waiting for endpoints: map[]
May 29 13:21:47.352: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.133:8080/dial?request=hostName&protocol=http&host=10.244.1.132&port=8080&tries=1'] Namespace:pod-network-test-4637 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 13:21:47.352: INFO: >>> kubeConfig: /root/.kube/config
I0529 13:21:47.380843 7 log.go:172] (0xc001b14b00) (0xc002bbb4a0) Create stream
I0529 13:21:47.380876 7 log.go:172] (0xc001b14b00) (0xc002bbb4a0) Stream added, broadcasting: 1
I0529 13:21:47.383079 7 log.go:172] (0xc001b14b00) Reply frame received for 1
I0529 13:21:47.383128 7 log.go:172] (0xc001b14b00) (0xc0009f57c0) Create stream
I0529 13:21:47.383141 7 log.go:172] (0xc001b14b00) (0xc0009f57c0) Stream added, broadcasting: 3
I0529 13:21:47.383894 7 log.go:172] (0xc001b14b00) Reply frame received for 3
I0529 13:21:47.383921 7 log.go:172] (0xc001b14b00) (0xc0009f5900) Create stream
I0529 13:21:47.383929 7 log.go:172] (0xc001b14b00) (0xc0009f5900) Stream added, broadcasting: 5
I0529 13:21:47.384629 7 log.go:172] (0xc001b14b00) Reply frame received for 5
I0529 13:21:47.543186 7 log.go:172] (0xc001b14b00) Data frame received for 3
I0529 13:21:47.543210 7 log.go:172] (0xc0009f57c0) (3) Data frame handling
I0529 13:21:47.543221 7 log.go:172] (0xc0009f57c0) (3) Data frame sent
I0529 13:21:47.543670 7 log.go:172] (0xc001b14b00) Data frame received for 5
I0529 13:21:47.543700 7 log.go:172] (0xc0009f5900) (5) Data frame handling
I0529 13:21:47.543759 7 log.go:172] (0xc001b14b00) Data frame received for 3
I0529 13:21:47.543809 7 log.go:172] (0xc0009f57c0) (3) Data frame handling
I0529 13:21:47.545360 7 log.go:172] (0xc001b14b00) Data frame received for 1
I0529 13:21:47.545377 7 log.go:172] (0xc002bbb4a0) (1) Data frame handling
I0529 13:21:47.545392 7 log.go:172] (0xc002bbb4a0) (1) Data frame sent
I0529 13:21:47.545409 7 log.go:172] (0xc001b14b00) (0xc002bbb4a0) Stream removed, broadcasting: 1
I0529 13:21:47.545422 7 log.go:172] (0xc001b14b00) Go away received
I0529 13:21:47.545593 7 log.go:172] (0xc001b14b00) (0xc002bbb4a0) Stream removed, broadcasting: 1
I0529 13:21:47.545601 7 log.go:172] (0xc001b14b00) (0xc0009f57c0) Stream removed, broadcasting: 3
I0529 13:21:47.545606 7 log.go:172] (0xc001b14b00) (0xc0009f5900) Stream removed, broadcasting: 5
May 29 13:21:47.545: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:21:47.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4637" for this suite.
May 29 13:22:11.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:22:11.672: INFO: namespace pod-network-test-4637 deletion completed in 24.12367048s
• [SLOW TEST:48.681 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
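Same probe shape as the UDP case, with protocol=http. The by-hand equivalent of the first endpoint checked above (run-specific namespace, pod name, and IPs from this log):

kubectl exec -n pod-network-test-4637 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.1.133:8080/dial?request=hostName&protocol=http&host=10.244.2.233&port=8080&tries=1'"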
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:22:11.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 29 13:22:11.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-150f4455-b720-4687-9f2a-29f62fc392c0" in namespace "projected-4392" to be "success or failure"
May 29 13:22:11.791: INFO: Pod "downwardapi-volume-150f4455-b720-4687-9f2a-29f62fc392c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.435382ms
May 29 13:22:13.795: INFO: Pod "downwardapi-volume-150f4455-b720-4687-9f2a-29f62fc392c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007861472s
May 29 13:22:15.800: INFO: Pod "downwardapi-volume-150f4455-b720-4687-9f2a-29f62fc392c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012270116s
STEP: Saw pod success
May 29 13:22:15.800: INFO: Pod "downwardapi-volume-150f4455-b720-4687-9f2a-29f62fc392c0" satisfied condition "success or failure"
May 29 13:22:15.802: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-150f4455-b720-4687-9f2a-29f62fc392c0 container client-container:
STEP: delete the pod
May 29 13:22:15.846: INFO: Waiting for pod downwardapi-volume-150f4455-b720-4687-9f2a-29f62fc392c0 to disappear
May 29 13:22:15.851: INFO: Pod downwardapi-volume-150f4455-b720-4687-9f2a-29f62fc392c0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:22:15.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4392" for this suite.
May 29 13:22:21.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:22:21.948: INFO: namespace projected-4392 deletion completed in 6.093866291s
• [SLOW TEST:10.275 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
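The file the container reads is a projected downwardAPI item backed by a resourceFieldRef. A sketch with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
kubectl logs downwardapi-volume-demo   # prints the request in bytes, e.g. 33554432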
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:22:21.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 29 13:22:22.004: INFO: Waiting up to 5m0s for pod "downward-api-d08d8301-d467-4507-b898-3c07a2e20eea" in namespace "downward-api-7596" to be "success or failure"
May 29 13:22:22.006: INFO: Pod "downward-api-d08d8301-d467-4507-b898-3c07a2e20eea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.719038ms
May 29 13:22:24.012: INFO: Pod "downward-api-d08d8301-d467-4507-b898-3c07a2e20eea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007940219s
May 29 13:22:26.016: INFO: Pod "downward-api-d08d8301-d467-4507-b898-3c07a2e20eea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012505045s
STEP: Saw pod success
May 29 13:22:26.016: INFO: Pod "downward-api-d08d8301-d467-4507-b898-3c07a2e20eea" satisfied condition "success or failure"
May 29 13:22:26.019: INFO: Trying to get logs from node iruya-worker pod downward-api-d08d8301-d467-4507-b898-3c07a2e20eea container dapi-container:
STEP: delete the pod
May 29 13:22:26.073: INFO: Waiting for pod downward-api-d08d8301-d467-4507-b898-3c07a2e20eea to disappear
May 29 13:22:26.084: INFO: Pod downward-api-d08d8301-d467-4507-b898-3c07a2e20eea no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:22:26.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7596" for this suite.
May 29 13:22:32.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:22:32.200: INFO: namespace downward-api-7596 deletion completed in 6.112576266s
• [SLOW TEST:10.251 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
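The env-var flavour of the downward API uses fieldRef instead of a volume. A sketch with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo POD_NAME=$POD_NAME POD_NAMESPACE=$POD_NAMESPACE POD_IP=$POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
kubectl logs dapi-env-demo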
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:22:32.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:22:32.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6402" for this suite.
May 29 13:22:54.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:22:54.435: INFO: namespace pods-6402 deletion completed in 22.097995469s
• [SLOW TEST:22.234 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
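The QOS class is derived by the API server from the pod's resource spec, never set by the client. A quick way to see the same check by hand (hypothetical pod name; assumes no LimitRange is injecting default requests):

# No requests or limits on any container => BestEffort
kubectl run qos-demo --image=busybox:1.29 --restart=Never -- sleep 3600
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # BestEffort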
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:22:54.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 29 13:22:54.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1263'
May 29 13:22:54.634: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 29 13:22:54.634: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
May 29 13:22:54.641: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-lq68p]
May 29 13:22:54.642: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-lq68p" in namespace "kubectl-1263" to be "running and ready"
May 29 13:22:54.671: INFO: Pod "e2e-test-nginx-rc-lq68p": Phase="Pending", Reason="", readiness=false. Elapsed: 29.006556ms
May 29 13:22:56.755: INFO: Pod "e2e-test-nginx-rc-lq68p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113226738s
May 29 13:22:58.759: INFO: Pod "e2e-test-nginx-rc-lq68p": Phase="Running", Reason="", readiness=true. Elapsed: 4.117260705s
May 29 13:22:58.759: INFO: Pod "e2e-test-nginx-rc-lq68p" satisfied condition "running and ready"
May 29 13:22:58.759: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-lq68p]
May 29 13:22:58.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-1263'
May 29 13:22:58.913: INFO: stderr: ""
May 29 13:22:58.913: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
May 29 13:22:58.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1263'
May 29 13:22:59.024: INFO: stderr: ""
May 29 13:22:59.024: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:22:59.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1263" for this suite.
May 29 13:23:05.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:23:05.143: INFO: namespace kubectl-1263 deletion completed in 6.115316779s
• [SLOW TEST:10.708 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
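To reproduce this check outside the suite on a cluster of this vintage (the run/v1 generator is gone from later kubectl releases, as the deprecation warning above notes):

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl get rc e2e-test-nginx-rc            # the rc exists
kubectl get pods -l run=e2e-test-nginx-rc   # and it spawned a pod
kubectl logs rc/e2e-test-nginx-rc           # empty stdout just means nginx has logged nothing yet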
\"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-4018\",\n \"resourceVersion\": \"13547704\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4018/pods/e2e-test-nginx-pod\",\n \"uid\": \"2776c3d8-bf0f-4eb2-9997-560cca241b5c\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-sz4pz\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-sz4pz\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-sz4pz\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-29T13:23:05Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-29T13:23:09Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-29T13:23:09Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-29T13:23:05Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://109c900e4449fd508007fdf0bd7ed75c6c2395923ac3d703c67f34a0f12b5ff4\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-29T13:23:08Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.135\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-29T13:23:05Z\"\n }\n}\n" STEP: replace the image in the pod May 29 13:23:10.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4018' May 29 13:23:10.737: INFO: stderr: "" May 29 13:23:10.737: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 May 29 13:23:10.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4018' May 29 13:23:21.868: INFO: stderr: "" May 29 13:23:21.868: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:23:21.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4018" for this suite. May 29 13:23:27.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:23:27.993: INFO: namespace kubectl-4018 deletion completed in 6.093445212s • [SLOW TEST:22.850 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:23:27.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 29 13:23:38.186: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4349 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 29 13:23:38.186: INFO: >>> kubeConfig: /root/.kube/config I0529 13:23:38.226003 7 log.go:172] (0xc0024e20b0) (0xc001633720) Create stream I0529 13:23:38.226033 7 log.go:172] (0xc0024e20b0) (0xc001633720) Stream added, broadcasting: 1 I0529 13:23:38.228418 7 log.go:172] (0xc0024e20b0) Reply frame received for 1 I0529 13:23:38.228467 7 log.go:172] (0xc0024e20b0) (0xc0023c8c80) Create stream I0529 13:23:38.228477 7 log.go:172] (0xc0024e20b0) (0xc0023c8c80) Stream added, broadcasting: 3 I0529 13:23:38.229775 7 log.go:172] (0xc0024e20b0) Reply frame received for 3 I0529 13:23:38.229827 7 log.go:172] (0xc0024e20b0) (0xc0016337c0) Create stream I0529 13:23:38.229850 7 log.go:172] (0xc0024e20b0) (0xc0016337c0) Stream added, broadcasting: 5 I0529 13:23:38.230991 7 log.go:172] (0xc0024e20b0) Reply frame received for 5 I0529 13:23:38.297401 7 log.go:172] (0xc0024e20b0) Data frame received for 3 I0529 13:23:38.297444 7 log.go:172] (0xc0024e20b0) Data frame received for 5 I0529 13:23:38.297472 7 log.go:172] (0xc0016337c0) (5) Data frame handling I0529 13:23:38.297503 7 log.go:172] (0xc0023c8c80) (3) Data frame handling I0529 13:23:38.297521 7 log.go:172] (0xc0023c8c80) (3) Data frame sent I0529 13:23:38.297534 7 log.go:172] (0xc0024e20b0) Data frame received for 3 I0529 13:23:38.297543 7 log.go:172] (0xc0023c8c80) (3) Data frame handling I0529 13:23:38.299047 7 
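The replace step pipes an edited copy of the live manifest back through kubectl. A hand-rolled equivalent of the same round-trip (the sed swap stands in for the test's in-memory edit; replace can fail with a conflict if the pod changes between the get and the replace):

kubectl get pod e2e-test-nginx-pod -n kubectl-4018 -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f -
kubectl get pod e2e-test-nginx-pod -n kubectl-4018 -o jsonpath='{.spec.containers[0].image}'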
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 13:23:27.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May 29 13:23:38.186: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4349 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 13:23:38.186: INFO: >>> kubeConfig: /root/.kube/config
I0529 13:23:38.226003 7 log.go:172] (0xc0024e20b0) (0xc001633720) Create stream
I0529 13:23:38.226033 7 log.go:172] (0xc0024e20b0) (0xc001633720) Stream added, broadcasting: 1
I0529 13:23:38.228418 7 log.go:172] (0xc0024e20b0) Reply frame received for 1
I0529 13:23:38.228467 7 log.go:172] (0xc0024e20b0) (0xc0023c8c80) Create stream
I0529 13:23:38.228477 7 log.go:172] (0xc0024e20b0) (0xc0023c8c80) Stream added, broadcasting: 3
I0529 13:23:38.229775 7 log.go:172] (0xc0024e20b0) Reply frame received for 3
I0529 13:23:38.229827 7 log.go:172] (0xc0024e20b0) (0xc0016337c0) Create stream
I0529 13:23:38.229850 7 log.go:172] (0xc0024e20b0) (0xc0016337c0) Stream added, broadcasting: 5
I0529 13:23:38.230991 7 log.go:172] (0xc0024e20b0) Reply frame received for 5
I0529 13:23:38.297401 7 log.go:172] (0xc0024e20b0) Data frame received for 3
I0529 13:23:38.297444 7 log.go:172] (0xc0024e20b0) Data frame received for 5
I0529 13:23:38.297472 7 log.go:172] (0xc0016337c0) (5) Data frame handling
I0529 13:23:38.297503 7 log.go:172] (0xc0023c8c80) (3) Data frame handling
I0529 13:23:38.297521 7 log.go:172] (0xc0023c8c80) (3) Data frame sent
I0529 13:23:38.297534 7 log.go:172] (0xc0024e20b0) Data frame received for 3
I0529 13:23:38.297543 7 log.go:172] (0xc0023c8c80) (3) Data frame handling
I0529 13:23:38.299047 7 log.go:172] (0xc0024e20b0) Data frame received for 1
I0529 13:23:38.299087 7 log.go:172] (0xc001633720) (1) Data frame handling
I0529 13:23:38.299128 7 log.go:172] (0xc001633720) (1) Data frame sent
I0529 13:23:38.299151 7 log.go:172] (0xc0024e20b0) (0xc001633720) Stream removed, broadcasting: 1
I0529 13:23:38.299176 7 log.go:172] (0xc0024e20b0) Go away received
I0529 13:23:38.299305 7 log.go:172] (0xc0024e20b0) (0xc001633720) Stream removed, broadcasting: 1
I0529 13:23:38.299319 7 log.go:172] (0xc0024e20b0) (0xc0023c8c80) Stream removed, broadcasting: 3
I0529 13:23:38.299325 7 log.go:172] (0xc0024e20b0) (0xc0016337c0) Stream removed, broadcasting: 5
May 29 13:23:38.299: INFO: Exec stderr: ""
May 29 13:23:38.299: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4349 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 13:23:38.299: INFO: >>> kubeConfig: /root/.kube/config
I0529 13:23:38.331602 7 log.go:172] (0xc002622210) (0xc001c1e1e0) Create stream
I0529 13:23:38.331633 7 log.go:172] (0xc002622210) (0xc001c1e1e0) Stream added, broadcasting: 1
I0529 13:23:38.333651 7 log.go:172] (0xc002622210) Reply frame received for 1
I0529 13:23:38.333707 7 log.go:172] (0xc002622210) (0xc0023c8d20) Create stream
I0529 13:23:38.333723 7 log.go:172] (0xc002622210) (0xc0023c8d20) Stream added, broadcasting: 3
I0529 13:23:38.334864 7 log.go:172] (0xc002622210) Reply frame received for 3
I0529 13:23:38.334907 7 log.go:172] (0xc002622210) (0xc001c1e280) Create stream
I0529 13:23:38.334922 7 log.go:172] (0xc002622210) (0xc001c1e280) Stream added, broadcasting: 5
I0529 13:23:38.335902 7 log.go:172] (0xc002622210) Reply frame received for 5
I0529 13:23:38.390685 7 log.go:172] (0xc002622210) Data frame received for 3
I0529 13:23:38.390719 7 log.go:172] (0xc0023c8d20) (3) Data frame handling
I0529 13:23:38.390732 7 log.go:172] (0xc0023c8d20) (3) Data frame sent
I0529 13:23:38.390748 7 log.go:172] (0xc002622210) Data frame received for 3
I0529 13:23:38.390756 7 log.go:172] (0xc0023c8d20) (3) Data frame handling
I0529 13:23:38.390779 7 log.go:172] (0xc002622210) Data frame received for 5
I0529 13:23:38.390787 7 log.go:172] (0xc001c1e280) (5) Data frame handling
I0529 13:23:38.391800 7 log.go:172] (0xc002622210) Data frame received for 1
I0529 13:23:38.391812 7 log.go:172] (0xc001c1e1e0) (1) Data frame handling
I0529 13:23:38.391824 7 log.go:172] (0xc001c1e1e0) (1) Data frame sent
I0529 13:23:38.391834 7 log.go:172] (0xc002622210) (0xc001c1e1e0) Stream removed, broadcasting: 1
I0529 13:23:38.391846 7 log.go:172] (0xc002622210) Go away received
I0529 13:23:38.391981 7 log.go:172] (0xc002622210) (0xc001c1e1e0) Stream removed, broadcasting: 1
I0529 13:23:38.392010 7 log.go:172] (0xc002622210) (0xc0023c8d20) Stream removed, broadcasting: 3
I0529 13:23:38.392021 7 log.go:172] (0xc002622210) (0xc001c1e280) Stream removed, broadcasting: 5
May 29 13:23:38.392: INFO: Exec stderr: ""
May 29 13:23:38.392: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4349 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 13:23:38.392: INFO: >>> kubeConfig: /root/.kube/config
I0529 13:23:38.440232 7 log.go:172] (0xc002622c60) (0xc001c1e780) Create stream
I0529 13:23:38.440259 7 log.go:172] (0xc002622c60) (0xc001c1e780) Stream added, broadcasting: 1
I0529 13:23:38.442344 7 log.go:172] (0xc002622c60) Reply frame received for 1
I0529 13:23:38.442395 7 log.go:172] (0xc002622c60) (0xc0023c8dc0) Create stream
I0529 13:23:38.442408 7 log.go:172] (0xc002622c60) (0xc0023c8dc0) Stream added, broadcasting: 3
I0529 13:23:38.443571 7 log.go:172] (0xc002622c60) Reply frame received for 3
I0529 13:23:38.443610 7 log.go:172] (0xc002622c60) (0xc00261ed20) Create stream
I0529 13:23:38.443632 7 log.go:172] (0xc002622c60) (0xc00261ed20) Stream added, broadcasting: 5
I0529 13:23:38.444574 7 log.go:172] (0xc002622c60) Reply frame received for 5
I0529 13:23:38.512509 7 log.go:172] (0xc002622c60) Data frame received for 3
I0529 13:23:38.512550 7 log.go:172] (0xc0023c8dc0) (3) Data frame handling
I0529 13:23:38.512578 7 log.go:172] (0xc0023c8dc0) (3) Data frame sent
I0529 13:23:38.512590 7 log.go:172] (0xc002622c60) Data frame received for 3
I0529 13:23:38.512603 7 log.go:172] (0xc0023c8dc0) (3) Data frame handling
I0529 13:23:38.512659 7 log.go:172] (0xc002622c60) Data frame received for 5
I0529 13:23:38.512698 7 log.go:172] (0xc00261ed20) (5) Data frame handling
I0529 13:23:38.514333 7 log.go:172] (0xc002622c60) Data frame received for 1
I0529 13:23:38.514385 7 log.go:172] (0xc001c1e780) (1) Data frame handling
I0529 13:23:38.514414 7 log.go:172] (0xc001c1e780) (1) Data frame sent
I0529 13:23:38.514432 7 log.go:172] (0xc002622c60) (0xc001c1e780) Stream removed, broadcasting: 1
I0529 13:23:38.514448 7 log.go:172] (0xc002622c60) Go away received
I0529 13:23:38.514625 7 log.go:172] (0xc002622c60) (0xc001c1e780) Stream removed, broadcasting: 1
I0529 13:23:38.514648 7 log.go:172] (0xc002622c60) (0xc0023c8dc0) Stream removed, broadcasting: 3
I0529 13:23:38.514661 7 log.go:172] (0xc002622c60) (0xc00261ed20) Stream removed, broadcasting: 5
May 29 13:23:38.514: INFO: Exec stderr: ""
May 29 13:23:38.514: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4349 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 13:23:38.514: INFO: >>> kubeConfig: /root/.kube/config
I0529 13:23:38.546292 7 log.go:172] (0xc002623810) (0xc001c1ed20) Create stream
I0529 13:23:38.546314 7 log.go:172] (0xc002623810) (0xc001c1ed20) Stream added, broadcasting: 1
I0529 13:23:38.548104 7 log.go:172] (0xc002623810) Reply frame received for 1
I0529 13:23:38.548142 7 log.go:172] (0xc002623810) (0xc001c1edc0) Create stream
I0529 13:23:38.548156 7 log.go:172] (0xc002623810) (0xc001c1edc0) Stream added, broadcasting: 3
I0529 13:23:38.549485 7 log.go:172] (0xc002623810) Reply frame received for 3
I0529 13:23:38.549502 7 log.go:172] (0xc002623810) (0xc0023c8e60) Create stream
I0529 13:23:38.549510 7 log.go:172] (0xc002623810) (0xc0023c8e60) Stream added, broadcasting: 5
I0529 13:23:38.550341 7 log.go:172] (0xc002623810) Reply frame received for 5
I0529 13:23:38.618977 7 log.go:172] (0xc002623810) Data frame received for 5
I0529 13:23:38.619022 7 log.go:172] (0xc0023c8e60) (5) Data frame handling
I0529 13:23:38.619053 7 log.go:172] (0xc002623810) Data frame received for 3
I0529 13:23:38.619070 7 log.go:172] (0xc001c1edc0) (3) Data frame handling
I0529 13:23:38.619085 7 log.go:172] (0xc001c1edc0) (3) Data frame sent
I0529 13:23:38.619097 7 log.go:172] (0xc002623810) Data frame received for 3
I0529 13:23:38.619107 7 log.go:172] (0xc001c1edc0) (3) Data frame handling
I0529 13:23:38.620455 7 log.go:172] (0xc002623810) Data frame received for 1
I0529 13:23:38.620473 7 log.go:172] (0xc001c1ed20) (1) Data frame handling
I0529 13:23:38.620486 7 log.go:172] (0xc001c1ed20) (1) Data frame sent
I0529 13:23:38.620501 7 log.go:172] (0xc002623810) (0xc001c1ed20) Stream removed, broadcasting: 1
I0529 13:23:38.620520 7 log.go:172] (0xc002623810) Go away received
I0529 13:23:38.620671 7 log.go:172] (0xc002623810) (0xc001c1ed20) Stream removed, broadcasting: 1
I0529 13:23:38.620704 7 log.go:172] (0xc002623810) (0xc001c1edc0) Stream removed, broadcasting: 3
I0529 13:23:38.620728 7 log.go:172] (0xc002623810) (0xc0023c8e60) Stream removed, broadcasting: 5
May 29 13:23:38.620: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May 29 13:23:38.620: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4349 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 13:23:38.620: INFO: >>> kubeConfig: /root/.kube/config
I0529 13:23:38.654084 7 log.go:172] (0xc002ba88f0) (0xc0023c9220) Create stream
I0529 13:23:38.654116 7 log.go:172] (0xc002ba88f0) (0xc0023c9220) Stream added, broadcasting: 1
I0529 13:23:38.657093 7 log.go:172] (0xc002ba88f0) Reply frame received for 1
I0529 13:23:38.657284 7 log.go:172] (0xc002ba88f0) (0xc0023c92c0) Create stream
I0529 13:23:38.657300 7 log.go:172] (0xc002ba88f0) (0xc0023c92c0) Stream added, broadcasting: 3
I0529 13:23:38.658750 7 log.go:172] (0xc002ba88f0) Reply frame received for 3
I0529 13:23:38.658798 7 log.go:172] (0xc002ba88f0) (0xc001633860) Create stream
I0529 13:23:38.658811 7 log.go:172] (0xc002ba88f0) (0xc001633860) Stream added, broadcasting: 5
I0529 13:23:38.659739 7 log.go:172] (0xc002ba88f0) Reply frame received for 5
I0529 13:23:38.707124 7 log.go:172] (0xc002ba88f0) Data frame received for 5
I0529 13:23:38.707158 7 log.go:172] (0xc002ba88f0) Data frame received for 3
I0529 13:23:38.707183 7 log.go:172] (0xc0023c92c0) (3) Data frame handling
I0529 13:23:38.707199 7 log.go:172] (0xc0023c92c0) (3) Data frame sent
I0529 13:23:38.707207 7 log.go:172] (0xc002ba88f0) Data frame received for 3
I0529 13:23:38.707216 7 log.go:172] (0xc0023c92c0) (3) Data frame handling
I0529 13:23:38.707238 7 log.go:172] (0xc001633860) (5) Data frame handling
I0529 13:23:38.708698 7 log.go:172] (0xc002ba88f0) Data frame received for 1
I0529 13:23:38.708722 7 log.go:172] (0xc0023c9220) (1) Data frame handling
I0529 13:23:38.708736 7 log.go:172] (0xc0023c9220) (1) Data frame sent
I0529 13:23:38.708755 7 log.go:172] (0xc002ba88f0) (0xc0023c9220) Stream removed, broadcasting: 1
I0529 13:23:38.708771 7 log.go:172] (0xc002ba88f0) Go away received
I0529 13:23:38.708973 7 log.go:172] (0xc002ba88f0) (0xc0023c9220) Stream removed, broadcasting: 1
I0529 13:23:38.709000 7 log.go:172] (0xc002ba88f0) (0xc0023c92c0) Stream removed, broadcasting: 3
I0529 13:23:38.709019 7 log.go:172] (0xc002ba88f0) (0xc001633860) Stream removed, broadcasting: 5
May 29 13:23:38.709: INFO: Exec stderr: ""
May 29 13:23:38.709: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4349 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 13:23:38.709: INFO: >>> kubeConfig: /root/.kube/config
I0529 13:23:38.782118 7 log.go:172] (0xc0027e6b00) (0xc001b33900) Create stream
I0529 13:23:38.782155 7 log.go:172] (0xc0027e6b00) (0xc001b33900) Stream added, broadcasting: 1
I0529 13:23:38.785563 7 log.go:172] (0xc0027e6b00) Reply frame received for 1
I0529 13:23:38.785617 7 log.go:172] (0xc0027e6b00) (0xc001633900) Create stream
I0529 13:23:38.785638 7 log.go:172] (0xc0027e6b00) (0xc001633900) Stream added, broadcasting: 3
I0529 13:23:38.786530 7 log.go:172] (0xc0027e6b00) Reply frame received for 3
I0529 13:23:38.786566 7 log.go:172] (0xc0027e6b00) (0xc001b339a0) Create stream
I0529 13:23:38.786580 7 log.go:172] (0xc0027e6b00) (0xc001b339a0) Stream added, broadcasting: 5
I0529 13:23:38.787734 7 log.go:172] (0xc0027e6b00) Reply frame received for 5
I0529 13:23:38.849781 7 log.go:172] (0xc0027e6b00) Data frame received for 5
I0529 13:23:38.849815 7 log.go:172] (0xc001b339a0) (5) Data frame handling
I0529 13:23:38.849832 7 log.go:172] (0xc0027e6b00) Data frame received for 3
I0529 13:23:38.849836 7 log.go:172] (0xc001633900) (3) Data frame handling
I0529 13:23:38.849843 7 log.go:172] (0xc001633900) (3) Data frame sent
I0529 13:23:38.849849 7 log.go:172] (0xc0027e6b00) Data frame received for 3
I0529 13:23:38.849854 7 log.go:172] (0xc001633900) (3) Data frame handling
I0529 13:23:38.851177 7 log.go:172] (0xc0027e6b00) Data frame received for 1
I0529 13:23:38.851222 7 log.go:172] (0xc001b33900) (1) Data frame handling
I0529 13:23:38.851237 7 log.go:172] (0xc001b33900) (1) Data frame sent
I0529 13:23:38.851252 7 log.go:172] (0xc0027e6b00) (0xc001b33900) Stream removed, broadcasting: 1
I0529 13:23:38.851270 7 log.go:172] (0xc0027e6b00) Go away received
I0529 13:23:38.851624 7 log.go:172] (0xc0027e6b00) (0xc001b33900) Stream removed, broadcasting: 1
I0529 13:23:38.851644 7 log.go:172] (0xc0027e6b00) (0xc001633900) Stream removed, broadcasting: 3
I0529 13:23:38.851656 7 log.go:172] (0xc0027e6b00) (0xc001b339a0) Stream removed, broadcasting: 5
May 29 13:23:38.851: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May 29 13:23:38.851: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4349 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 13:23:38.851: INFO: >>> kubeConfig: /root/.kube/config
I0529 13:23:38.877249 7 log.go:172] (0xc0027e7d90) (0xc001b33d60) Create stream
I0529 13:23:38.877309 7 log.go:172] (0xc0027e7d90) (0xc001b33d60) Stream added, broadcasting: 1
I0529 13:23:38.880189 7 log.go:172] (0xc0027e7d90) Reply frame received for 1
I0529 13:23:38.880228 7 log.go:172] (0xc0027e7d90) (0xc001c1ee60) Create stream
I0529 13:23:38.880245 7 log.go:172] (0xc0027e7d90) (0xc001c1ee60) Stream added, broadcasting: 3
I0529 13:23:38.881297 7 log.go:172] (0xc0027e7d90) Reply frame received for 3
I0529 13:23:38.881350 7 log.go:172] (0xc0027e7d90) (0xc001c1ef00) Create stream
I0529 13:23:38.881370 7 log.go:172] (0xc0027e7d90) (0xc001c1ef00) Stream added, broadcasting: 5
I0529 13:23:38.882185 7 log.go:172] (0xc0027e7d90) Reply frame received for 5
I0529 13:23:38.934168 7 log.go:172] (0xc0027e7d90) Data frame received for 5
I0529 13:23:38.934220 7 log.go:172] (0xc001c1ef00) (5) Data frame handling
I0529 13:23:38.934253 7 log.go:172] (0xc0027e7d90) Data frame received for 3
I0529 13:23:38.934288 7 log.go:172] (0xc001c1ee60) (3) Data frame handling
I0529 13:23:38.934322 7 log.go:172] (0xc001c1ee60) (3) Data frame sent
I0529 13:23:38.934357 7 log.go:172] (0xc0027e7d90) Data frame received for 3
I0529 13:23:38.934382 7 log.go:172] (0xc001c1ee60) (3) Data frame handling
I0529 13:23:38.935822 7 log.go:172] (0xc0027e7d90) Data frame received for 1
I0529 13:23:38.935847 7 log.go:172] (0xc001b33d60) (1) Data frame handling
I0529 13:23:38.935862 7 log.go:172] (0xc001b33d60) (1) Data frame sent
I0529 13:23:38.935875 7 log.go:172] (0xc0027e7d90) (0xc001b33d60) Stream removed, broadcasting: 1
I0529 13:23:38.935888 7 log.go:172] (0xc0027e7d90) Go away received
I0529 13:23:38.936074 7 log.go:172] (0xc0027e7d90) (0xc001b33d60) Stream removed, broadcasting: 1
I0529 13:23:38.936104 7 log.go:172] (0xc0027e7d90) (0xc001c1ee60) Stream removed, broadcasting: 3
I0529 13:23:38.936116 7 log.go:172] (0xc0027e7d90) (0xc001c1ef00) Stream removed, broadcasting: 5
May 29 13:23:38.936: INFO: Exec stderr: ""
May 29 13:23:38.936: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4349 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 13:23:38.936: INFO: >>> kubeConfig: /root/.kube/config
I0529 13:23:38.978767 7 log.go:172] (0xc002d74790) (0xc001c1f360) Create stream
I0529 13:23:38.978793 7 log.go:172] (0xc002d74790) (0xc001c1f360) Stream added, broadcasting: 1
I0529 13:23:38.981856 7 log.go:172] (0xc002d74790) Reply frame received for 1
I0529 13:23:38.981898 7 log.go:172] (0xc002d74790) (0xc0016339a0) Create stream
I0529 13:23:38.981910 7 log.go:172] (0xc002d74790) (0xc0016339a0) Stream added, broadcasting: 3
I0529 13:23:38.983028 7 log.go:172] (0xc002d74790) Reply frame received for 3
I0529 13:23:38.983063 7 log.go:172] (0xc002d74790) (0xc001633a40) Create stream
I0529 13:23:38.983072 7 log.go:172] (0xc002d74790) (0xc001633a40) Stream added, broadcasting: 5
I0529 13:23:38.983825 7 log.go:172] (0xc002d74790) Reply frame received for 5
I0529 13:23:39.043527 7 log.go:172] (0xc002d74790) Data frame received for 5
I0529 13:23:39.043580 7 log.go:172] (0xc001633a40) (5) Data frame handling
I0529 13:23:39.043723 7 log.go:172] (0xc002d74790) Data frame received for 3
I0529 13:23:39.043763 7 log.go:172] (0xc0016339a0) (3) Data frame handling
I0529 13:23:39.043792 7 log.go:172] (0xc0016339a0) (3) Data frame sent
I0529 13:23:39.043807 7 log.go:172] (0xc002d74790) Data frame received for 3
I0529 13:23:39.043823 7 log.go:172] (0xc0016339a0) (3) Data frame handling
I0529 13:23:39.045596 7 log.go:172] (0xc002d74790) Data frame received for 1
I0529 13:23:39.045629 7 log.go:172] (0xc001c1f360) (1) Data frame handling
I0529 13:23:39.045647 7 log.go:172] (0xc001c1f360) (1) Data frame sent
I0529 13:23:39.045658 7 log.go:172] (0xc002d74790) (0xc001c1f360) Stream removed, broadcasting: 1
I0529 13:23:39.045773 7 log.go:172] (0xc002d74790) (0xc001c1f360) Stream removed, broadcasting: 1
I0529 13:23:39.045800 7 log.go:172] (0xc002d74790) (0xc0016339a0) Stream removed, broadcasting: 3
I0529 13:23:39.045830 7 log.go:172] (0xc002d74790) Go away received
I0529 13:23:39.045860 7 log.go:172] (0xc002d74790) (0xc001633a40) Stream removed, broadcasting: 5
May 29 13:23:39.045: INFO: Exec stderr: ""
May 29 13:23:39.045: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4349 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 13:23:39.045: INFO: >>> kubeConfig: /root/.kube/config
I0529 13:23:39.084504 7 log.go:172] (0xc0024e3b80) (0xc001633cc0) Create stream
I0529 13:23:39.084534 7 log.go:172] (0xc0024e3b80) (0xc001633cc0) Stream added, broadcasting: 1
I0529 13:23:39.087588 7 log.go:172] (0xc0024e3b80) Reply frame received for 1
I0529 13:23:39.087621 7 log.go:172] (0xc0024e3b80) (0xc001c1f5e0) Create stream
I0529 13:23:39.087632 7 log.go:172] (0xc0024e3b80) (0xc001c1f5e0) Stream added, broadcasting: 3
I0529 13:23:39.088648 7 log.go:172] (0xc0024e3b80) Reply frame received for 3
I0529 13:23:39.088807 7 log.go:172] (0xc0024e3b80) (0xc001b33e00) Create stream
I0529 13:23:39.088823 7 log.go:172] (0xc0024e3b80) (0xc001b33e00) Stream added, broadcasting: 5
I0529 13:23:39.090110 7 log.go:172] (0xc0024e3b80) Reply frame received for 5
I0529 13:23:39.156678 7 log.go:172] (0xc0024e3b80) Data frame received for 5
I0529 13:23:39.156718 7 log.go:172] (0xc001b33e00) (5) Data frame handling
I0529 13:23:39.156741 7 log.go:172] (0xc0024e3b80) Data frame received for 3
I0529 13:23:39.156755 7 log.go:172] (0xc001c1f5e0) (3) Data frame handling
I0529 13:23:39.156771 7 log.go:172] (0xc001c1f5e0) (3) Data frame sent
I0529 13:23:39.156781 7 log.go:172] (0xc0024e3b80) Data frame received for 3
I0529 13:23:39.156796 7 log.go:172] (0xc001c1f5e0) (3) Data frame handling
I0529 13:23:39.159533 7 log.go:172] (0xc0024e3b80) Data frame received for 1
I0529 13:23:39.159646 7 log.go:172] (0xc001633cc0) (1) Data frame handling
I0529 13:23:39.159697 7 log.go:172] (0xc001633cc0) (1) Data frame sent
I0529 13:23:39.160337 7 log.go:172] (0xc0024e3b80) (0xc001633cc0) Stream removed, broadcasting: 1
I0529 13:23:39.160442 7 log.go:172] (0xc0024e3b80) (0xc001633cc0) Stream removed, broadcasting: 1
I0529 13:23:39.160475 7 log.go:172] (0xc0024e3b80) (0xc001c1f5e0) Stream removed, broadcasting: 3
I0529 13:23:39.160503 7 log.go:172] (0xc0024e3b80) (0xc001b33e00) Stream removed, broadcasting: 5
May 29 13:23:39.160: INFO: Exec stderr: ""
May 29 13:23:39.160: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4349 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 29 13:23:39.160: INFO: >>> kubeConfig: /root/.kube/config
I0529 13:23:39.163571 7 log.go:172] (0xc0024e3b80) Go away received
I0529 13:23:39.186569 7 log.go:172] (0xc0029789a0) (0xc00304c140) Create stream
I0529 13:23:39.186594 7 log.go:172] (0xc0029789a0) (0xc00304c140) Stream added, broadcasting: 1
I0529 13:23:39.188509 7 log.go:172] (0xc0029789a0) Reply frame received for 1
I0529 13:23:39.188533 7 log.go:172] (0xc0029789a0) (0xc0023c9360) Create stream
I0529 13:23:39.188540 7 log.go:172] (0xc0029789a0) (0xc0023c9360) Stream added, broadcasting: 3
I0529 13:23:39.189308 7 log.go:172] (0xc0029789a0) Reply frame received for 3
I0529 13:23:39.189327 7 log.go:172] (0xc0029789a0) (0xc001c1f720) Create stream
I0529 13:23:39.189337 7 log.go:172] (0xc0029789a0) (0xc001c1f720) Stream added, broadcasting: 5
I0529 13:23:39.189961 7 log.go:172] (0xc0029789a0) Reply frame received for 5
I0529 13:23:39.236742 7 log.go:172] (0xc0029789a0) Data frame received for 3
I0529 13:23:39.236782 7 log.go:172] (0xc0023c9360) (3) Data frame handling
I0529 13:23:39.236798 7 log.go:172] (0xc0023c9360) (3) Data frame sent
I0529 13:23:39.236815 7 log.go:172] (0xc0029789a0) Data frame received for 3
I0529 13:23:39.236829 7 log.go:172] (0xc0023c9360) (3) Data frame handling
I0529 13:23:39.236858 7 log.go:172] (0xc0029789a0) Data frame received for 5
I0529 13:23:39.236876 7 log.go:172] (0xc001c1f720) (5) Data frame handling
I0529 13:23:39.238295 7 log.go:172] (0xc0029789a0) Data frame received for 1
I0529 13:23:39.238355 7 log.go:172] (0xc00304c140) (1) Data frame handling
I0529 13:23:39.238403 7 log.go:172] (0xc00304c140) (1) Data frame sent
I0529 13:23:39.238450 7 log.go:172] (0xc0029789a0) (0xc00304c140) Stream removed, broadcasting: 1
I0529 13:23:39.238507 7 log.go:172] (0xc0029789a0) Go away received
I0529 13:23:39.238575 7 log.go:172] (0xc0029789a0) (0xc00304c140) Stream removed, broadcasting: 1
I0529 13:23:39.238603 7 log.go:172] (0xc0029789a0) (0xc0023c9360) Stream removed, broadcasting: 3
I0529 13:23:39.238613 7 log.go:172] (0xc0029789a0) (0xc001c1f720) Stream removed, broadcasting: 5
May 29 13:23:39.238: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 13:23:39.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4349" for this suite.
May 29 13:24:25.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 13:24:25.346: INFO: namespace e2e-kubelet-etc-hosts-4349 deletion completed in 46.10354954s
• [SLOW TEST:57.352 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
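All of the exec noise above is just cat /etc/hosts run in each container; the assertion is the presence or absence of the kubelet banner. By hand, against the pods this test creates (names from the log):

kubectl exec -n e2e-kubelet-etc-hosts-4349 test-pod -c busybox-1 -- cat /etc/hosts
# kubelet-managed: begins with "# Kubernetes-managed hosts file."
kubectl exec -n e2e-kubelet-etc-hosts-4349 test-host-network-pod -c busybox-1 -- cat /etc/hosts
# hostNetwork=true: the node's own /etc/hosts, no kubelet banner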
May 29 13:24:43.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:24:43.839: INFO: namespace nsdeletetest-2213 deletion completed in 6.096875796s • [SLOW TEST:18.493 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:24:43.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 29 13:24:43.898: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. May 29 13:24:44.703: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 29 13:24:46.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726355484, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726355484, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726355484, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726355484, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 29 13:24:48.889: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726355484, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726355484, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726355484, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63726355484, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 29 13:24:51.519: INFO: Waited 621.162382ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:24:52.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6394" for this suite. May 29 13:24:58.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:24:58.269: INFO: namespace aggregator-6394 deletion completed in 6.19799497s • [SLOW TEST:14.430 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:24:58.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 29 13:24:58.365: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1789,SelfLink:/api/v1/namespaces/watch-1789/configmaps/e2e-watch-test-label-changed,UID:35cff41b-7ad7-4412-91ef-d542dc59cf36,ResourceVersion:13548107,Generation:0,CreationTimestamp:2020-05-29 13:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 29 13:24:58.365: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1789,SelfLink:/api/v1/namespaces/watch-1789/configmaps/e2e-watch-test-label-changed,UID:35cff41b-7ad7-4412-91ef-d542dc59cf36,ResourceVersion:13548108,Generation:0,CreationTimestamp:2020-05-29 13:24:58 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 29 13:24:58.365: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1789,SelfLink:/api/v1/namespaces/watch-1789/configmaps/e2e-watch-test-label-changed,UID:35cff41b-7ad7-4412-91ef-d542dc59cf36,ResourceVersion:13548109,Generation:0,CreationTimestamp:2020-05-29 13:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 29 13:25:08.496: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1789,SelfLink:/api/v1/namespaces/watch-1789/configmaps/e2e-watch-test-label-changed,UID:35cff41b-7ad7-4412-91ef-d542dc59cf36,ResourceVersion:13548132,Generation:0,CreationTimestamp:2020-05-29 13:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 29 13:25:08.496: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1789,SelfLink:/api/v1/namespaces/watch-1789/configmaps/e2e-watch-test-label-changed,UID:35cff41b-7ad7-4412-91ef-d542dc59cf36,ResourceVersion:13548133,Generation:0,CreationTimestamp:2020-05-29 13:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 29 13:25:08.496: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1789,SelfLink:/api/v1/namespaces/watch-1789/configmaps/e2e-watch-test-label-changed,UID:35cff41b-7ad7-4412-91ef-d542dc59cf36,ResourceVersion:13548134,Generation:0,CreationTimestamp:2020-05-29 13:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:25:08.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1789" for this suite. May 29 13:25:14.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:25:14.575: INFO: namespace watch-1789 deletion completed in 6.072048199s • [SLOW TEST:16.306 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:25:14.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 29 13:25:14.624: INFO: Waiting up to 5m0s for pod "pod-3e21ba78-5b8a-41ac-a726-a652d6db37b2" in namespace "emptydir-4191" to be "success or failure" May 29 13:25:14.640: INFO: Pod "pod-3e21ba78-5b8a-41ac-a726-a652d6db37b2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.954888ms May 29 13:25:16.644: INFO: Pod "pod-3e21ba78-5b8a-41ac-a726-a652d6db37b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020094554s May 29 13:25:18.648: INFO: Pod "pod-3e21ba78-5b8a-41ac-a726-a652d6db37b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024412569s May 29 13:25:20.652: INFO: Pod "pod-3e21ba78-5b8a-41ac-a726-a652d6db37b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027658556s May 29 13:25:22.655: INFO: Pod "pod-3e21ba78-5b8a-41ac-a726-a652d6db37b2": Phase="Running", Reason="", readiness=true. Elapsed: 8.031388124s May 29 13:25:24.660: INFO: Pod "pod-3e21ba78-5b8a-41ac-a726-a652d6db37b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.036123703s STEP: Saw pod success May 29 13:25:24.660: INFO: Pod "pod-3e21ba78-5b8a-41ac-a726-a652d6db37b2" satisfied condition "success or failure" May 29 13:25:24.663: INFO: Trying to get logs from node iruya-worker2 pod pod-3e21ba78-5b8a-41ac-a726-a652d6db37b2 container test-container: STEP: delete the pod May 29 13:25:24.699: INFO: Waiting for pod pod-3e21ba78-5b8a-41ac-a726-a652d6db37b2 to disappear May 29 13:25:24.706: INFO: Pod pod-3e21ba78-5b8a-41ac-a726-a652d6db37b2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:25:24.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4191" for this suite. 
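
The pod this EmptyDir spec waits on is, in shape, an ordinary pod with an emptyDir mount whose test container writes a 0644 file as a non-root user and exits, so the pod can reach Succeeded. A minimal sketch of such a spec follows; the image, names, and UID are illustrative assumptions, not the test's actual manifest:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod builds a pod that writes a 0644 file into an emptyDir volume
// as a non-root user and then exits.
func emptyDirPod() *corev1.Pod {
	uid := int64(1000) // illustrative non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name:         "scratch",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox:1.29", // illustrative image
				Command:      []string{"sh", "-c", "touch /scratch/f && chmod 0644 /scratch/f && ls -l /scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
}
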
May 29 13:25:30.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:25:30.801: INFO: namespace emptydir-4191 deletion completed in 6.092015768s • [SLOW TEST:16.226 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:25:30.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info May 29 13:25:30.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 29 13:25:33.915: INFO: stderr: "" May 29 13:25:33.915: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:25:33.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9174" for this suite. 
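
What the framework did in this kubectl spec, shelling out to kubectl and scanning stdout for the highlighted master line, can be reproduced with a short Go sketch; the kubeconfig path is a parameter and the control-plane wording check covers both old and new kubectl output:

package sketch

import (
	"fmt"
	"os/exec"
	"strings"
)

// clusterInfoMentionsMaster runs `kubectl cluster-info` and reports whether
// its output mentions the master/control-plane endpoint, ANSI colors and all.
func clusterInfoMentionsMaster(kubeconfig string) (bool, error) {
	out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig, "cluster-info").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("kubectl cluster-info: %v\n%s", err, out)
	}
	return strings.Contains(string(out), "Kubernetes master") ||
		strings.Contains(string(out), "Kubernetes control plane"), nil
}
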
May 29 13:25:39.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:25:40.030: INFO: namespace kubectl-9174 deletion completed in 6.110927877s • [SLOW TEST:9.227 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:25:40.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all May 29 13:25:40.099: INFO: Waiting up to 5m0s for pod "client-containers-99b353d4-b5c2-4eb7-8d53-575bacd12112" in namespace "containers-412" to be "success or failure" May 29 13:25:40.106: INFO: Pod "client-containers-99b353d4-b5c2-4eb7-8d53-575bacd12112": Phase="Pending", Reason="", readiness=false. Elapsed: 6.700356ms May 29 13:25:42.122: INFO: Pod "client-containers-99b353d4-b5c2-4eb7-8d53-575bacd12112": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022944763s May 29 13:25:44.126: INFO: Pod "client-containers-99b353d4-b5c2-4eb7-8d53-575bacd12112": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027034076s STEP: Saw pod success May 29 13:25:44.126: INFO: Pod "client-containers-99b353d4-b5c2-4eb7-8d53-575bacd12112" satisfied condition "success or failure" May 29 13:25:44.129: INFO: Trying to get logs from node iruya-worker pod client-containers-99b353d4-b5c2-4eb7-8d53-575bacd12112 container test-container: STEP: delete the pod May 29 13:25:44.175: INFO: Waiting for pod client-containers-99b353d4-b5c2-4eb7-8d53-575bacd12112 to disappear May 29 13:25:44.190: INFO: Pod client-containers-99b353d4-b5c2-4eb7-8d53-575bacd12112 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:25:44.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-412" for this suite. 
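
The "override all" pod in this Docker Containers spec relies on the rule that a container's command replaces the image ENTRYPOINT and its args replace the image CMD. A hedged sketch of such a spec, with illustrative names and image:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overridePod replaces both the image's default entrypoint (Command) and its
// default arguments (Args); the container runs exactly what is set here.
func overridePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "client-containers-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",                    // illustrative image
				Command: []string{"/bin/echo"},             // overrides ENTRYPOINT
				Args:    []string{"override", "arguments"}, // overrides CMD
			}},
		},
	}
}
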
May 29 13:25:50.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:25:50.313: INFO: namespace containers-412 deletion completed in 6.120369765s • [SLOW TEST:10.283 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:25:50.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 13:25:50.376: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 5.207248ms)
May 29 13:25:50.380: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.845695ms)
May 29 13:25:50.383: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.515532ms)
May 29 13:25:50.387: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.665723ms)
May 29 13:25:50.390: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.452493ms)
May 29 13:25:50.394: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.334897ms)
May 29 13:25:50.397: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.295682ms)
May 29 13:25:50.400: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.079429ms)
May 29 13:25:50.404: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.666871ms)
May 29 13:25:50.408: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.673203ms)
May 29 13:25:50.411: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.343392ms)
May 29 13:25:50.415: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 4.247871ms)
May 29 13:25:50.419: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.702494ms)
May 29 13:25:50.423: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.898899ms)
May 29 13:25:50.426: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.205483ms)
May 29 13:25:50.430: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.400938ms)
May 29 13:25:50.433: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.187668ms)
May 29 13:25:50.436: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.835434ms)
May 29 13:25:50.439: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.964456ms)
May 29 13:25:50.442: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 2.693406ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:25:50.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8111" for this suite. May 29 13:25:56.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:25:56.546: INFO: namespace proxy-8111 deletion completed in 6.101852403s • [SLOW TEST:6.233 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:25:56.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9682 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 29 13:25:56.615: INFO: Found 0 stateful pods, waiting for 3 May 29 13:26:06.620: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 29 13:26:06.620: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 29 13:26:06.620: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 29 13:26:16.620: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 29 13:26:16.620: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 29 13:26:16.620: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 29 13:26:16.650: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 29 13:26:26.709: INFO: Updating stateful set ss2 May 29 13:26:26.736: INFO: Waiting for Pod statefulset-9682/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 29 13:26:36.885: INFO: Found 2 stateful pods, 
waiting for 3 May 29 13:26:46.891: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 29 13:26:46.891: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 29 13:26:46.891: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 29 13:26:46.916: INFO: Updating stateful set ss2 May 29 13:26:46.972: INFO: Waiting for Pod statefulset-9682/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 29 13:26:56.997: INFO: Updating stateful set ss2 May 29 13:26:57.039: INFO: Waiting for StatefulSet statefulset-9682/ss2 to complete update May 29 13:26:57.039: INFO: Waiting for Pod statefulset-9682/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 29 13:27:07.049: INFO: Deleting all statefulset in ns statefulset-9682 May 29 13:27:07.052: INFO: Scaling statefulset ss2 to 0 May 29 13:27:17.070: INFO: Waiting for statefulset status.replicas updated to 0 May 29 13:27:17.074: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:27:17.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9682" for this suite. May 29 13:27:23.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:27:23.218: INFO: namespace statefulset-9682 deletion completed in 6.099686005s • [SLOW TEST:86.671 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:27:23.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 29 13:27:27.293: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-2486712f-9b47-4775-802f-525df7f67303,GenerateName:,Namespace:events-5284,SelfLink:/api/v1/namespaces/events-5284/pods/send-events-2486712f-9b47-4775-802f-525df7f67303,UID:3d3ad6cf-94c8-4aa2-9556-ff6c0661c87b,ResourceVersion:13548739,Generation:0,CreationTimestamp:2020-05-29 13:27:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 265734097,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kzddb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kzddb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-kzddb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003302b80} {node.kubernetes.io/unreachable Exists NoExecute 0xc003302ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:27:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:27:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:27:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:27:23 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.142,StartTime:2020-05-29 13:27:23 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-29 13:27:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://ef9477adf33615002d16b138c4498210e3f6d2265a4bb29504c72ef8527ceb89}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 29 13:27:29.298: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 29 13:27:31.303: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:27:31.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5284" for this suite. 
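
The scheduler and kubelet checks above boil down to listing Events whose involvedObject matches the pod and whose source is the expected component. A minimal client-go sketch of that query; the field-selector keys are the standard event selectors, everything else (clientset wiring, names) is assumed:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
)

// eventsFromSource lists events about one pod emitted by one component,
// e.g. "default-scheduler" or "kubelet".
func eventsFromSource(ctx context.Context, cs kubernetes.Interface, ns, pod, source string) ([]corev1.Event, error) {
	sel := fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      pod,
		"involvedObject.namespace": ns,
		"source":                   source,
	}.AsSelector().String()
	list, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{FieldSelector: sel})
	if err != nil {
		return nil, err
	}
	return list.Items, nil
}
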
May 29 13:28:13.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:28:13.468: INFO: namespace events-5284 deletion completed in 42.137783279s • [SLOW TEST:50.250 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:28:13.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-7ccc0159-7b52-4352-93ca-3edb10e72e4c STEP: Creating the pod STEP: Updating configmap configmap-test-upd-7ccc0159-7b52-4352-93ca-3edb10e72e4c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:28:19.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9116" for this suite. May 29 13:28:41.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:28:41.694: INFO: namespace configmap-9116 deletion completed in 22.107748331s • [SLOW TEST:28.224 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:28:41.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 29 13:28:41.770: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 13:28:41.795: INFO: Number of nodes with available pods: 0 May 29 13:28:41.795: INFO: Node iruya-worker is running more than one daemon pod May 29 13:28:42.800: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 13:28:42.802: INFO: Number of nodes with available pods: 0 May 29 13:28:42.803: INFO: Node iruya-worker is running more than one daemon pod May 29 13:28:43.857: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 13:28:43.860: INFO: Number of nodes with available pods: 0 May 29 13:28:43.860: INFO: Node iruya-worker is running more than one daemon pod May 29 13:28:44.802: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 13:28:44.805: INFO: Number of nodes with available pods: 0 May 29 13:28:44.805: INFO: Node iruya-worker is running more than one daemon pod May 29 13:28:45.799: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 13:28:45.802: INFO: Number of nodes with available pods: 0 May 29 13:28:45.802: INFO: Node iruya-worker is running more than one daemon pod May 29 13:28:46.800: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 13:28:46.804: INFO: Number of nodes with available pods: 2 May 29 13:28:46.804: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 29 13:28:46.823: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 13:28:46.862: INFO: Number of nodes with available pods: 2 May 29 13:28:46.862: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
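
The repeated "DaemonSet pods can't tolerate node iruya-control-plane" lines above are expected: that node carries a node-role.kubernetes.io/master:NoSchedule taint and the test's DaemonSet declares no matching toleration, so the framework skips it when counting nodes. A DaemonSet that should also land on such nodes would add a toleration along these lines (a hypothetical addition, not part of this test):

package sketch

import corev1 "k8s.io/api/core/v1"

// controlPlaneToleration lets a DaemonSet's pods schedule onto nodes tainted
// with node-role.kubernetes.io/master:NoSchedule. It would be appended to the
// pod template's spec.tolerations.
func controlPlaneToleration() corev1.Toleration {
	return corev1.Toleration{
		Key:      "node-role.kubernetes.io/master", // the taint key seen in the log
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
}
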
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6014, will wait for the garbage collector to delete the pods May 29 13:28:47.978: INFO: Deleting DaemonSet.extensions daemon-set took: 5.25002ms May 29 13:28:48.279: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.274539ms May 29 13:29:02.283: INFO: Number of nodes with available pods: 0 May 29 13:29:02.283: INFO: Number of running nodes: 0, number of available pods: 0 May 29 13:29:02.287: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6014/daemonsets","resourceVersion":"13549023"},"items":null} May 29 13:29:02.289: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6014/pods","resourceVersion":"13549023"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:29:02.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6014" for this suite. May 29 13:29:08.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:29:08.410: INFO: namespace daemonsets-6014 deletion completed in 6.109695471s • [SLOW TEST:26.716 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:29:08.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 29 13:29:16.589: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 29 13:29:16.595: INFO: Pod pod-with-poststart-exec-hook still exists May 29 13:29:18.595: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 29 13:29:18.599: INFO: Pod pod-with-poststart-exec-hook still exists May 29 13:29:20.595: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 29 13:29:20.599: INFO: Pod pod-with-poststart-exec-hook still exists May 29 13:29:22.595: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 29 13:29:22.600: INFO: Pod pod-with-poststart-exec-hook still exists May 29 13:29:24.595: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 29 13:29:24.600: INFO: Pod pod-with-poststart-exec-hook still exists May 29 13:29:26.595: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 29 13:29:26.599: INFO: Pod pod-with-poststart-exec-hook still exists May 29 13:29:28.595: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 29 13:29:28.604: INFO: Pod pod-with-poststart-exec-hook still exists May 29 13:29:30.595: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 29 13:29:30.598: INFO: Pod pod-with-poststart-exec-hook still exists May 29 13:29:32.595: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 29 13:29:32.604: INFO: Pod pod-with-poststart-exec-hook still exists May 29 13:29:34.595: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 29 13:29:34.599: INFO: Pod pod-with-poststart-exec-hook still exists May 29 13:29:36.595: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 29 13:29:36.600: INFO: Pod pod-with-poststart-exec-hook still exists May 29 13:29:38.595: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 29 13:29:38.600: INFO: Pod pod-with-poststart-exec-hook still exists May 29 13:29:40.595: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 29 13:29:40.599: INFO: Pod pod-with-poststart-exec-hook still exists May 29 13:29:42.595: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 29 13:29:42.617: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:29:42.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4631" for this suite. 
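
For orientation, a pod with a PostStart exec hook of the kind this spec creates looks roughly like the sketch below. The image and commands are assumptions; note also that recent client-go names the handler type LifecycleHandler, which older releases called Handler:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithPostStart runs a hook right after the container is created; the
// container is not marked Running until the hook completes, and a failing
// hook kills the container.
func podWithPostStart() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "echo poststart"}},
					},
				},
			}},
		},
	}
}
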
May 29 13:30:04.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:30:04.736: INFO: namespace container-lifecycle-hook-4631 deletion completed in 22.114132086s • [SLOW TEST:56.325 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:30:04.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-5075 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-5075 STEP: Deleting pre-stop pod May 29 13:30:17.893: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:30:17.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5075" for this suite. 
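
The "prestop": 1 entry in the JSON above is the server pod counting one report from the tester pod's PreStop hook. In spec form such a hook is a lifecycle handler that fires before the container is killed; the sketch below uses an HTTPGet handler for brevity, while the e2e tester itself may report via an exec hook, and the endpoint and port are invented for illustration:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// preStopHook reports to a collector before the container is terminated.
func preStopHook(collectorIP string) *corev1.Lifecycle {
	return &corev1.Lifecycle{
		PreStop: &corev1.LifecycleHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: collectorIP,          // collector pod IP (assumed)
				Path: "/prestop",           // hypothetical endpoint
				Port: intstr.FromInt(8080), // hypothetical port
			},
		},
	}
}
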
May 29 13:30:55.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:30:56.018: INFO: namespace prestop-5075 deletion completed in 38.11044016s • [SLOW TEST:51.282 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:30:56.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token May 29 13:30:56.630: INFO: created pod pod-service-account-defaultsa May 29 13:30:56.630: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 29 13:30:56.654: INFO: created pod pod-service-account-mountsa May 29 13:30:56.654: INFO: pod pod-service-account-mountsa service account token volume mount: true May 29 13:30:56.684: INFO: created pod pod-service-account-nomountsa May 29 13:30:56.684: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 29 13:30:56.696: INFO: created pod pod-service-account-defaultsa-mountspec May 29 13:30:56.696: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 29 13:30:56.726: INFO: created pod pod-service-account-mountsa-mountspec May 29 13:30:56.726: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 29 13:30:56.774: INFO: created pod pod-service-account-nomountsa-mountspec May 29 13:30:56.774: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 29 13:30:56.777: INFO: created pod pod-service-account-defaultsa-nomountspec May 29 13:30:56.777: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 29 13:30:56.806: INFO: created pod pod-service-account-mountsa-nomountspec May 29 13:30:56.806: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 29 13:30:56.847: INFO: created pod pod-service-account-nomountsa-nomountspec May 29 13:30:56.847: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:30:56.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2471" for this suite. 
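
The nine pods above cover the combinations this spec cares about: automount can be set on the ServiceAccount, set on the pod spec, or left unset, and the pod-level field wins when both are present. Opting a single pod out looks like this sketch (names and image illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithoutToken opts out of API token automount at the pod level; this
// overrides whatever the referenced ServiceAccount declares.
func podWithoutToken() *corev1.Pod {
	no := false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-service-account-"},
		Spec: corev1.PodSpec{
			ServiceAccountName:           "default",
			AutomountServiceAccountToken: &no,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox:1.29", // illustrative
			}},
		},
	}
}
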
May 29 13:31:25.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:31:25.139: INFO: namespace svcaccounts-2471 deletion completed in 28.213697515s • [SLOW TEST:29.120 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:31:25.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-93727429-5c37-4087-a473-292e99896ae5 STEP: Creating a pod to test consume secrets May 29 13:31:25.202: INFO: Waiting up to 5m0s for pod "pod-secrets-6846d650-6607-4f3b-b93c-0d02990a2773" in namespace "secrets-3380" to be "success or failure" May 29 13:31:25.206: INFO: Pod "pod-secrets-6846d650-6607-4f3b-b93c-0d02990a2773": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131829ms May 29 13:31:27.211: INFO: Pod "pod-secrets-6846d650-6607-4f3b-b93c-0d02990a2773": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009475848s May 29 13:31:29.215: INFO: Pod "pod-secrets-6846d650-6607-4f3b-b93c-0d02990a2773": Phase="Running", Reason="", readiness=true. Elapsed: 4.013291901s May 29 13:31:31.219: INFO: Pod "pod-secrets-6846d650-6607-4f3b-b93c-0d02990a2773": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017435471s STEP: Saw pod success May 29 13:31:31.219: INFO: Pod "pod-secrets-6846d650-6607-4f3b-b93c-0d02990a2773" satisfied condition "success or failure" May 29 13:31:31.222: INFO: Trying to get logs from node iruya-worker pod pod-secrets-6846d650-6607-4f3b-b93c-0d02990a2773 container secret-volume-test: STEP: delete the pod May 29 13:31:31.238: INFO: Waiting for pod pod-secrets-6846d650-6607-4f3b-b93c-0d02990a2773 to disappear May 29 13:31:31.242: INFO: Pod pod-secrets-6846d650-6607-4f3b-b93c-0d02990a2773 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:31:31.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3380" for this suite. 
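
The consuming pod in this Secrets spec mounts the secret as a volume and the test container reads the projected file back. A sketch of just the volume wiring; the mount path is an assumption, and 0644 is the documented default projection mode, shown explicitly here:

package sketch

import corev1 "k8s.io/api/core/v1"

// secretVolume projects the named Secret's keys as read-only files under the
// mount path of any container that mounts it.
func secretVolume(secretName string) (corev1.Volume, corev1.VolumeMount) {
	mode := int32(0644)
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  secretName,
				DefaultMode: &mode,
			},
		},
	}
	mount := corev1.VolumeMount{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}
	return vol, mount
}
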
May 29 13:31:37.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:31:37.339: INFO: namespace secrets-3380 deletion completed in 6.0945551s • [SLOW TEST:12.201 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:31:37.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args May 29 13:31:37.429: INFO: Waiting up to 5m0s for pod "var-expansion-cf2d112c-ac87-4728-81ff-7eaad05f9155" in namespace "var-expansion-7540" to be "success or failure" May 29 13:31:37.435: INFO: Pod "var-expansion-cf2d112c-ac87-4728-81ff-7eaad05f9155": Phase="Pending", Reason="", readiness=false. Elapsed: 5.503201ms May 29 13:31:39.438: INFO: Pod "var-expansion-cf2d112c-ac87-4728-81ff-7eaad05f9155": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008912173s May 29 13:31:41.443: INFO: Pod "var-expansion-cf2d112c-ac87-4728-81ff-7eaad05f9155": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013466132s STEP: Saw pod success May 29 13:31:41.443: INFO: Pod "var-expansion-cf2d112c-ac87-4728-81ff-7eaad05f9155" satisfied condition "success or failure" May 29 13:31:41.446: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-cf2d112c-ac87-4728-81ff-7eaad05f9155 container dapi-container: STEP: delete the pod May 29 13:31:41.467: INFO: Waiting for pod var-expansion-cf2d112c-ac87-4728-81ff-7eaad05f9155 to disappear May 29 13:31:41.471: INFO: Pod var-expansion-cf2d112c-ac87-4728-81ff-7eaad05f9155 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:31:41.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7540" for this suite. 
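
The substitution exercised here is kubelet-side $(VAR) expansion: an environment variable declared on the container can be referenced from its command or args, and the kubelet rewrites the reference before exec. A minimal sketch with illustrative values:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// varExpansionPod echoes an expanded environment variable: the kubelet
// substitutes $(MESSAGE) in Args using the container's Env.
func varExpansionPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "var-expansion-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29", // illustrative
				Command: []string{"sh", "-c"},
				Args:    []string{"echo $(MESSAGE)"},
				Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from args"}},
			}},
		},
	}
}
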
May 29 13:31:47.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:31:47.557: INFO: namespace var-expansion-7540 deletion completed in 6.082972973s • [SLOW TEST:10.217 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:31:47.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 29 13:31:47.648: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e2c8f75-24f7-4867-ba24-61eb68200396" in namespace "downward-api-180" to be "success or failure" May 29 13:31:47.657: INFO: Pod "downwardapi-volume-1e2c8f75-24f7-4867-ba24-61eb68200396": Phase="Pending", Reason="", readiness=false. Elapsed: 8.988201ms May 29 13:31:49.703: INFO: Pod "downwardapi-volume-1e2c8f75-24f7-4867-ba24-61eb68200396": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054469334s May 29 13:31:51.707: INFO: Pod "downwardapi-volume-1e2c8f75-24f7-4867-ba24-61eb68200396": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058864531s STEP: Saw pod success May 29 13:31:51.707: INFO: Pod "downwardapi-volume-1e2c8f75-24f7-4867-ba24-61eb68200396" satisfied condition "success or failure" May 29 13:31:51.710: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1e2c8f75-24f7-4867-ba24-61eb68200396 container client-container: STEP: delete the pod May 29 13:31:51.771: INFO: Waiting for pod downwardapi-volume-1e2c8f75-24f7-4867-ba24-61eb68200396 to disappear May 29 13:31:51.802: INFO: Pod downwardapi-volume-1e2c8f75-24f7-4867-ba24-61eb68200396 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:31:51.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-180" for this suite. 
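
The "podname only" case projects just metadata.name through a downwardAPI volume, and the client-container then reads the file back. A sketch of that volume source, with the volume name and file path assumed:

package sketch

import corev1 "k8s.io/api/core/v1"

// podnameVolume exposes the pod's own name as a file named "podname"
// inside a downwardAPI volume.
func podnameVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
}
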
May 29 13:31:57.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:31:57.934: INFO: namespace downward-api-180 deletion completed in 6.128240312s • [SLOW TEST:10.376 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:31:57.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-dcc7a4f6-44c7-4f98-9d94-126319c077fc [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:31:58.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-670" for this suite. May 29 13:32:04.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:32:04.184: INFO: namespace configmap-670 deletion completed in 6.180874149s • [SLOW TEST:6.250 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:32:04.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 29 13:32:04.292: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7685,SelfLink:/api/v1/namespaces/watch-7685/configmaps/e2e-watch-test-configmap-a,UID:28c9dd3d-ae54-40b7-9c2e-a18270f0592d,ResourceVersion:13549663,Generation:0,CreationTimestamp:2020-05-29 13:32:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 29 13:32:04.292: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7685,SelfLink:/api/v1/namespaces/watch-7685/configmaps/e2e-watch-test-configmap-a,UID:28c9dd3d-ae54-40b7-9c2e-a18270f0592d,ResourceVersion:13549663,Generation:0,CreationTimestamp:2020-05-29 13:32:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 29 13:32:14.302: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7685,SelfLink:/api/v1/namespaces/watch-7685/configmaps/e2e-watch-test-configmap-a,UID:28c9dd3d-ae54-40b7-9c2e-a18270f0592d,ResourceVersion:13549683,Generation:0,CreationTimestamp:2020-05-29 13:32:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 29 13:32:14.302: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7685,SelfLink:/api/v1/namespaces/watch-7685/configmaps/e2e-watch-test-configmap-a,UID:28c9dd3d-ae54-40b7-9c2e-a18270f0592d,ResourceVersion:13549683,Generation:0,CreationTimestamp:2020-05-29 13:32:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 29 13:32:24.311: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7685,SelfLink:/api/v1/namespaces/watch-7685/configmaps/e2e-watch-test-configmap-a,UID:28c9dd3d-ae54-40b7-9c2e-a18270f0592d,ResourceVersion:13549703,Generation:0,CreationTimestamp:2020-05-29 13:32:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 29 
13:32:24.311: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7685,SelfLink:/api/v1/namespaces/watch-7685/configmaps/e2e-watch-test-configmap-a,UID:28c9dd3d-ae54-40b7-9c2e-a18270f0592d,ResourceVersion:13549703,Generation:0,CreationTimestamp:2020-05-29 13:32:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 29 13:32:34.318: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7685,SelfLink:/api/v1/namespaces/watch-7685/configmaps/e2e-watch-test-configmap-a,UID:28c9dd3d-ae54-40b7-9c2e-a18270f0592d,ResourceVersion:13549723,Generation:0,CreationTimestamp:2020-05-29 13:32:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 29 13:32:34.318: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7685,SelfLink:/api/v1/namespaces/watch-7685/configmaps/e2e-watch-test-configmap-a,UID:28c9dd3d-ae54-40b7-9c2e-a18270f0592d,ResourceVersion:13549723,Generation:0,CreationTimestamp:2020-05-29 13:32:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 29 13:32:44.345: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7685,SelfLink:/api/v1/namespaces/watch-7685/configmaps/e2e-watch-test-configmap-b,UID:a71b6110-9220-419e-856e-581b8239b810,ResourceVersion:13549744,Generation:0,CreationTimestamp:2020-05-29 13:32:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 29 13:32:44.345: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7685,SelfLink:/api/v1/namespaces/watch-7685/configmaps/e2e-watch-test-configmap-b,UID:a71b6110-9220-419e-856e-581b8239b810,ResourceVersion:13549744,Generation:0,CreationTimestamp:2020-05-29 13:32:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 29 13:32:54.351: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7685,SelfLink:/api/v1/namespaces/watch-7685/configmaps/e2e-watch-test-configmap-b,UID:a71b6110-9220-419e-856e-581b8239b810,ResourceVersion:13549764,Generation:0,CreationTimestamp:2020-05-29 13:32:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 29 13:32:54.351: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7685,SelfLink:/api/v1/namespaces/watch-7685/configmaps/e2e-watch-test-configmap-b,UID:a71b6110-9220-419e-856e-581b8239b810,ResourceVersion:13549764,Generation:0,CreationTimestamp:2020-05-29 13:32:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:33:04.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7685" for this suite. May 29 13:33:10.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:33:10.481: INFO: namespace watch-7685 deletion completed in 6.12488315s • [SLOW TEST:66.295 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:33:10.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
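------------------------------
The Watchers spec that completed above sets up three plain label-selected watches on the ConfigMap resource (label A, label B, A-or-B); every watcher whose selector matches an object receives the same ADDED/MODIFIED/DELETED stream, which is why each event is logged twice. A minimal client-go sketch, assuming the pre-context Watch signature of this suite's 1.15-era client-go and an illustrative selector:

package sketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchConfigMaps opens a label-selected watch on ConfigMaps in ns and
// prints each event, mirroring the "Got : ADDED/MODIFIED/DELETED" lines.
func watchConfigMaps(cs kubernetes.Interface, ns string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// ev.Type is ADDED, MODIFIED, or DELETED; ev.Object is the ConfigMap.
		fmt.Printf("Got : %v %v\n", ev.Type, ev.Object)
	}
	return nil
}
------------------------------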
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 29 13:33:18.604: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 29 13:33:18.622: INFO: Pod pod-with-prestop-http-hook still exists May 29 13:33:20.622: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 29 13:33:20.626: INFO: Pod pod-with-prestop-http-hook still exists May 29 13:33:22.622: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 29 13:33:22.627: INFO: Pod pod-with-prestop-http-hook still exists May 29 13:33:24.622: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 29 13:33:24.627: INFO: Pod pod-with-prestop-http-hook still exists May 29 13:33:26.622: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 29 13:33:26.626: INFO: Pod pod-with-prestop-http-hook still exists May 29 13:33:28.622: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 29 13:33:28.627: INFO: Pod pod-with-prestop-http-hook still exists May 29 13:33:30.622: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 29 13:33:30.627: INFO: Pod pod-with-prestop-http-hook still exists May 29 13:33:32.622: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 29 13:33:32.627: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:33:32.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9013" for this suite. 
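------------------------------
The "pod with lifecycle hook" deleted above carries a preStop HTTP handler: when the pod is deleted, the kubelet performs the HTTP GET against the handler pod created in BeforeEach before stopping the container, and the "check prestop hook" step then verifies the handler received the request; the repeated "still exists" polling spans the graceful-deletion window in which the hook fires. A rough sketch of such a pod; in this 1.15 API the hook type is v1.Handler (renamed LifecycleHandler in later releases), and the host, port, path, and image here are illustrative:

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// preStopHTTPPod returns a pod whose preStop hook issues an HTTP GET to
// the handler at handlerIP when the pod is deleted.
func preStopHTTPPod(handlerIP string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // illustrative
				Lifecycle: &v1.Lifecycle{
					PreStop: &v1.Handler{
						HTTPGet: &v1.HTTPGetAction{
							Path: "/echo?msg=prestop", // illustrative endpoint
							Host: handlerIP,
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
}
------------------------------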
May 29 13:33:54.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:33:54.730: INFO: namespace container-lifecycle-hook-9013 deletion completed in 22.092719436s • [SLOW TEST:44.249 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:33:54.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 29 13:33:54.769: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e54f021-101c-457a-9b8d-be892f77f51d" in namespace "downward-api-8792" to be "success or failure" May 29 13:33:54.786: INFO: Pod "downwardapi-volume-2e54f021-101c-457a-9b8d-be892f77f51d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.74035ms May 29 13:33:56.791: INFO: Pod "downwardapi-volume-2e54f021-101c-457a-9b8d-be892f77f51d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021535344s May 29 13:33:58.796: INFO: Pod "downwardapi-volume-2e54f021-101c-457a-9b8d-be892f77f51d": Phase="Running", Reason="", readiness=true. Elapsed: 4.026066969s May 29 13:34:00.801: INFO: Pod "downwardapi-volume-2e54f021-101c-457a-9b8d-be892f77f51d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031650943s STEP: Saw pod success May 29 13:34:00.801: INFO: Pod "downwardapi-volume-2e54f021-101c-457a-9b8d-be892f77f51d" satisfied condition "success or failure" May 29 13:34:00.804: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2e54f021-101c-457a-9b8d-be892f77f51d container client-container: STEP: delete the pod May 29 13:34:00.831: INFO: Waiting for pod downwardapi-volume-2e54f021-101c-457a-9b8d-be892f77f51d to disappear May 29 13:34:00.854: INFO: Pod downwardapi-volume-2e54f021-101c-457a-9b8d-be892f77f51d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:34:00.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8792" for this suite. 
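------------------------------
The memory-request spec above is the resource flavour of the downward API volume: instead of a fieldRef on pod metadata, a resourceFieldRef projects the container's own requests.memory into a file. A minimal sketch with illustrative names and sizes; note that the divisor defaults to "1", so a 32Mi request is written as the byte count 33554432:

package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memoryRequestPod projects the container's memory request into
// /etc/podinfo/memory_request and prints it.
func memoryRequestPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					DownwardAPI: &v1.DownwardAPIVolumeSource{
						Items: []v1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &v1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory", // divisor defaults to "1" (bytes)
							},
						}},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{v1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}
------------------------------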
May 29 13:34:06.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:34:06.952: INFO: namespace downward-api-8792 deletion completed in 6.094274505s • [SLOW TEST:12.221 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:34:06.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 29 13:34:07.014: INFO: Waiting up to 5m0s for pod "downward-api-8c66e013-034c-40be-a015-7280af849636" in namespace "downward-api-3691" to be "success or failure" May 29 13:34:07.024: INFO: Pod "downward-api-8c66e013-034c-40be-a015-7280af849636": Phase="Pending", Reason="", readiness=false. Elapsed: 9.806335ms May 29 13:34:09.029: INFO: Pod "downward-api-8c66e013-034c-40be-a015-7280af849636": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014288882s May 29 13:34:11.033: INFO: Pod "downward-api-8c66e013-034c-40be-a015-7280af849636": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018764524s STEP: Saw pod success May 29 13:34:11.033: INFO: Pod "downward-api-8c66e013-034c-40be-a015-7280af849636" satisfied condition "success or failure" May 29 13:34:11.036: INFO: Trying to get logs from node iruya-worker pod downward-api-8c66e013-034c-40be-a015-7280af849636 container dapi-container: STEP: delete the pod May 29 13:34:11.055: INFO: Waiting for pod downward-api-8c66e013-034c-40be-a015-7280af849636 to disappear May 29 13:34:11.090: INFO: Pod downward-api-8c66e013-034c-40be-a015-7280af849636 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:34:11.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3691" for this suite. 
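------------------------------
The host-IP spec above uses the env-var flavour of the downward API: status.hostIP is resolved once at container start and injected as an environment variable, matching the HostIP values (e.g. 172.17.0.5) seen in the pod dumps elsewhere in this log. A minimal sketch with illustrative names:

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostIPEnvPod injects the node's IP into the container as HOST_IP via a
// fieldRef on status.hostIP, then echoes it so the log can be checked.
func hostIPEnvPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []v1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &v1.EnvVarSource{
						FieldRef: &v1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
}
------------------------------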
May 29 13:34:17.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:34:17.284: INFO: namespace downward-api-3691 deletion completed in 6.147343391s • [SLOW TEST:10.332 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:34:17.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 13:34:17.360: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 29 13:34:22.365: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 29 13:34:22.365: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 29 13:34:24.370: INFO: Creating deployment "test-rollover-deployment" May 29 13:34:24.412: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 29 13:34:26.425: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 29 13:34:26.432: INFO: Ensure that both replica sets have 1 created replica May 29 13:34:26.438: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 29 13:34:26.447: INFO: Updating deployment test-rollover-deployment May 29 13:34:26.447: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 29 13:34:28.460: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 29 13:34:28.467: INFO: Make sure deployment "test-rollover-deployment" is complete May 29 13:34:28.472: INFO: all replica sets need to contain the pod-template-hash label May 29 13:34:28.472: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356066, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 
29 13:34:30.483: INFO: all replica sets need to contain the pod-template-hash label May 29 13:34:30.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356069, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 29 13:34:32.481: INFO: all replica sets need to contain the pod-template-hash label May 29 13:34:32.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356069, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 29 13:34:34.481: INFO: all replica sets need to contain the pod-template-hash label May 29 13:34:34.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356069, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 29 13:34:36.482: INFO: all replica sets need to contain the pod-template-hash label May 29 13:34:36.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356069, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 29 13:34:38.481: INFO: all replica sets need to contain the pod-template-hash label May 29 13:34:38.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356069, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356064, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 29 13:34:40.480: INFO: May 29 13:34:40.480: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 29 13:34:40.487: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7986,SelfLink:/apis/apps/v1/namespaces/deployment-7986/deployments/test-rollover-deployment,UID:1156bfe9-6aad-4ba1-bb15-b7a8c518aa57,ResourceVersion:13550138,Generation:2,CreationTimestamp:2020-05-29 13:34:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-29 13:34:24 +0000 UTC 2020-05-29 13:34:24 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-29 13:34:40 +0000 UTC 2020-05-29 13:34:24 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 29 13:34:40.490: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-7986,SelfLink:/apis/apps/v1/namespaces/deployment-7986/replicasets/test-rollover-deployment-854595fc44,UID:d9ce1918-3de5-4102-b1c2-5760eea65635,ResourceVersion:13550127,Generation:2,CreationTimestamp:2020-05-29 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1156bfe9-6aad-4ba1-bb15-b7a8c518aa57 0xc00329fcf7 0xc00329fcf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 29 13:34:40.490: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 29 13:34:40.490: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7986,SelfLink:/apis/apps/v1/namespaces/deployment-7986/replicasets/test-rollover-controller,UID:cb9ad71e-924b-41b2-bfdc-98814162cde5,ResourceVersion:13550136,Generation:2,CreationTimestamp:2020-05-29 13:34:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1156bfe9-6aad-4ba1-bb15-b7a8c518aa57 0xc00329fc27 0xc00329fc28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 29 13:34:40.490: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-7986,SelfLink:/apis/apps/v1/namespaces/deployment-7986/replicasets/test-rollover-deployment-9b8b997cf,UID:886f4aa7-b837-4f1d-8012-3a8179d4b95b,ResourceVersion:13550089,Generation:2,CreationTimestamp:2020-05-29 13:34:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1156bfe9-6aad-4ba1-bb15-b7a8c518aa57 0xc00329fdd0 0xc00329fdd1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 29 13:34:40.493: INFO: Pod "test-rollover-deployment-854595fc44-rv582" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-rv582,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-7986,SelfLink:/api/v1/namespaces/deployment-7986/pods/test-rollover-deployment-854595fc44-rv582,UID:21ad627c-a932-4817-ae30-48d1d0bc773c,ResourceVersion:13550105,Generation:0,CreationTimestamp:2020-05-29 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 d9ce1918-3de5-4102-b1c2-5760eea65635 0xc00299ea17 0xc00299ea18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2r86x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2r86x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-2r86x true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00299ea90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00299eab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:34:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:34:29 +0000 UTC } {ContainersReady True 0001-01-01 
00:00:00 +0000 UTC 2020-05-29 13:34:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:34:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.156,StartTime:2020-05-29 13:34:26 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-29 13:34:29 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://b153c3e0de7d75870c6d461777cde266575ae6bebca49c1641f8d11e2a8b7c7c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:34:40.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7986" for this suite. May 29 13:34:46.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:34:46.705: INFO: namespace deployment-7986 deletion completed in 6.209415812s • [SLOW TEST:29.421 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:34:46.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition May 29 13:34:46.775: INFO: Waiting up to 5m0s for pod "var-expansion-fa24eac4-a92e-494d-88b2-3a53ce339adf" in namespace "var-expansion-3010" to be "success or failure" May 29 13:34:46.779: INFO: Pod "var-expansion-fa24eac4-a92e-494d-88b2-3a53ce339adf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.659073ms May 29 13:34:48.885: INFO: Pod "var-expansion-fa24eac4-a92e-494d-88b2-3a53ce339adf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109341929s May 29 13:34:50.890: INFO: Pod "var-expansion-fa24eac4-a92e-494d-88b2-3a53ce339adf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.114050349s STEP: Saw pod success May 29 13:34:50.890: INFO: Pod "var-expansion-fa24eac4-a92e-494d-88b2-3a53ce339adf" satisfied condition "success or failure" May 29 13:34:50.893: INFO: Trying to get logs from node iruya-worker pod var-expansion-fa24eac4-a92e-494d-88b2-3a53ce339adf container dapi-container: STEP: delete the pod May 29 13:34:51.055: INFO: Waiting for pod var-expansion-fa24eac4-a92e-494d-88b2-3a53ce339adf to disappear May 29 13:34:51.098: INFO: Pod var-expansion-fa24eac4-a92e-494d-88b2-3a53ce339adf no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:34:51.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3010" for this suite. May 29 13:34:57.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:34:57.296: INFO: namespace var-expansion-3010 deletion completed in 6.194498706s • [SLOW TEST:10.591 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:34:57.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-915380f9-dda4-4797-85bb-1b9f29e58d98 STEP: Creating a pod to test consume configMaps May 29 13:34:57.386: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9689d745-ce2c-4e37-b0c9-d49ac551c38c" in namespace "projected-4230" to be "success or failure" May 29 13:34:57.431: INFO: Pod "pod-projected-configmaps-9689d745-ce2c-4e37-b0c9-d49ac551c38c": Phase="Pending", Reason="", readiness=false. Elapsed: 44.925826ms May 29 13:34:59.484: INFO: Pod "pod-projected-configmaps-9689d745-ce2c-4e37-b0c9-d49ac551c38c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097753735s May 29 13:35:01.489: INFO: Pod "pod-projected-configmaps-9689d745-ce2c-4e37-b0c9-d49ac551c38c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.102401467s STEP: Saw pod success May 29 13:35:01.489: INFO: Pod "pod-projected-configmaps-9689d745-ce2c-4e37-b0c9-d49ac551c38c" satisfied condition "success or failure" May 29 13:35:01.493: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-9689d745-ce2c-4e37-b0c9-d49ac551c38c container projected-configmap-volume-test: STEP: delete the pod May 29 13:35:01.582: INFO: Waiting for pod pod-projected-configmaps-9689d745-ce2c-4e37-b0c9-d49ac551c38c to disappear May 29 13:35:01.589: INFO: Pod pod-projected-configmaps-9689d745-ce2c-4e37-b0c9-d49ac551c38c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:35:01.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4230" for this suite. May 29 13:35:07.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:35:07.669: INFO: namespace projected-4230 deletion completed in 6.077314231s • [SLOW TEST:10.372 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:35:07.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 29 13:35:07.770: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df37936f-2ae2-4dca-a726-7fa4d0e0eb0d" in namespace "downward-api-8193" to be "success or failure" May 29 13:35:07.801: INFO: Pod "downwardapi-volume-df37936f-2ae2-4dca-a726-7fa4d0e0eb0d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.273441ms May 29 13:35:10.209: INFO: Pod "downwardapi-volume-df37936f-2ae2-4dca-a726-7fa4d0e0eb0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.438862178s May 29 13:35:12.214: INFO: Pod "downwardapi-volume-df37936f-2ae2-4dca-a726-7fa4d0e0eb0d": Phase="Running", Reason="", readiness=true. Elapsed: 4.444063051s May 29 13:35:14.219: INFO: Pod "downwardapi-volume-df37936f-2ae2-4dca-a726-7fa4d0e0eb0d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.448625523s STEP: Saw pod success May 29 13:35:14.219: INFO: Pod "downwardapi-volume-df37936f-2ae2-4dca-a726-7fa4d0e0eb0d" satisfied condition "success or failure" May 29 13:35:14.222: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-df37936f-2ae2-4dca-a726-7fa4d0e0eb0d container client-container: STEP: delete the pod May 29 13:35:14.289: INFO: Waiting for pod downwardapi-volume-df37936f-2ae2-4dca-a726-7fa4d0e0eb0d to disappear May 29 13:35:14.319: INFO: Pod downwardapi-volume-df37936f-2ae2-4dca-a726-7fa4d0e0eb0d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:35:14.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8193" for this suite. May 29 13:35:20.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:35:20.452: INFO: namespace downward-api-8193 deletion completed in 6.129696748s • [SLOW TEST:12.783 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:35:20.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 13:35:20.525: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 29 13:35:20.564: INFO: Pod name sample-pod: Found 0 pods out of 1 May 29 13:35:25.570: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 29 13:35:25.570: INFO: Creating deployment "test-rolling-update-deployment" May 29 13:35:25.575: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 29 13:35:25.586: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 29 13:35:27.601: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 29 13:35:27.603: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356125, loc:(*time.Location)(0x7ead8c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356125, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356125, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356125, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 29 13:35:29.736: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 29 13:35:29.744: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-286,SelfLink:/apis/apps/v1/namespaces/deployment-286/deployments/test-rolling-update-deployment,UID:5e9cdb4c-2428-44fa-97fc-5a68fef28708,ResourceVersion:13550382,Generation:1,CreationTimestamp:2020-05-29 13:35:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-29 13:35:25 +0000 UTC 2020-05-29 13:35:25 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-29 13:35:28 +0000 UTC 2020-05-29 13:35:25 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 29 13:35:29.747: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-286,SelfLink:/apis/apps/v1/namespaces/deployment-286/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:afccafb6-63e6-4e30-860c-cf29ab779be0,ResourceVersion:13550371,Generation:1,CreationTimestamp:2020-05-29 13:35:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 5e9cdb4c-2428-44fa-97fc-5a68fef28708 0xc00294bfe7 0xc00294bfe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 29 13:35:29.747: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 29 13:35:29.747: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-286,SelfLink:/apis/apps/v1/namespaces/deployment-286/replicasets/test-rolling-update-controller,UID:a134c25e-6ae3-47c3-83f2-3e31a97eabf4,ResourceVersion:13550381,Generation:2,CreationTimestamp:2020-05-29 13:35:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 5e9cdb4c-2428-44fa-97fc-5a68fef28708 0xc00294bf17 0xc00294bf18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 29 
13:35:29.750: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-wrgkf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-wrgkf,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-286,SelfLink:/api/v1/namespaces/deployment-286/pods/test-rolling-update-deployment-79f6b9d75c-wrgkf,UID:7b669637-705d-4b68-8dc0-418645969b14,ResourceVersion:13550370,Generation:0,CreationTimestamp:2020-05-29 13:35:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c afccafb6-63e6-4e30-860c-cf29ab779be0 0xc0031e48a7 0xc0031e48a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-d5wd6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d5wd6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-d5wd6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031e4920} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031e4940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:35:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:35:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:35:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 13:35:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.6,StartTime:2020-05-29 13:35:25 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-29 13:35:28 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://544bea4879271b2c2f5b31686b5271b6f87672eebda4bce496c21d7046c8e384}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:35:29.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-286" for this suite. May 29 13:35:35.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:35:35.848: INFO: namespace deployment-286 deletion completed in 6.094343913s • [SLOW TEST:15.395 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:35:35.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 13:35:36.022: INFO: Create a RollingUpdate DaemonSet May 29 13:35:36.026: INFO: Check that daemon pods launch on every node of the cluster May 29 13:35:36.028: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 13:35:36.044: INFO: Number of nodes with available pods: 0 May 29 13:35:36.044: INFO: Node iruya-worker is running more than one daemon pod May 29 13:35:37.049: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 13:35:37.052: INFO: Number of nodes with available pods: 0 May 29 13:35:37.052: INFO: Node iruya-worker is running more than one daemon pod May 29 13:35:38.127: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 13:35:38.130: INFO: Number of nodes with available pods: 0 May 29 13:35:38.130: INFO: Node iruya-worker is running more than one daemon pod May 29 13:35:39.050: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 13:35:39.053: INFO: Number of nodes with available pods: 0 May 29 13:35:39.053: INFO: Node iruya-worker is running more than one daemon pod May 29 13:35:40.051: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 13:35:40.056: INFO: Number of nodes with available pods: 2 May 29 13:35:40.056: INFO: Number of running nodes: 2, number of available pods: 2 May 29 13:35:40.056: INFO: Update the DaemonSet to trigger a rollout May 29 13:35:40.064: INFO: Updating DaemonSet daemon-set May 29 13:35:53.095: INFO: Roll back the DaemonSet before rollout is complete May 
29 13:35:53.102: INFO: Updating DaemonSet daemon-set May 29 13:35:53.102: INFO: Make sure DaemonSet rollback is complete May 29 13:35:53.192: INFO: Wrong image for pod: daemon-set-226l5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 29 13:35:53.192: INFO: Pod daemon-set-226l5 is not available May 29 13:35:53.195: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 13:35:54.200: INFO: Wrong image for pod: daemon-set-226l5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 29 13:35:54.200: INFO: Pod daemon-set-226l5 is not available May 29 13:35:54.204: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 13:35:55.200: INFO: Wrong image for pod: daemon-set-226l5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 29 13:35:55.201: INFO: Pod daemon-set-226l5 is not available May 29 13:35:55.205: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 13:35:56.200: INFO: Pod daemon-set-xn924 is not available May 29 13:35:56.204: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8228, will wait for the garbage collector to delete the pods May 29 13:35:56.269: INFO: Deleting DaemonSet.extensions daemon-set took: 7.183561ms May 29 13:35:56.570: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.270217ms May 29 13:36:01.973: INFO: Number of nodes with available pods: 0 May 29 13:36:01.973: INFO: Number of running nodes: 0, number of available pods: 0 May 29 13:36:01.976: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8228/daemonsets","resourceVersion":"13550547"},"items":null} May 29 13:36:01.979: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8228/pods","resourceVersion":"13550547"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:36:01.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8228" for this suite. 
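
For reference, the RollingUpdate DaemonSet exercised by the rollback test above can be sketched with the same k8s.io/api types this log keeps printing. This is a minimal illustration rather than the e2e framework's own code: the label key and container name are assumed, while the object name, namespace, and the docker.io/library/nginx:1.14-alpine image come from the log (the test then updates the template to the deliberately broken foo:non-existent image and rolls it back before the rollout completes).

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label key
        ds := appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "daemonsets-8228"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                // RollingUpdate is what makes the later image change a rollout
                // that can be undone while still in flight.
                UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                    Type: appsv1.RollingUpdateDaemonSetStrategyType,
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "app", // assumed container name
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(ds, "", "  ")
        fmt.Println(string(out))
    }
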
May 29 13:36:10.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:36:10.111: INFO: namespace daemonsets-8228 deletion completed in 8.12024409s • [SLOW TEST:34.264 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:36:10.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 29 13:36:14.340: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:36:14.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4905" for this suite. 
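
The termination-message test above hinges on FallbackToLogsOnError still reading the termination-message file when the container exits successfully: container logs are only consulted when the container fails and the file is empty. A minimal sketch of such a pod follows; the pod name, image, and command are illustrative, while the /dev/termination-log path and the OK message match what the log asserts.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "termination-message-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "termination-message-container",
                    Image: "docker.io/library/busybox:1.29", // illustrative; any shell image works
                    // Write the message to the termination-log path and exit 0,
                    // so the kubelet reports it in the container's terminated state.
                    Command:                  []string{"/bin/sh", "-c", "printf OK > /dev/termination-log"},
                    TerminationMessagePath:   "/dev/termination-log",
                    TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
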
May 29 13:36:20.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:36:20.847: INFO: namespace container-runtime-4905 deletion completed in 6.105387176s • [SLOW TEST:10.734 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:36:20.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 13:36:24.989: INFO: Waiting up to 5m0s for pod "client-envvars-bde2829a-7802-4e35-bd6a-63483a8d9746" in namespace "pods-4387" to be "success or failure" May 29 13:36:25.004: INFO: Pod "client-envvars-bde2829a-7802-4e35-bd6a-63483a8d9746": Phase="Pending", Reason="", readiness=false. Elapsed: 15.111124ms May 29 13:36:27.008: INFO: Pod "client-envvars-bde2829a-7802-4e35-bd6a-63483a8d9746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018870787s May 29 13:36:29.012: INFO: Pod "client-envvars-bde2829a-7802-4e35-bd6a-63483a8d9746": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022552169s STEP: Saw pod success May 29 13:36:29.012: INFO: Pod "client-envvars-bde2829a-7802-4e35-bd6a-63483a8d9746" satisfied condition "success or failure" May 29 13:36:29.014: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-bde2829a-7802-4e35-bd6a-63483a8d9746 container env3cont: STEP: delete the pod May 29 13:36:29.048: INFO: Waiting for pod client-envvars-bde2829a-7802-4e35-bd6a-63483a8d9746 to disappear May 29 13:36:29.058: INFO: Pod client-envvars-bde2829a-7802-4e35-bd6a-63483a8d9746 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:36:29.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4387" for this suite. 
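
The env-vars test above passes because the kubelet injects docker-link-style environment variables for every Service that already exists when a pod starts: for a Service named redis-master they look like REDIS_MASTER_SERVICE_HOST and REDIS_MASTER_SERVICE_PORT (the name is uppercased with dashes mapped to underscores; the service name this particular test uses is not shown in the log). A client container could dump them with a sketch like this:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Print every injected Service discovery variable visible to this process.
        for _, kv := range os.Environ() {
            if strings.Contains(kv, "_SERVICE_HOST=") || strings.Contains(kv, "_SERVICE_PORT=") {
                fmt.Println(kv)
            }
        }
    }
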
May 29 13:37:09.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:37:09.170: INFO: namespace pods-4387 deletion completed in 40.107829952s • [SLOW TEST:48.323 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:37:09.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod May 29 13:37:13.260: INFO: Pod pod-hostip-16f09b31-9c10-488e-9d12-9cc791c6327a has hostIP: 172.17.0.6 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:37:13.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-99" for this suite. May 29 13:37:35.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:37:35.355: INFO: namespace pods-99 deletion completed in 22.092083838s • [SLOW TEST:26.185 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:37:35.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 29 13:37:35.435: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a798c9e-b28a-4a4f-8195-fcb2cda98f81" in namespace "downward-api-9289" to be "success or failure" May 29 13:37:35.442: INFO: Pod 
"downwardapi-volume-3a798c9e-b28a-4a4f-8195-fcb2cda98f81": Phase="Pending", Reason="", readiness=false. Elapsed: 7.441295ms May 29 13:37:37.447: INFO: Pod "downwardapi-volume-3a798c9e-b28a-4a4f-8195-fcb2cda98f81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011952773s May 29 13:37:39.450: INFO: Pod "downwardapi-volume-3a798c9e-b28a-4a4f-8195-fcb2cda98f81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014977281s STEP: Saw pod success May 29 13:37:39.450: INFO: Pod "downwardapi-volume-3a798c9e-b28a-4a4f-8195-fcb2cda98f81" satisfied condition "success or failure" May 29 13:37:39.452: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3a798c9e-b28a-4a4f-8195-fcb2cda98f81 container client-container: STEP: delete the pod May 29 13:37:39.473: INFO: Waiting for pod downwardapi-volume-3a798c9e-b28a-4a4f-8195-fcb2cda98f81 to disappear May 29 13:37:39.478: INFO: Pod downwardapi-volume-3a798c9e-b28a-4a4f-8195-fcb2cda98f81 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:37:39.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9289" for this suite. May 29 13:37:45.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:37:45.575: INFO: namespace downward-api-9289 deletion completed in 6.093982408s • [SLOW TEST:10.219 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:37:45.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-bb92d458-f322-401f-8e3c-dd2fd94a0398 STEP: Creating a pod to test consume configMaps May 29 13:37:45.696: INFO: Waiting up to 5m0s for pod "pod-configmaps-6bcb2805-5c71-4505-b06c-ea098655db92" in namespace "configmap-7483" to be "success or failure" May 29 13:37:45.742: INFO: Pod "pod-configmaps-6bcb2805-5c71-4505-b06c-ea098655db92": Phase="Pending", Reason="", readiness=false. Elapsed: 45.938526ms May 29 13:37:47.746: INFO: Pod "pod-configmaps-6bcb2805-5c71-4505-b06c-ea098655db92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050422717s May 29 13:37:49.751: INFO: Pod "pod-configmaps-6bcb2805-5c71-4505-b06c-ea098655db92": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.055153185s STEP: Saw pod success May 29 13:37:49.751: INFO: Pod "pod-configmaps-6bcb2805-5c71-4505-b06c-ea098655db92" satisfied condition "success or failure" May 29 13:37:49.754: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-6bcb2805-5c71-4505-b06c-ea098655db92 container configmap-volume-test: STEP: delete the pod May 29 13:37:49.781: INFO: Waiting for pod pod-configmaps-6bcb2805-5c71-4505-b06c-ea098655db92 to disappear May 29 13:37:49.784: INFO: Pod pod-configmaps-6bcb2805-5c71-4505-b06c-ea098655db92 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:37:49.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7483" for this suite. May 29 13:37:55.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:37:55.912: INFO: namespace configmap-7483 deletion completed in 6.097780024s • [SLOW TEST:10.336 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:37:55.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 29 13:37:55.983: INFO: PodSpec: initContainers in spec.initContainers May 29 13:38:48.468: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-81b727d2-4464-4496-9fdc-ea06b7385f12", GenerateName:"", Namespace:"init-container-4110", SelfLink:"/api/v1/namespaces/init-container-4110/pods/pod-init-81b727d2-4464-4496-9fdc-ea06b7385f12", UID:"e028d520-0f23-479c-9ea5-6fc3c27e11a0", ResourceVersion:"13551077", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726356275, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"983805009"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-v4czc", 
VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001d78000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v4czc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v4czc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, 
scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v4czc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0027e2b78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002f2b980), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0027e2c00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0027e2c20)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0027e2c28), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0027e2c2c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356276, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356276, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356276, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726356275, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", 
PodIP:"10.244.1.162", StartTime:(*v1.Time)(0xc00167b060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00167b0c0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000df4850)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://e2a43dc31acf0a4fbc2564217cb322026077476bfc1e12c957e25f643e93fb0c"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00167b0e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00167b0a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:38:48.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4110" for this suite. 
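
The pod dump above pins down why the app container never starts: init containers run strictly in order, and with RestartPolicy Always the kubelet keeps restarting a failing init container indefinitely (the dump shows init1 at RestartCount:3 while init2 and run1 remain unstarted). A stripped-down version of that pod, using the names and images from the dump but omitting the resource limits and the service-account token volume, looks like:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyAlways,
                InitContainers: []corev1.Container{
                    // init1 always fails, so it is restarted forever and
                    // init2 and run1 are never started.
                    {Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
                    {Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
                },
                Containers: []corev1.Container{
                    {Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
                },
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
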
May 29 13:39:10.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:39:10.634: INFO: namespace init-container-4110 deletion completed in 22.133043439s • [SLOW TEST:74.722 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:39:10.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 13:39:10.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9132' May 29 13:39:13.518: INFO: stderr: "" May 29 13:39:13.518: INFO: stdout: "replicationcontroller/redis-master created\n" May 29 13:39:13.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9132' May 29 13:39:13.830: INFO: stderr: "" May 29 13:39:13.830: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. May 29 13:39:14.834: INFO: Selector matched 1 pods for map[app:redis] May 29 13:39:14.834: INFO: Found 0 / 1 May 29 13:39:15.834: INFO: Selector matched 1 pods for map[app:redis] May 29 13:39:15.834: INFO: Found 0 / 1 May 29 13:39:16.835: INFO: Selector matched 1 pods for map[app:redis] May 29 13:39:16.835: INFO: Found 0 / 1 May 29 13:39:17.835: INFO: Selector matched 1 pods for map[app:redis] May 29 13:39:17.836: INFO: Found 1 / 1 May 29 13:39:17.836: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 29 13:39:17.839: INFO: Selector matched 1 pods for map[app:redis] May 29 13:39:17.839: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 29 13:39:17.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-wsrzz --namespace=kubectl-9132' May 29 13:39:17.954: INFO: stderr: "" May 29 13:39:17.954: INFO: stdout: "Name: redis-master-wsrzz\nNamespace: kubectl-9132\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Fri, 29 May 2020 13:39:13 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.13\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://255d0eb2b4f5c51b3e11b31f67fa7d84ddf7f8a7f816d1b4622e1c32da2fa99c\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 29 May 2020 13:39:16 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-8b8gn (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-8b8gn:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-8b8gn\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-9132/redis-master-wsrzz to iruya-worker\n Normal Pulled 3s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n" May 29 13:39:17.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9132' May 29 13:39:18.076: INFO: stderr: "" May 29 13:39:18.077: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9132\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-wsrzz\n" May 29 13:39:18.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9132' May 29 13:39:18.182: INFO: stderr: "" May 29 13:39:18.182: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9132\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.96.57.229\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.13:6379\nSession Affinity: None\nEvents: \n" May 29 13:39:18.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' May 29 13:39:18.309: INFO: stderr: "" May 29 13:39:18.309: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 29 May 2020 13:39:03 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 29 May 2020 13:39:03 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 29 May 2020 13:39:03 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 29 May 2020 13:39:03 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 74d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 74d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 74d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 74d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 74d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 74d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 74d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 29 13:39:18.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9132' May 29 13:39:18.409: INFO: stderr: "" May 29 13:39:18.409: INFO: stdout: "Name: kubectl-9132\nLabels: e2e-framework=kubectl\n e2e-run=26abe1af-5de1-4004-8563-3494e36fb2cd\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:39:18.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9132" for this suite. 
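
The describe steps above are plain kubectl invocations driven by the e2e framework. Reproducing one outside the framework is a short wrapper around os/exec; the binary path, kubeconfig, pod name, and namespace below are copied from this run and would differ in any other cluster.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors one of the logged commands verbatim.
        cmd := exec.Command("/usr/local/bin/kubectl",
            "--kubeconfig=/root/.kube/config",
            "describe", "pod", "redis-master-wsrzz",
            "--namespace=kubectl-9132")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Println("kubectl describe failed:", err)
        }
        fmt.Println(string(out))
    }
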
May 29 13:39:40.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:39:40.557: INFO: namespace kubectl-9132 deletion completed in 22.144001295s • [SLOW TEST:29.923 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:39:40.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 29 13:39:40.654: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59d5458c-d0f9-4765-b31e-2a3bcca27646" in namespace "projected-2345" to be "success or failure" May 29 13:39:40.687: INFO: Pod "downwardapi-volume-59d5458c-d0f9-4765-b31e-2a3bcca27646": Phase="Pending", Reason="", readiness=false. Elapsed: 32.251591ms May 29 13:39:42.691: INFO: Pod "downwardapi-volume-59d5458c-d0f9-4765-b31e-2a3bcca27646": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036307715s May 29 13:39:44.696: INFO: Pod "downwardapi-volume-59d5458c-d0f9-4765-b31e-2a3bcca27646": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041139735s STEP: Saw pod success May 29 13:39:44.696: INFO: Pod "downwardapi-volume-59d5458c-d0f9-4765-b31e-2a3bcca27646" satisfied condition "success or failure" May 29 13:39:44.700: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-59d5458c-d0f9-4765-b31e-2a3bcca27646 container client-container: STEP: delete the pod May 29 13:39:44.716: INFO: Waiting for pod downwardapi-volume-59d5458c-d0f9-4765-b31e-2a3bcca27646 to disappear May 29 13:39:44.721: INFO: Pod downwardapi-volume-59d5458c-d0f9-4765-b31e-2a3bcca27646 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:39:44.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2345" for this suite. 
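
The test above mounts a projected downwardAPI volume whose file is populated from the container's own memory limit through a resourceFieldRef. A minimal sketch of such a pod follows; the image, mount path, and the 64Mi limit are illustrative, while the container name client-container matches the log.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "docker.io/library/busybox:1.29", // illustrative
                    Command: []string{"/bin/sh", "-c", "cat /etc/podinfo/memory_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{
                            corev1.ResourceMemory: resource.MustParse("64Mi"),
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        // The file holds the limit in bytes.
                                        Path: "memory_limit",
                                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "limits.memory",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
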
May 29 13:39:50.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:39:50.822: INFO: namespace projected-2345 deletion completed in 6.098656745s • [SLOW TEST:10.265 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:39:50.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 29 13:39:50.922: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f57c40c-810e-4316-95ad-aad8c66535f8" in namespace "projected-573" to be "success or failure" May 29 13:39:50.931: INFO: Pod "downwardapi-volume-4f57c40c-810e-4316-95ad-aad8c66535f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.918568ms May 29 13:39:52.944: INFO: Pod "downwardapi-volume-4f57c40c-810e-4316-95ad-aad8c66535f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021843122s May 29 13:39:54.948: INFO: Pod "downwardapi-volume-4f57c40c-810e-4316-95ad-aad8c66535f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025887989s STEP: Saw pod success May 29 13:39:54.948: INFO: Pod "downwardapi-volume-4f57c40c-810e-4316-95ad-aad8c66535f8" satisfied condition "success or failure" May 29 13:39:54.950: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4f57c40c-810e-4316-95ad-aad8c66535f8 container client-container: STEP: delete the pod May 29 13:39:54.968: INFO: Waiting for pod downwardapi-volume-4f57c40c-810e-4316-95ad-aad8c66535f8 to disappear May 29 13:39:54.973: INFO: Pod downwardapi-volume-4f57c40c-810e-4316-95ad-aad8c66535f8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:39:54.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-573" for this suite. 
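Annotation: the DefaultMode variant above differs from the previous spec only in what it asserts: every file a projection writes inherits the volume's defaultMode unless an individual item overrides it. A hand-rolled equivalent, with illustrative names and 0400 chosen as a representative read-only mode:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400      # YAML octal; JSON clients must send the decimal 256
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name

With this mode, 'ls -l' reports -r-------- for the projected file.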
May 29 13:40:00.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:40:01.065: INFO: namespace projected-573 deletion completed in 6.088518142s • [SLOW TEST:10.243 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:40:01.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:40:05.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9264" for this suite. 
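Annotation: the kubelet hostAliases test above verifies that entries from pod.spec.hostAliases are appended to the container's /etc/hosts. A minimal reproduction, with illustrative hostnames and IP:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-example   # illustrative
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: busybox
    # The kubelet manages /etc/hosts for pods with a pod IP; the alias lines appear at the end.
    command: ["sh", "-c", "cat /etc/hosts"]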
May 29 13:40:55.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:40:55.283: INFO: namespace kubelet-test-9264 deletion completed in 50.089059125s • [SLOW TEST:54.218 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:40:55.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0529 13:41:35.804338 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 29 13:41:35.804: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:41:35.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1325" for this suite. 
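Annotation: the garbage-collector test above deletes a replication controller while asking the API server to orphan its children (a propagationPolicy of Orphan in the delete options), then waits 30 seconds to confirm the pods survive. To try the same thing by hand, create an RC such as the sketch below and delete it with the era-appropriate flag, 'kubectl delete rc orphan-test-rc --cascade=false' (later kubectl releases spell this --cascade=orphan); all names here are illustrative:

apiVersion: v1
kind: ReplicationController
metadata:
  name: orphan-test-rc
spec:
  replicas: 2
  selector:
    app: orphan-test
  template:
    metadata:
      labels:
        app: orphan-test
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine

After the delete, 'kubectl get pods -l app=orphan-test' should still list both pods, now without owner references.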
May 29 13:41:45.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:41:45.953: INFO: namespace gc-1325 deletion completed in 10.146414061s • [SLOW TEST:50.670 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:41:45.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 29 13:41:46.389: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8bc98f19-7987-41f4-a38c-98f2646c224a" in namespace "downward-api-2570" to be "success or failure" May 29 13:41:46.414: INFO: Pod "downwardapi-volume-8bc98f19-7987-41f4-a38c-98f2646c224a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.341196ms May 29 13:41:48.418: INFO: Pod "downwardapi-volume-8bc98f19-7987-41f4-a38c-98f2646c224a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028700041s May 29 13:41:50.424: INFO: Pod "downwardapi-volume-8bc98f19-7987-41f4-a38c-98f2646c224a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035043067s STEP: Saw pod success May 29 13:41:50.424: INFO: Pod "downwardapi-volume-8bc98f19-7987-41f4-a38c-98f2646c224a" satisfied condition "success or failure" May 29 13:41:50.427: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8bc98f19-7987-41f4-a38c-98f2646c224a container client-container: STEP: delete the pod May 29 13:41:50.460: INFO: Waiting for pod downwardapi-volume-8bc98f19-7987-41f4-a38c-98f2646c224a to disappear May 29 13:41:50.490: INFO: Pod downwardapi-volume-8bc98f19-7987-41f4-a38c-98f2646c224a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:41:50.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2570" for this suite. 
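Annotation: the Downward API volume test above exposes the container's own CPU request as a file, using a divisor to pick the unit. A minimal sketch with illustrative values; with a requests.cpu of 250m and a 1m divisor, the file reads 250:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: "250m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:        # plain downwardAPI volume; the projected form seen earlier works the same way
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m   # report millicores; the default divisor of 1 rounds up to whole cores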
May 29 13:41:56.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:41:56.593: INFO: namespace downward-api-2570 deletion completed in 6.098684282s • [SLOW TEST:10.640 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:41:56.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 29 13:41:56.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-248' May 29 13:41:56.752: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 29 13:41:56.752: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller May 29 13:41:56.788: INFO: scanned /root for discovery docs: May 29 13:41:56.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-248' May 29 13:42:13.697: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 29 13:42:13.697: INFO: stdout: "Created e2e-test-nginx-rc-ddd1871588a1a751c02f57e3764fbdb6\nScaling up e2e-test-nginx-rc-ddd1871588a1a751c02f57e3764fbdb6 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ddd1871588a1a751c02f57e3764fbdb6 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ddd1871588a1a751c02f57e3764fbdb6 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 29 13:42:13.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-248' May 29 13:42:13.859: INFO: stderr: "" May 29 13:42:13.859: INFO: stdout: "e2e-test-nginx-rc-ddd1871588a1a751c02f57e3764fbdb6-fcn9j " May 29 13:42:13.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ddd1871588a1a751c02f57e3764fbdb6-fcn9j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-248' May 29 13:42:13.958: INFO: stderr: "" May 29 13:42:13.958: INFO: stdout: "true" May 29 13:42:13.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ddd1871588a1a751c02f57e3764fbdb6-fcn9j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-248' May 29 13:42:14.052: INFO: stderr: "" May 29 13:42:14.052: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 29 13:42:14.052: INFO: e2e-test-nginx-rc-ddd1871588a1a751c02f57e3764fbdb6-fcn9j is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 May 29 13:42:14.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-248' May 29 13:42:14.165: INFO: stderr: "" May 29 13:42:14.165: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:42:14.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-248" for this suite.
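Annotation: the 'kubectl run --generator=run/v1' invocation above creates a bare ReplicationController, roughly equivalent to the manifest below (reconstructed for illustration, not dumped from this run; the run generators label everything with run=<name> and reuse the name for the container):

apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
  labels:
    run: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine

As the deprecation warnings in the output note, both this generator and 'kubectl rolling-update' were on their way out; Deployments managed with 'kubectl rollout' are the replacement.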
May 29 13:42:36.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:42:36.940: INFO: namespace kubectl-248 deletion completed in 22.771756968s • [SLOW TEST:40.346 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:42:36.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 29 13:42:37.014: INFO: Waiting up to 5m0s for pod "pod-d462f806-da47-4cf1-bb95-a0ee84c9160b" in namespace "emptydir-3186" to be "success or failure" May 29 13:42:37.019: INFO: Pod "pod-d462f806-da47-4cf1-bb95-a0ee84c9160b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178894ms May 29 13:42:39.023: INFO: Pod "pod-d462f806-da47-4cf1-bb95-a0ee84c9160b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008574467s May 29 13:42:41.027: INFO: Pod "pod-d462f806-da47-4cf1-bb95-a0ee84c9160b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012004266s STEP: Saw pod success May 29 13:42:41.027: INFO: Pod "pod-d462f806-da47-4cf1-bb95-a0ee84c9160b" satisfied condition "success or failure" May 29 13:42:41.029: INFO: Trying to get logs from node iruya-worker2 pod pod-d462f806-da47-4cf1-bb95-a0ee84c9160b container test-container: STEP: delete the pod May 29 13:42:41.054: INFO: Waiting for pod pod-d462f806-da47-4cf1-bb95-a0ee84c9160b to disappear May 29 13:42:41.066: INFO: Pod pod-d462f806-da47-4cf1-bb95-a0ee84c9160b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:42:41.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3186" for this suite. 
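Annotation: the emptyDir tests in this stretch of the log vary three axes: the user (root or not), the file mode asserted, and the medium (node disk by default, tmpfs with medium: Memory). The suite drives them with its own mounttest image; a hand-rolled approximation of the (non-root,0666,default) case with busybox, UID and paths illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example   # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001             # any non-root UID; the kubelet creates emptyDir world-writable
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                # for the (non-root,0666,tmpfs) sibling below: emptyDir: {medium: Memory}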
May 29 13:42:47.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:42:47.199: INFO: namespace emptydir-3186 deletion completed in 6.129950581s • [SLOW TEST:10.259 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:42:47.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 29 13:42:51.309: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 29 13:42:56.408: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:42:56.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8492" for this suite. 
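Annotation: the Delete Grace Period test above submits a pod, deletes it gracefully, and then polls until the kubelet stops reporting it, which is the "no pod exists with the name we were looking for" line. The window between SIGTERM and SIGKILL comes from the pod's terminationGracePeriodSeconds (30 by default) and can be overridden per delete. Sketch, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: grace-period-example
spec:
  terminationGracePeriodSeconds: 30   # the default, spelled out for clarity
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine

A graceful delete is then 'kubectl delete pod grace-period-example --grace-period=30', in contrast to the '--grace-period=0 --force' form that appears later in this log.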
May 29 13:43:02.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:43:02.522: INFO: namespace pods-8492 deletion completed in 6.104916549s • [SLOW TEST:15.323 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:43:02.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 29 13:43:02.611: INFO: Waiting up to 5m0s for pod "pod-885da05a-8d1e-477c-bd38-6e5e613a2031" in namespace "emptydir-3555" to be "success or failure" May 29 13:43:02.615: INFO: Pod "pod-885da05a-8d1e-477c-bd38-6e5e613a2031": Phase="Pending", Reason="", readiness=false. Elapsed: 3.398126ms May 29 13:43:04.695: INFO: Pod "pod-885da05a-8d1e-477c-bd38-6e5e613a2031": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083676937s May 29 13:43:06.700: INFO: Pod "pod-885da05a-8d1e-477c-bd38-6e5e613a2031": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088240347s STEP: Saw pod success May 29 13:43:06.700: INFO: Pod "pod-885da05a-8d1e-477c-bd38-6e5e613a2031" satisfied condition "success or failure" May 29 13:43:06.703: INFO: Trying to get logs from node iruya-worker2 pod pod-885da05a-8d1e-477c-bd38-6e5e613a2031 container test-container: STEP: delete the pod May 29 13:43:06.724: INFO: Waiting for pod pod-885da05a-8d1e-477c-bd38-6e5e613a2031 to disappear May 29 13:43:06.744: INFO: Pod pod-885da05a-8d1e-477c-bd38-6e5e613a2031 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:43:06.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3555" for this suite. 
May 29 13:43:12.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:43:12.838: INFO: namespace emptydir-3555 deletion completed in 6.091291661s • [SLOW TEST:10.316 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:43:12.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 29 13:43:12.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-868' May 29 13:43:13.077: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 29 13:43:13.077: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 May 29 13:43:17.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-868' May 29 13:43:17.268: INFO: stderr: "" May 29 13:43:17.268: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:43:17.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-868" for this suite. 
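Annotation: as with the run/v1 generator earlier, 'kubectl run --generator=deployment/apps.v1' above is deprecated; the Deployment it generates is roughly the following (reconstructed for illustration, not dumped from this run):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine

Note the delete response reads "deployment.extensions ... deleted": in v1.15 the extensions group still served deployments, so kubectl reports that group even though the object was created via apps/v1.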
May 29 13:43:39.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:43:39.369: INFO: namespace kubectl-868 deletion completed in 22.097812438s • [SLOW TEST:26.531 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:43:39.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-1890 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1890 to expose endpoints map[] May 29 13:43:39.506: INFO: Get endpoints failed (13.512788ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 29 13:43:40.510: INFO: successfully validated that service multi-endpoint-test in namespace services-1890 exposes endpoints map[] (1.017682213s elapsed) STEP: Creating pod pod1 in namespace services-1890 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1890 to expose endpoints map[pod1:[100]] May 29 13:43:44.562: INFO: successfully validated that service multi-endpoint-test in namespace services-1890 exposes endpoints map[pod1:[100]] (4.044145727s elapsed) STEP: Creating pod pod2 in namespace services-1890 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1890 to expose endpoints map[pod1:[100] pod2:[101]] May 29 13:43:47.628: INFO: successfully validated that service multi-endpoint-test in namespace services-1890 exposes endpoints map[pod1:[100] pod2:[101]] (3.062170533s elapsed) STEP: Deleting pod pod1 in namespace services-1890 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1890 to expose endpoints map[pod2:[101]] May 29 13:43:48.656: INFO: successfully validated that service multi-endpoint-test in namespace services-1890 exposes endpoints map[pod2:[101]] (1.023021791s elapsed) STEP: Deleting pod pod2 in namespace services-1890 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1890 to expose endpoints map[] May 29 13:43:49.695: INFO: successfully validated that service multi-endpoint-test in namespace services-1890 exposes endpoints map[] (1.03312811s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:43:49.885: INFO: Waiting up to 
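Annotation: the multiport Services test above creates one Service with two named ports and then drives the endpoints map by adding and removing pods; the [100] and [101] in the log are the two container target ports. A minimal two-port Service sketch (selector and port names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test
  ports:
  - name: portname1
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101

Each matching pod that serves one of the target ports contributes an endpoint under the corresponding port, which is exactly the map[pod1:[100] pod2:[101]] progression the log records.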
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1890" for this suite. May 29 13:43:55.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:43:55.976: INFO: namespace services-1890 deletion completed in 6.087208751s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:16.606 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:43:55.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-1376/configmap-test-3ee5a2a5-4dbb-49d1-8828-bf9a0f2ee0f4 STEP: Creating a pod to test consume configMaps May 29 13:43:56.104: INFO: Waiting up to 5m0s for pod "pod-configmaps-8e5a68b0-ee05-4f15-8bf8-48c87c45478e" in namespace "configmap-1376" to be "success or failure" May 29 13:43:56.107: INFO: Pod "pod-configmaps-8e5a68b0-ee05-4f15-8bf8-48c87c45478e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.169644ms May 29 13:43:58.119: INFO: Pod "pod-configmaps-8e5a68b0-ee05-4f15-8bf8-48c87c45478e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01547192s May 29 13:44:00.134: INFO: Pod "pod-configmaps-8e5a68b0-ee05-4f15-8bf8-48c87c45478e": Phase="Running", Reason="", readiness=true. Elapsed: 4.030040718s May 29 13:44:02.138: INFO: Pod "pod-configmaps-8e5a68b0-ee05-4f15-8bf8-48c87c45478e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034157479s STEP: Saw pod success May 29 13:44:02.138: INFO: Pod "pod-configmaps-8e5a68b0-ee05-4f15-8bf8-48c87c45478e" satisfied condition "success or failure" May 29 13:44:02.142: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-8e5a68b0-ee05-4f15-8bf8-48c87c45478e container env-test: STEP: delete the pod May 29 13:44:02.156: INFO: Waiting for pod pod-configmaps-8e5a68b0-ee05-4f15-8bf8-48c87c45478e to disappear May 29 13:44:02.175: INFO: Pod pod-configmaps-8e5a68b0-ee05-4f15-8bf8-48c87c45478e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:44:02.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1376" for this suite. 
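Annotation: the ConfigMap environment test above injects a key through env.valueFrom.configMapKeyRef and asserts on the container's printed environment. Sketch, with illustrative names and key:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1

The output should contain CONFIG_DATA_1=value-1; envFrom.configMapRef is the bulk alternative when every key should become a variable.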
May 29 13:44:08.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:44:08.277: INFO: namespace configmap-1376 deletion completed in 6.099163764s • [SLOW TEST:12.300 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:44:08.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-165b19be-469b-4cda-a1c4-7205eccb806d in namespace container-probe-3556 May 29 13:44:12.376: INFO: Started pod liveness-165b19be-469b-4cda-a1c4-7205eccb806d in namespace container-probe-3556 STEP: checking the pod's current state and verifying that restartCount is present May 29 13:44:12.379: INFO: Initial restart count of pod liveness-165b19be-469b-4cda-a1c4-7205eccb806d is 0 May 29 13:44:34.430: INFO: Restart count of pod container-probe-3556/liveness-165b19be-469b-4cda-a1c4-7205eccb806d is now 1 (22.050877656s elapsed) May 29 13:44:54.471: INFO: Restart count of pod container-probe-3556/liveness-165b19be-469b-4cda-a1c4-7205eccb806d is now 2 (42.0928104s elapsed) May 29 13:45:14.528: INFO: Restart count of pod container-probe-3556/liveness-165b19be-469b-4cda-a1c4-7205eccb806d is now 3 (1m2.149411516s elapsed) May 29 13:45:34.659: INFO: Restart count of pod container-probe-3556/liveness-165b19be-469b-4cda-a1c4-7205eccb806d is now 4 (1m22.280796096s elapsed) May 29 13:46:35.117: INFO: Restart count of pod container-probe-3556/liveness-165b19be-469b-4cda-a1c4-7205eccb806d is now 5 (2m22.73871246s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:46:35.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3556" for this suite. 
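Annotation: the restart-count test above relies on a liveness probe that starts failing shortly after each (re)start, so restartCount climbs 1, 2, 3, ... as the log shows, with the kubelet's restart back-off stretching the gaps (note the jump from roughly 20-second intervals to a 60-second gap before the fifth restart). A self-defeating busybox pod reproduces the pattern; this is an approximation, since the suite uses its own probe image:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-restarts-example   # illustrative
spec:
  containers:
  - name: liveness
    image: busybox
    # Healthy for 10s after every start, then the probed file disappears.
    command: ["sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1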
May 29 13:46:41.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:46:41.211: INFO: namespace container-probe-3556 deletion completed in 6.076523914s • [SLOW TEST:152.933 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:46:41.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-401c2c4f-6cb0-409d-851d-970ccc8713cb in namespace container-probe-2749 May 29 13:46:45.311: INFO: Started pod busybox-401c2c4f-6cb0-409d-851d-970ccc8713cb in namespace container-probe-2749 STEP: checking the pod's current state and verifying that restartCount is present May 29 13:46:45.314: INFO: Initial restart count of pod busybox-401c2c4f-6cb0-409d-851d-970ccc8713cb is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:50:45.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2749" for this suite. 
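Annotation: the complementary test above observes the pod for about four minutes (13:46:45 start to 13:50:45 teardown in the timestamps) and asserts restartCount stays 0, because the probed file never goes away. Same shape as the previous sketch, minus the self-sabotage:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-stable-example   # illustrative
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5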
May 29 13:50:51.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:50:52.028: INFO: namespace container-probe-2749 deletion completed in 6.10591141s • [SLOW TEST:250.817 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:50:52.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 29 13:50:52.116: INFO: Waiting up to 5m0s for pod "downward-api-c72f0771-0684-43b2-acdd-a04e14b8d08e" in namespace "downward-api-4259" to be "success or failure" May 29 13:50:52.119: INFO: Pod "downward-api-c72f0771-0684-43b2-acdd-a04e14b8d08e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.010649ms May 29 13:50:54.123: INFO: Pod "downward-api-c72f0771-0684-43b2-acdd-a04e14b8d08e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007102267s May 29 13:50:56.127: INFO: Pod "downward-api-c72f0771-0684-43b2-acdd-a04e14b8d08e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011193323s STEP: Saw pod success May 29 13:50:56.128: INFO: Pod "downward-api-c72f0771-0684-43b2-acdd-a04e14b8d08e" satisfied condition "success or failure" May 29 13:50:56.131: INFO: Trying to get logs from node iruya-worker pod downward-api-c72f0771-0684-43b2-acdd-a04e14b8d08e container dapi-container: STEP: delete the pod May 29 13:50:56.150: INFO: Waiting for pod downward-api-c72f0771-0684-43b2-acdd-a04e14b8d08e to disappear May 29 13:50:56.155: INFO: Pod downward-api-c72f0771-0684-43b2-acdd-a04e14b8d08e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:50:56.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4259" for this suite. 
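Annotation: the Downward API env-var test above is the environment-variable counterpart of the volume-based tests earlier: the same resourceFieldRef selectors, delivered through env.valueFrom instead of a mounted file. Sketch, with illustrative names and values:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | sort"]
    resources:
      requests:
        cpu: "250m"
        memory: "32Mi"
      limits:
        cpu: "500m"
        memory: "64Mi"
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container   # optional here; defaults to the enclosing container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory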
May 29 13:51:02.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:51:02.261: INFO: namespace downward-api-4259 deletion completed in 6.101714991s • [SLOW TEST:10.232 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:51:02.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults May 29 13:51:02.325: INFO: Waiting up to 5m0s for pod "client-containers-ffcf4c68-6c4f-4c5b-a63d-77f46ad3184d" in namespace "containers-2215" to be "success or failure" May 29 13:51:02.349: INFO: Pod "client-containers-ffcf4c68-6c4f-4c5b-a63d-77f46ad3184d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.382782ms May 29 13:51:04.353: INFO: Pod "client-containers-ffcf4c68-6c4f-4c5b-a63d-77f46ad3184d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027433039s May 29 13:51:06.356: INFO: Pod "client-containers-ffcf4c68-6c4f-4c5b-a63d-77f46ad3184d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031129231s STEP: Saw pod success May 29 13:51:06.356: INFO: Pod "client-containers-ffcf4c68-6c4f-4c5b-a63d-77f46ad3184d" satisfied condition "success or failure" May 29 13:51:06.359: INFO: Trying to get logs from node iruya-worker2 pod client-containers-ffcf4c68-6c4f-4c5b-a63d-77f46ad3184d container test-container: STEP: delete the pod May 29 13:51:06.410: INFO: Waiting for pod client-containers-ffcf4c68-6c4f-4c5b-a63d-77f46ad3184d to disappear May 29 13:51:06.431: INFO: Pod client-containers-ffcf4c68-6c4f-4c5b-a63d-77f46ad3184d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:51:06.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2215" for this suite. 
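Annotation: the Docker Containers test above checks the contract that when pod.spec leaves command and args unset, the image's own ENTRYPOINT and CMD run unchanged (command overrides ENTRYPOINT, args overrides CMD). The pod is therefore as bare as it gets; the image here is illustrative, while the suite uses one whose default entrypoint reports how it was invoked:

apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-example
spec:
  containers:
  - name: test-container
    image: docker.io/library/nginx:1.14-alpine   # no command:, no args: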
May 29 13:51:12.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:51:12.540: INFO: namespace containers-2215 deletion completed in 6.104838564s • [SLOW TEST:10.279 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:51:12.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 29 13:51:12.607: INFO: Waiting up to 5m0s for pod "pod-852ba50e-4bf6-4e81-ad62-78643fef11a1" in namespace "emptydir-8206" to be "success or failure" May 29 13:51:12.611: INFO: Pod "pod-852ba50e-4bf6-4e81-ad62-78643fef11a1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.156525ms May 29 13:51:14.614: INFO: Pod "pod-852ba50e-4bf6-4e81-ad62-78643fef11a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006758096s May 29 13:51:16.618: INFO: Pod "pod-852ba50e-4bf6-4e81-ad62-78643fef11a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01088621s STEP: Saw pod success May 29 13:51:16.618: INFO: Pod "pod-852ba50e-4bf6-4e81-ad62-78643fef11a1" satisfied condition "success or failure" May 29 13:51:16.621: INFO: Trying to get logs from node iruya-worker pod pod-852ba50e-4bf6-4e81-ad62-78643fef11a1 container test-container: STEP: delete the pod May 29 13:51:16.666: INFO: Waiting for pod pod-852ba50e-4bf6-4e81-ad62-78643fef11a1 to disappear May 29 13:51:16.682: INFO: Pod pod-852ba50e-4bf6-4e81-ad62-78643fef11a1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:51:16.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8206" for this suite. 
May 29 13:51:22.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:51:22.844: INFO: namespace emptydir-8206 deletion completed in 6.158371368s • [SLOW TEST:10.304 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:51:22.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-8ed3a8fb-91b8-481d-b9cc-ee072ef04dcf STEP: Creating a pod to test consume configMaps May 29 13:51:22.976: INFO: Waiting up to 5m0s for pod "pod-configmaps-c4ff6a53-9d3c-4b3f-94fa-f2e33f28160a" in namespace "configmap-5412" to be "success or failure" May 29 13:51:23.008: INFO: Pod "pod-configmaps-c4ff6a53-9d3c-4b3f-94fa-f2e33f28160a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.516563ms May 29 13:51:25.012: INFO: Pod "pod-configmaps-c4ff6a53-9d3c-4b3f-94fa-f2e33f28160a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035799061s May 29 13:51:27.016: INFO: Pod "pod-configmaps-c4ff6a53-9d3c-4b3f-94fa-f2e33f28160a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039754786s STEP: Saw pod success May 29 13:51:27.016: INFO: Pod "pod-configmaps-c4ff6a53-9d3c-4b3f-94fa-f2e33f28160a" satisfied condition "success or failure" May 29 13:51:27.019: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-c4ff6a53-9d3c-4b3f-94fa-f2e33f28160a container configmap-volume-test: STEP: delete the pod May 29 13:51:27.064: INFO: Waiting for pod pod-configmaps-c4ff6a53-9d3c-4b3f-94fa-f2e33f28160a to disappear May 29 13:51:27.110: INFO: Pod pod-configmaps-c4ff6a53-9d3c-4b3f-94fa-f2e33f28160a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:51:27.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5412" for this suite. 
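Annotation: the test above mounts the same ConfigMap into one pod at two different paths and expects both mounts to serve the same data. Sketch, with illustrative names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-volume-test
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-volume-test
  - name: configmap-volume-2
    configMap:
      name: configmap-volume-test

The projected-configMap siblings that follow (live updates, per-item mappings, non-root) exercise the same data through the projected volume type instead.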
May 29 13:51:33.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:51:33.201: INFO: namespace configmap-5412 deletion completed in 6.086831463s • [SLOW TEST:10.357 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:51:33.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-64a3f831-20cc-43fc-b054-8b19249ab823 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-64a3f831-20cc-43fc-b054-8b19249ab823 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:51:41.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-528" for this suite. 
May 29 13:51:55.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:51:55.520: INFO: namespace projected-528 deletion completed in 14.100577584s • [SLOW TEST:22.318 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:51:55.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-7de4cc11-6738-411b-9123-dfe189d7453b STEP: Creating a pod to test consume configMaps May 29 13:51:55.602: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8bd10a06-85b5-4920-8758-89d9353a5c4e" in namespace "projected-9593" to be "success or failure" May 29 13:51:55.605: INFO: Pod "pod-projected-configmaps-8bd10a06-85b5-4920-8758-89d9353a5c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.393224ms May 29 13:51:57.609: INFO: Pod "pod-projected-configmaps-8bd10a06-85b5-4920-8758-89d9353a5c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007462188s May 29 13:51:59.614: INFO: Pod "pod-projected-configmaps-8bd10a06-85b5-4920-8758-89d9353a5c4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012031437s STEP: Saw pod success May 29 13:51:59.614: INFO: Pod "pod-projected-configmaps-8bd10a06-85b5-4920-8758-89d9353a5c4e" satisfied condition "success or failure" May 29 13:51:59.617: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-8bd10a06-85b5-4920-8758-89d9353a5c4e container projected-configmap-volume-test: STEP: delete the pod May 29 13:51:59.637: INFO: Waiting for pod pod-projected-configmaps-8bd10a06-85b5-4920-8758-89d9353a5c4e to disappear May 29 13:51:59.641: INFO: Pod pod-projected-configmaps-8bd10a06-85b5-4920-8758-89d9353a5c4e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:51:59.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9593" for this suite. 
May 29 13:52:05.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:52:05.751: INFO: namespace projected-9593 deletion completed in 6.107194209s • [SLOW TEST:10.231 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:52:05.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-209cef1e-5863-4ca0-a0ff-cc0550c54e50 STEP: Creating a pod to test consume secrets May 29 13:52:05.859: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8c9b88ac-cbfe-4853-afb1-b4ffc42ea9b0" in namespace "projected-4161" to be "success or failure" May 29 13:52:05.907: INFO: Pod "pod-projected-secrets-8c9b88ac-cbfe-4853-afb1-b4ffc42ea9b0": Phase="Pending", Reason="", readiness=false. Elapsed: 48.326277ms May 29 13:52:07.938: INFO: Pod "pod-projected-secrets-8c9b88ac-cbfe-4853-afb1-b4ffc42ea9b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078986345s May 29 13:52:09.942: INFO: Pod "pod-projected-secrets-8c9b88ac-cbfe-4853-afb1-b4ffc42ea9b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08341202s STEP: Saw pod success May 29 13:52:09.942: INFO: Pod "pod-projected-secrets-8c9b88ac-cbfe-4853-afb1-b4ffc42ea9b0" satisfied condition "success or failure" May 29 13:52:09.946: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-8c9b88ac-cbfe-4853-afb1-b4ffc42ea9b0 container projected-secret-volume-test: STEP: delete the pod May 29 13:52:09.967: INFO: Waiting for pod pod-projected-secrets-8c9b88ac-cbfe-4853-afb1-b4ffc42ea9b0 to disappear May 29 13:52:09.982: INFO: Pod pod-projected-secrets-8c9b88ac-cbfe-4853-afb1-b4ffc42ea9b0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:52:09.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4161" for this suite. 
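Annotation: the projected-secret test above combines two features: items remaps a secret key to a new file path, and a per-item mode sets that file's permissions individually, overriding any defaultMode. Sketch, with illustrative names and mode:

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test
type: Opaque
data:
  data-1: dmFsdWUtMQ==       # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400       # YAML octal; -r-------- on the remapped file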
May 29 13:52:15.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:52:16.067: INFO: namespace projected-4161 deletion completed in 6.082018031s • [SLOW TEST:10.316 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:52:16.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod May 29 13:52:16.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7093' May 29 13:52:19.404: INFO: stderr: "" May 29 13:52:19.404: INFO: stdout: "pod/pause created\n" May 29 13:52:19.404: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 29 13:52:19.404: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7093" to be "running and ready" May 29 13:52:19.421: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 17.098639ms May 29 13:52:21.426: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021516161s May 29 13:52:23.430: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.0261182s May 29 13:52:23.430: INFO: Pod "pause" satisfied condition "running and ready" May 29 13:52:23.430: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod May 29 13:52:23.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7093' May 29 13:52:23.530: INFO: stderr: "" May 29 13:52:23.530: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 29 13:52:23.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7093' May 29 13:52:23.638: INFO: stderr: "" May 29 13:52:23.638: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 29 13:52:23.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7093' May 29 13:52:23.736: INFO: stderr: "" May 29 13:52:23.736: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 29 13:52:23.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7093' May 29 13:52:23.826: INFO: stderr: "" May 29 13:52:23.826: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources May 29 13:52:23.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7093' May 29 13:52:23.946: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 29 13:52:23.946: INFO: stdout: "pod \"pause\" force deleted\n" May 29 13:52:23.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7093' May 29 13:52:24.039: INFO: stderr: "No resources found.\n" May 29 13:52:24.039: INFO: stdout: "" May 29 13:52:24.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7093 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 29 13:52:24.130: INFO: stderr: "" May 29 13:52:24.130: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:52:24.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7093" for this suite. 
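Stripped of the harness, the label round-trip above reduces to four commands: key=value sets the label, -L surfaces it as an extra output column, and the trailing-dash form key- deletes it:

    kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-7093
    kubectl get pod pause -L testing-label --namespace=kubectl-7093    # TESTING-LABEL column carries the value
    kubectl label pods pause testing-label- --namespace=kubectl-7093   # trailing dash removes the label
    kubectl get pod pause -L testing-label --namespace=kubectl-7093    # column header remains, value is gone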
May 29 13:52:30.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:52:30.399: INFO: namespace kubectl-7093 deletion completed in 6.266047451s • [SLOW TEST:14.332 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:52:30.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4384 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4384 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4384 May 29 13:52:30.519: INFO: Found 0 stateful pods, waiting for 1 May 29 13:52:40.524: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 29 13:52:40.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4384 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 29 13:52:40.834: INFO: stderr: "I0529 13:52:40.647373 2165 log.go:172] (0xc000a44630) (0xc0005f2aa0) Create stream\nI0529 13:52:40.647430 2165 log.go:172] (0xc000a44630) (0xc0005f2aa0) Stream added, broadcasting: 1\nI0529 13:52:40.650290 2165 log.go:172] (0xc000a44630) Reply frame received for 1\nI0529 13:52:40.650789 2165 log.go:172] (0xc000a44630) (0xc00099e000) Create stream\nI0529 13:52:40.650886 2165 log.go:172] (0xc000a44630) (0xc00099e000) Stream added, broadcasting: 3\nI0529 13:52:40.652586 2165 log.go:172] (0xc000a44630) Reply frame received for 3\nI0529 13:52:40.652642 2165 log.go:172] (0xc000a44630) (0xc00099e0a0) Create stream\nI0529 13:52:40.652671 2165 log.go:172] (0xc000a44630) (0xc00099e0a0) Stream added, broadcasting: 5\nI0529 13:52:40.653865 2165 log.go:172] (0xc000a44630) Reply frame received for 5\nI0529 13:52:40.759786 2165 log.go:172] (0xc000a44630) Data frame received for 5\nI0529 13:52:40.759834 2165 log.go:172] (0xc00099e0a0) (5) Data 
frame handling\nI0529 13:52:40.759860 2165 log.go:172] (0xc00099e0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0529 13:52:40.824288 2165 log.go:172] (0xc000a44630) Data frame received for 5\nI0529 13:52:40.824358 2165 log.go:172] (0xc00099e0a0) (5) Data frame handling\nI0529 13:52:40.824388 2165 log.go:172] (0xc000a44630) Data frame received for 3\nI0529 13:52:40.824409 2165 log.go:172] (0xc00099e000) (3) Data frame handling\nI0529 13:52:40.824428 2165 log.go:172] (0xc00099e000) (3) Data frame sent\nI0529 13:52:40.824464 2165 log.go:172] (0xc000a44630) Data frame received for 3\nI0529 13:52:40.824475 2165 log.go:172] (0xc00099e000) (3) Data frame handling\nI0529 13:52:40.826812 2165 log.go:172] (0xc000a44630) Data frame received for 1\nI0529 13:52:40.826839 2165 log.go:172] (0xc0005f2aa0) (1) Data frame handling\nI0529 13:52:40.826852 2165 log.go:172] (0xc0005f2aa0) (1) Data frame sent\nI0529 13:52:40.827282 2165 log.go:172] (0xc000a44630) (0xc0005f2aa0) Stream removed, broadcasting: 1\nI0529 13:52:40.827537 2165 log.go:172] (0xc000a44630) (0xc0005f2aa0) Stream removed, broadcasting: 1\nI0529 13:52:40.827558 2165 log.go:172] (0xc000a44630) (0xc00099e000) Stream removed, broadcasting: 3\nI0529 13:52:40.827568 2165 log.go:172] (0xc000a44630) (0xc00099e0a0) Stream removed, broadcasting: 5\n" May 29 13:52:40.834: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 29 13:52:40.834: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 29 13:52:40.838: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 29 13:52:50.842: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 29 13:52:50.843: INFO: Waiting for statefulset status.replicas updated to 0 May 29 13:52:50.858: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999641s May 29 13:52:51.862: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992485243s May 29 13:52:52.866: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.988563614s May 29 13:52:53.871: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.98461926s May 29 13:52:54.875: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.979762389s May 29 13:52:55.881: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.975290595s May 29 13:52:56.886: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.969364946s May 29 13:52:57.913: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.964072344s May 29 13:52:58.917: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.937492171s May 29 13:52:59.922: INFO: Verifying statefulset ss doesn't scale past 1 for another 933.540828ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4384 May 29 13:53:00.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4384 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 13:53:01.151: INFO: stderr: "I0529 13:53:01.055296 2185 log.go:172] (0xc000a82630) (0xc00068ed20) Create stream\nI0529 13:53:01.055346 2185 log.go:172] (0xc000a82630) (0xc00068ed20) Stream added, broadcasting: 1\nI0529 13:53:01.058907 2185 log.go:172] (0xc000a82630) Reply frame received for 1\nI0529 13:53:01.058948 2185 log.go:172] (0xc000a82630) 
(0xc00068e3c0) Create stream\nI0529 13:53:01.058960 2185 log.go:172] (0xc000a82630) (0xc00068e3c0) Stream added, broadcasting: 3\nI0529 13:53:01.059981 2185 log.go:172] (0xc000a82630) Reply frame received for 3\nI0529 13:53:01.060026 2185 log.go:172] (0xc000a82630) (0xc000016000) Create stream\nI0529 13:53:01.060039 2185 log.go:172] (0xc000a82630) (0xc000016000) Stream added, broadcasting: 5\nI0529 13:53:01.060974 2185 log.go:172] (0xc000a82630) Reply frame received for 5\nI0529 13:53:01.143216 2185 log.go:172] (0xc000a82630) Data frame received for 3\nI0529 13:53:01.143365 2185 log.go:172] (0xc00068e3c0) (3) Data frame handling\nI0529 13:53:01.143436 2185 log.go:172] (0xc00068e3c0) (3) Data frame sent\nI0529 13:53:01.143592 2185 log.go:172] (0xc000a82630) Data frame received for 3\nI0529 13:53:01.143614 2185 log.go:172] (0xc00068e3c0) (3) Data frame handling\nI0529 13:53:01.143634 2185 log.go:172] (0xc000a82630) Data frame received for 5\nI0529 13:53:01.143661 2185 log.go:172] (0xc000016000) (5) Data frame handling\nI0529 13:53:01.143683 2185 log.go:172] (0xc000016000) (5) Data frame sent\nI0529 13:53:01.143693 2185 log.go:172] (0xc000a82630) Data frame received for 5\nI0529 13:53:01.143700 2185 log.go:172] (0xc000016000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0529 13:53:01.144883 2185 log.go:172] (0xc000a82630) Data frame received for 1\nI0529 13:53:01.144912 2185 log.go:172] (0xc00068ed20) (1) Data frame handling\nI0529 13:53:01.144925 2185 log.go:172] (0xc00068ed20) (1) Data frame sent\nI0529 13:53:01.144938 2185 log.go:172] (0xc000a82630) (0xc00068ed20) Stream removed, broadcasting: 1\nI0529 13:53:01.145336 2185 log.go:172] (0xc000a82630) (0xc00068ed20) Stream removed, broadcasting: 1\nI0529 13:53:01.145356 2185 log.go:172] (0xc000a82630) (0xc00068e3c0) Stream removed, broadcasting: 3\nI0529 13:53:01.145363 2185 log.go:172] (0xc000a82630) (0xc000016000) Stream removed, broadcasting: 5\n" May 29 13:53:01.151: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 29 13:53:01.151: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 29 13:53:01.155: INFO: Found 1 stateful pods, waiting for 3 May 29 13:53:11.160: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 29 13:53:11.160: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 29 13:53:11.160: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 29 13:53:11.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4384 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 29 13:53:11.401: INFO: stderr: "I0529 13:53:11.296165 2205 log.go:172] (0xc000aca420) (0xc0008b86e0) Create stream\nI0529 13:53:11.296237 2205 log.go:172] (0xc000aca420) (0xc0008b86e0) Stream added, broadcasting: 1\nI0529 13:53:11.299073 2205 log.go:172] (0xc000aca420) Reply frame received for 1\nI0529 13:53:11.299122 2205 log.go:172] (0xc000aca420) (0xc0008b8780) Create stream\nI0529 13:53:11.299135 2205 log.go:172] (0xc000aca420) (0xc0008b8780) Stream added, broadcasting: 3\nI0529 13:53:11.300315 2205 log.go:172] (0xc000aca420) Reply frame received for 3\nI0529 13:53:11.300359 2205 log.go:172] (0xc000aca420) 
(0xc0008b8820) Create stream\nI0529 13:53:11.300372 2205 log.go:172] (0xc000aca420) (0xc0008b8820) Stream added, broadcasting: 5\nI0529 13:53:11.301471 2205 log.go:172] (0xc000aca420) Reply frame received for 5\nI0529 13:53:11.393403 2205 log.go:172] (0xc000aca420) Data frame received for 5\nI0529 13:53:11.393447 2205 log.go:172] (0xc0008b8820) (5) Data frame handling\nI0529 13:53:11.393462 2205 log.go:172] (0xc0008b8820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0529 13:53:11.393881 2205 log.go:172] (0xc000aca420) Data frame received for 5\nI0529 13:53:11.393893 2205 log.go:172] (0xc0008b8820) (5) Data frame handling\nI0529 13:53:11.393908 2205 log.go:172] (0xc000aca420) Data frame received for 3\nI0529 13:53:11.393913 2205 log.go:172] (0xc0008b8780) (3) Data frame handling\nI0529 13:53:11.393920 2205 log.go:172] (0xc0008b8780) (3) Data frame sent\nI0529 13:53:11.393926 2205 log.go:172] (0xc000aca420) Data frame received for 3\nI0529 13:53:11.393931 2205 log.go:172] (0xc0008b8780) (3) Data frame handling\nI0529 13:53:11.395093 2205 log.go:172] (0xc000aca420) Data frame received for 1\nI0529 13:53:11.395126 2205 log.go:172] (0xc0008b86e0) (1) Data frame handling\nI0529 13:53:11.395146 2205 log.go:172] (0xc0008b86e0) (1) Data frame sent\nI0529 13:53:11.395175 2205 log.go:172] (0xc000aca420) (0xc0008b86e0) Stream removed, broadcasting: 1\nI0529 13:53:11.395201 2205 log.go:172] (0xc000aca420) Go away received\nI0529 13:53:11.395620 2205 log.go:172] (0xc000aca420) (0xc0008b86e0) Stream removed, broadcasting: 1\nI0529 13:53:11.395644 2205 log.go:172] (0xc000aca420) (0xc0008b8780) Stream removed, broadcasting: 3\nI0529 13:53:11.395656 2205 log.go:172] (0xc000aca420) (0xc0008b8820) Stream removed, broadcasting: 5\n" May 29 13:53:11.401: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 29 13:53:11.401: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 29 13:53:11.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4384 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 29 13:53:11.688: INFO: stderr: "I0529 13:53:11.519079 2224 log.go:172] (0xc000117080) (0xc00033ea00) Create stream\nI0529 13:53:11.519138 2224 log.go:172] (0xc000117080) (0xc00033ea00) Stream added, broadcasting: 1\nI0529 13:53:11.521540 2224 log.go:172] (0xc000117080) Reply frame received for 1\nI0529 13:53:11.521615 2224 log.go:172] (0xc000117080) (0xc000782000) Create stream\nI0529 13:53:11.521648 2224 log.go:172] (0xc000117080) (0xc000782000) Stream added, broadcasting: 3\nI0529 13:53:11.522765 2224 log.go:172] (0xc000117080) Reply frame received for 3\nI0529 13:53:11.522838 2224 log.go:172] (0xc000117080) (0xc000900000) Create stream\nI0529 13:53:11.522861 2224 log.go:172] (0xc000117080) (0xc000900000) Stream added, broadcasting: 5\nI0529 13:53:11.523853 2224 log.go:172] (0xc000117080) Reply frame received for 5\nI0529 13:53:11.641015 2224 log.go:172] (0xc000117080) Data frame received for 5\nI0529 13:53:11.641042 2224 log.go:172] (0xc000900000) (5) Data frame handling\nI0529 13:53:11.641054 2224 log.go:172] (0xc000900000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0529 13:53:11.680244 2224 log.go:172] (0xc000117080) Data frame received for 3\nI0529 13:53:11.680294 2224 log.go:172] (0xc000782000) (3) Data frame handling\nI0529 13:53:11.680310 2224 log.go:172] (0xc000782000) (3) 
Data frame sent\nI0529 13:53:11.680391 2224 log.go:172] (0xc000117080) Data frame received for 5\nI0529 13:53:11.680429 2224 log.go:172] (0xc000900000) (5) Data frame handling\nI0529 13:53:11.681590 2224 log.go:172] (0xc000117080) Data frame received for 3\nI0529 13:53:11.681622 2224 log.go:172] (0xc000782000) (3) Data frame handling\nI0529 13:53:11.683114 2224 log.go:172] (0xc000117080) Data frame received for 1\nI0529 13:53:11.683135 2224 log.go:172] (0xc00033ea00) (1) Data frame handling\nI0529 13:53:11.683145 2224 log.go:172] (0xc00033ea00) (1) Data frame sent\nI0529 13:53:11.683156 2224 log.go:172] (0xc000117080) (0xc00033ea00) Stream removed, broadcasting: 1\nI0529 13:53:11.683198 2224 log.go:172] (0xc000117080) Go away received\nI0529 13:53:11.683533 2224 log.go:172] (0xc000117080) (0xc00033ea00) Stream removed, broadcasting: 1\nI0529 13:53:11.683550 2224 log.go:172] (0xc000117080) (0xc000782000) Stream removed, broadcasting: 3\nI0529 13:53:11.683557 2224 log.go:172] (0xc000117080) (0xc000900000) Stream removed, broadcasting: 5\n" May 29 13:53:11.688: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 29 13:53:11.688: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 29 13:53:11.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4384 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 29 13:53:11.940: INFO: stderr: "I0529 13:53:11.820019 2243 log.go:172] (0xc000a2e420) (0xc0003a2820) Create stream\nI0529 13:53:11.820072 2243 log.go:172] (0xc000a2e420) (0xc0003a2820) Stream added, broadcasting: 1\nI0529 13:53:11.833044 2243 log.go:172] (0xc000a2e420) Reply frame received for 1\nI0529 13:53:11.835061 2243 log.go:172] (0xc000a2e420) (0xc0009d2000) Create stream\nI0529 13:53:11.835086 2243 log.go:172] (0xc000a2e420) (0xc0009d2000) Stream added, broadcasting: 3\nI0529 13:53:11.838776 2243 log.go:172] (0xc000a2e420) Reply frame received for 3\nI0529 13:53:11.839452 2243 log.go:172] (0xc000a2e420) (0xc00010a280) Create stream\nI0529 13:53:11.839465 2243 log.go:172] (0xc000a2e420) (0xc00010a280) Stream added, broadcasting: 5\nI0529 13:53:11.840281 2243 log.go:172] (0xc000a2e420) Reply frame received for 5\nI0529 13:53:11.898572 2243 log.go:172] (0xc000a2e420) Data frame received for 5\nI0529 13:53:11.898604 2243 log.go:172] (0xc00010a280) (5) Data frame handling\nI0529 13:53:11.898625 2243 log.go:172] (0xc00010a280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0529 13:53:11.932329 2243 log.go:172] (0xc000a2e420) Data frame received for 3\nI0529 13:53:11.932385 2243 log.go:172] (0xc0009d2000) (3) Data frame handling\nI0529 13:53:11.932405 2243 log.go:172] (0xc0009d2000) (3) Data frame sent\nI0529 13:53:11.932441 2243 log.go:172] (0xc000a2e420) Data frame received for 5\nI0529 13:53:11.932457 2243 log.go:172] (0xc00010a280) (5) Data frame handling\nI0529 13:53:11.932569 2243 log.go:172] (0xc000a2e420) Data frame received for 3\nI0529 13:53:11.932593 2243 log.go:172] (0xc0009d2000) (3) Data frame handling\nI0529 13:53:11.934471 2243 log.go:172] (0xc000a2e420) Data frame received for 1\nI0529 13:53:11.934493 2243 log.go:172] (0xc0003a2820) (1) Data frame handling\nI0529 13:53:11.934518 2243 log.go:172] (0xc0003a2820) (1) Data frame sent\nI0529 13:53:11.934710 2243 log.go:172] (0xc000a2e420) (0xc0003a2820) Stream removed, broadcasting: 1\nI0529 13:53:11.934752 2243 log.go:172] 
(0xc000a2e420) Go away received\nI0529 13:53:11.935207 2243 log.go:172] (0xc000a2e420) (0xc0003a2820) Stream removed, broadcasting: 1\nI0529 13:53:11.935232 2243 log.go:172] (0xc000a2e420) (0xc0009d2000) Stream removed, broadcasting: 3\nI0529 13:53:11.935248 2243 log.go:172] (0xc000a2e420) (0xc00010a280) Stream removed, broadcasting: 5\n" May 29 13:53:11.940: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 29 13:53:11.940: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 29 13:53:11.940: INFO: Waiting for statefulset status.replicas updated to 0 May 29 13:53:11.943: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 29 13:53:21.952: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 29 13:53:21.952: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 29 13:53:21.952: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 29 13:53:21.967: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999945s May 29 13:53:22.971: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992836194s May 29 13:53:23.978: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988120409s May 29 13:53:24.984: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981706805s May 29 13:53:25.989: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976012014s May 29 13:53:26.994: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.970402307s May 29 13:53:27.999: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.965477251s May 29 13:53:29.005: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.960329361s May 29 13:53:30.008: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.955016048s May 29 13:53:31.014: INFO: Verifying statefulset ss doesn't scale past 3 for another 950.888556ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4384 May 29 13:53:32.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4384 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 13:53:32.224: INFO: stderr: "I0529 13:53:32.136198 2263 log.go:172] (0xc000116dc0) (0xc00039a6e0) Create stream\nI0529 13:53:32.136252 2263 log.go:172] (0xc000116dc0) (0xc00039a6e0) Stream added, broadcasting: 1\nI0529 13:53:32.139563 2263 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0529 13:53:32.139638 2263 log.go:172] (0xc000116dc0) (0xc0006e6280) Create stream\nI0529 13:53:32.139669 2263 log.go:172] (0xc000116dc0) (0xc0006e6280) Stream added, broadcasting: 3\nI0529 13:53:32.141086 2263 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0529 13:53:32.141277 2263 log.go:172] (0xc000116dc0) (0xc0002b8000) Create stream\nI0529 13:53:32.141306 2263 log.go:172] (0xc000116dc0) (0xc0002b8000) Stream added, broadcasting: 5\nI0529 13:53:32.142560 2263 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0529 13:53:32.216033 2263 log.go:172] (0xc000116dc0) Data frame received for 3\nI0529 13:53:32.216078 2263 log.go:172] (0xc0006e6280) (3) Data frame handling\nI0529 13:53:32.216090 2263 log.go:172] (0xc0006e6280) (3) Data frame sent\nI0529 13:53:32.216100 2263 log.go:172] (0xc000116dc0) Data frame received for 
3\nI0529 13:53:32.216107 2263 log.go:172] (0xc0006e6280) (3) Data frame handling\nI0529 13:53:32.216120 2263 log.go:172] (0xc000116dc0) Data frame received for 5\nI0529 13:53:32.216140 2263 log.go:172] (0xc0002b8000) (5) Data frame handling\nI0529 13:53:32.216154 2263 log.go:172] (0xc0002b8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0529 13:53:32.216165 2263 log.go:172] (0xc000116dc0) Data frame received for 5\nI0529 13:53:32.216198 2263 log.go:172] (0xc0002b8000) (5) Data frame handling\nI0529 13:53:32.217869 2263 log.go:172] (0xc000116dc0) Data frame received for 1\nI0529 13:53:32.217907 2263 log.go:172] (0xc00039a6e0) (1) Data frame handling\nI0529 13:53:32.217926 2263 log.go:172] (0xc00039a6e0) (1) Data frame sent\nI0529 13:53:32.217963 2263 log.go:172] (0xc000116dc0) (0xc00039a6e0) Stream removed, broadcasting: 1\nI0529 13:53:32.217997 2263 log.go:172] (0xc000116dc0) Go away received\nI0529 13:53:32.218343 2263 log.go:172] (0xc000116dc0) (0xc00039a6e0) Stream removed, broadcasting: 1\nI0529 13:53:32.218365 2263 log.go:172] (0xc000116dc0) (0xc0006e6280) Stream removed, broadcasting: 3\nI0529 13:53:32.218375 2263 log.go:172] (0xc000116dc0) (0xc0002b8000) Stream removed, broadcasting: 5\n" May 29 13:53:32.224: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 29 13:53:32.224: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 29 13:53:32.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4384 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 13:53:32.435: INFO: stderr: "I0529 13:53:32.358694 2283 log.go:172] (0xc000128dc0) (0xc000abe8c0) Create stream\nI0529 13:53:32.358783 2283 log.go:172] (0xc000128dc0) (0xc000abe8c0) Stream added, broadcasting: 1\nI0529 13:53:32.362322 2283 log.go:172] (0xc000128dc0) Reply frame received for 1\nI0529 13:53:32.362359 2283 log.go:172] (0xc000128dc0) (0xc000702280) Create stream\nI0529 13:53:32.362369 2283 log.go:172] (0xc000128dc0) (0xc000702280) Stream added, broadcasting: 3\nI0529 13:53:32.363361 2283 log.go:172] (0xc000128dc0) Reply frame received for 3\nI0529 13:53:32.363408 2283 log.go:172] (0xc000128dc0) (0xc000abe000) Create stream\nI0529 13:53:32.363426 2283 log.go:172] (0xc000128dc0) (0xc000abe000) Stream added, broadcasting: 5\nI0529 13:53:32.364312 2283 log.go:172] (0xc000128dc0) Reply frame received for 5\nI0529 13:53:32.430076 2283 log.go:172] (0xc000128dc0) Data frame received for 5\nI0529 13:53:32.430111 2283 log.go:172] (0xc000abe000) (5) Data frame handling\nI0529 13:53:32.430123 2283 log.go:172] (0xc000abe000) (5) Data frame sent\nI0529 13:53:32.430131 2283 log.go:172] (0xc000128dc0) Data frame received for 5\nI0529 13:53:32.430138 2283 log.go:172] (0xc000abe000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0529 13:53:32.430170 2283 log.go:172] (0xc000128dc0) Data frame received for 3\nI0529 13:53:32.430186 2283 log.go:172] (0xc000702280) (3) Data frame handling\nI0529 13:53:32.430204 2283 log.go:172] (0xc000702280) (3) Data frame sent\nI0529 13:53:32.430217 2283 log.go:172] (0xc000128dc0) Data frame received for 3\nI0529 13:53:32.430224 2283 log.go:172] (0xc000702280) (3) Data frame handling\nI0529 13:53:32.431025 2283 log.go:172] (0xc000128dc0) Data frame received for 1\nI0529 13:53:32.431055 2283 log.go:172] (0xc000abe8c0) (1) Data frame handling\nI0529 13:53:32.431068 2283 
log.go:172] (0xc000abe8c0) (1) Data frame sent\nI0529 13:53:32.431080 2283 log.go:172] (0xc000128dc0) (0xc000abe8c0) Stream removed, broadcasting: 1\nI0529 13:53:32.431145 2283 log.go:172] (0xc000128dc0) Go away received\nI0529 13:53:32.431385 2283 log.go:172] (0xc000128dc0) (0xc000abe8c0) Stream removed, broadcasting: 1\nI0529 13:53:32.431407 2283 log.go:172] (0xc000128dc0) (0xc000702280) Stream removed, broadcasting: 3\nI0529 13:53:32.431417 2283 log.go:172] (0xc000128dc0) (0xc000abe000) Stream removed, broadcasting: 5\n" May 29 13:53:32.435: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 29 13:53:32.435: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 29 13:53:32.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4384 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 13:53:32.638: INFO: stderr: "I0529 13:53:32.563529 2304 log.go:172] (0xc00098a420) (0xc0003e8820) Create stream\nI0529 13:53:32.563601 2304 log.go:172] (0xc00098a420) (0xc0003e8820) Stream added, broadcasting: 1\nI0529 13:53:32.566650 2304 log.go:172] (0xc00098a420) Reply frame received for 1\nI0529 13:53:32.566752 2304 log.go:172] (0xc00098a420) (0xc000866000) Create stream\nI0529 13:53:32.566813 2304 log.go:172] (0xc00098a420) (0xc000866000) Stream added, broadcasting: 3\nI0529 13:53:32.568382 2304 log.go:172] (0xc00098a420) Reply frame received for 3\nI0529 13:53:32.568567 2304 log.go:172] (0xc00098a420) (0xc0006941e0) Create stream\nI0529 13:53:32.568594 2304 log.go:172] (0xc00098a420) (0xc0006941e0) Stream added, broadcasting: 5\nI0529 13:53:32.569633 2304 log.go:172] (0xc00098a420) Reply frame received for 5\nI0529 13:53:32.631057 2304 log.go:172] (0xc00098a420) Data frame received for 5\nI0529 13:53:32.631088 2304 log.go:172] (0xc0006941e0) (5) Data frame handling\nI0529 13:53:32.631102 2304 log.go:172] (0xc0006941e0) (5) Data frame sent\nI0529 13:53:32.631108 2304 log.go:172] (0xc00098a420) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0529 13:53:32.631117 2304 log.go:172] (0xc0006941e0) (5) Data frame handling\nI0529 13:53:32.631151 2304 log.go:172] (0xc00098a420) Data frame received for 3\nI0529 13:53:32.631157 2304 log.go:172] (0xc000866000) (3) Data frame handling\nI0529 13:53:32.631164 2304 log.go:172] (0xc000866000) (3) Data frame sent\nI0529 13:53:32.631169 2304 log.go:172] (0xc00098a420) Data frame received for 3\nI0529 13:53:32.631174 2304 log.go:172] (0xc000866000) (3) Data frame handling\nI0529 13:53:32.632716 2304 log.go:172] (0xc00098a420) Data frame received for 1\nI0529 13:53:32.632811 2304 log.go:172] (0xc0003e8820) (1) Data frame handling\nI0529 13:53:32.632902 2304 log.go:172] (0xc0003e8820) (1) Data frame sent\nI0529 13:53:32.632938 2304 log.go:172] (0xc00098a420) (0xc0003e8820) Stream removed, broadcasting: 1\nI0529 13:53:32.632987 2304 log.go:172] (0xc00098a420) Go away received\nI0529 13:53:32.633479 2304 log.go:172] (0xc00098a420) (0xc0003e8820) Stream removed, broadcasting: 1\nI0529 13:53:32.633499 2304 log.go:172] (0xc00098a420) (0xc000866000) Stream removed, broadcasting: 3\nI0529 13:53:32.633508 2304 log.go:172] (0xc00098a420) (0xc0006941e0) Stream removed, broadcasting: 5\n" May 29 13:53:32.639: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 29 13:53:32.639: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true 
on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 29 13:53:32.639: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 29 13:53:52.656: INFO: Deleting all statefulset in ns statefulset-4384 May 29 13:53:52.660: INFO: Scaling statefulset ss to 0 May 29 13:53:52.692: INFO: Waiting for statefulset status.replicas updated to 0 May 29 13:53:52.695: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:53:52.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4384" for this suite. May 29 13:53:58.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:53:58.832: INFO: namespace statefulset-4384 deletion completed in 6.121350662s • [SLOW TEST:88.432 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:53:58.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 29 13:54:03.497: INFO: Successfully updated pod "labelsupdate22a661d5-215e-4828-9f69-7149404e30a6" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:54:05.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5212" for this suite. 
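The pod driving the downward API check (generated by the framework; this sketch is illustrative) mounts a downwardAPI volume that projects metadata.labels into a file. Because the kubelet rewrites that file after a label mutation, updating the pod's labels and re-reading the file is the whole test:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: labelsupdate                           # illustrative name
      labels:
        key1: value1
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels           # kubelet refreshes this file when labels change
    EOF
    kubectl label pod labelsupdate key2=value2     # /etc/podinfo/labels soon shows key2="value2"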
May 29 13:54:27.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:54:27.631: INFO: namespace downward-api-5212 deletion completed in 22.082089043s • [SLOW TEST:28.799 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:54:27.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 29 13:54:27.707: INFO: namespace kubectl-3796 May 29 13:54:27.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3796' May 29 13:54:28.002: INFO: stderr: "" May 29 13:54:28.002: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 29 13:54:29.006: INFO: Selector matched 1 pods for map[app:redis] May 29 13:54:29.006: INFO: Found 0 / 1 May 29 13:54:30.007: INFO: Selector matched 1 pods for map[app:redis] May 29 13:54:30.007: INFO: Found 0 / 1 May 29 13:54:31.006: INFO: Selector matched 1 pods for map[app:redis] May 29 13:54:31.007: INFO: Found 0 / 1 May 29 13:54:32.006: INFO: Selector matched 1 pods for map[app:redis] May 29 13:54:32.007: INFO: Found 1 / 1 May 29 13:54:32.007: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 29 13:54:32.010: INFO: Selector matched 1 pods for map[app:redis] May 29 13:54:32.010: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 29 13:54:32.010: INFO: wait on redis-master startup in kubectl-3796 May 29 13:54:32.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jlll5 redis-master --namespace=kubectl-3796' May 29 13:54:32.121: INFO: stderr: "" May 29 13:54:32.121: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 29 May 13:54:31.066 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 May 13:54:31.066 # Server started, Redis version 3.2.12\n1:M 29 May 13:54:31.066 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 May 13:54:31.066 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 29 13:54:32.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3796' May 29 13:54:32.257: INFO: stderr: "" May 29 13:54:32.257: INFO: stdout: "service/rm2 exposed\n" May 29 13:54:32.267: INFO: Service rm2 in namespace kubectl-3796 found. STEP: exposing service May 29 13:54:34.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3796' May 29 13:54:34.479: INFO: stderr: "" May 29 13:54:34.479: INFO: stdout: "service/rm3 exposed\n" May 29 13:54:34.483: INFO: Service rm3 in namespace kubectl-3796 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:54:36.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3796" for this suite. 
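The two expose calls are the interesting part: the first derives the new Service's selector from the replication controller's pod template, and the second copies it from an existing Service, so rm2 and rm3 both front the same Redis pod on different ports:

    kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3796
    kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3796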
May 29 13:54:58.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:54:58.582: INFO: namespace kubectl-3796 deletion completed in 22.087680247s • [SLOW TEST:30.951 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:54:58.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 13:54:58.662: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 29 13:54:58.681: INFO: Number of nodes with available pods: 0 May 29 13:54:58.681: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 29 13:54:58.740: INFO: Number of nodes with available pods: 0 May 29 13:54:58.741: INFO: Node iruya-worker is running more than one daemon pod May 29 13:54:59.745: INFO: Number of nodes with available pods: 0 May 29 13:54:59.745: INFO: Node iruya-worker is running more than one daemon pod May 29 13:55:00.790: INFO: Number of nodes with available pods: 0 May 29 13:55:00.790: INFO: Node iruya-worker is running more than one daemon pod May 29 13:55:01.745: INFO: Number of nodes with available pods: 1 May 29 13:55:01.745: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 29 13:55:01.779: INFO: Number of nodes with available pods: 1 May 29 13:55:01.779: INFO: Number of running nodes: 0, number of available pods: 1 May 29 13:55:02.783: INFO: Number of nodes with available pods: 0 May 29 13:55:02.783: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 29 13:55:02.809: INFO: Number of nodes with available pods: 0 May 29 13:55:02.809: INFO: Node iruya-worker is running more than one daemon pod May 29 13:55:03.813: INFO: Number of nodes with available pods: 0 May 29 13:55:03.813: INFO: Node iruya-worker is running more than one daemon pod May 29 13:55:04.813: INFO: Number of nodes with available pods: 0 May 29 13:55:04.813: INFO: Node iruya-worker is running more than one daemon pod May 29 13:55:05.813: INFO: Number of nodes with available pods: 0 May 29 13:55:05.813: INFO: Node iruya-worker is running more than one daemon pod May 29 13:55:06.813: INFO: Number of nodes with available pods: 0 May 29 13:55:06.813: INFO: Node iruya-worker is running more than one daemon pod May 29 13:55:07.813: INFO: Number of nodes with available pods: 0 May 29 13:55:07.813: INFO: Node iruya-worker is running more than one daemon pod May 29 13:55:08.813: INFO: Number of nodes with available pods: 0 May 29 13:55:08.813: INFO: Node iruya-worker is running more than one daemon pod May 29 13:55:09.813: INFO: Number of nodes with available pods: 0 May 29 13:55:09.813: INFO: Node iruya-worker is running more than one daemon pod May 29 13:55:10.812: INFO: Number of nodes with available pods: 0 May 29 13:55:10.812: INFO: Node iruya-worker is running more than one daemon pod May 29 13:55:11.812: INFO: Number of nodes with available pods: 0 May 29 13:55:11.812: INFO: Node iruya-worker is running more than one daemon pod May 29 13:55:12.813: INFO: Number of nodes with available pods: 0 May 29 13:55:12.813: INFO: Node iruya-worker is running more than one daemon pod May 29 13:55:13.813: INFO: Number of nodes with available pods: 0 May 29 13:55:13.813: INFO: Node iruya-worker is running more than one daemon pod May 29 13:55:14.812: INFO: Number of nodes with available pods: 0 May 29 13:55:14.812: INFO: Node iruya-worker is running more than one daemon pod May 29 13:55:15.814: INFO: Number of nodes with available pods: 1 May 29 13:55:15.814: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-747, will wait for the garbage collector to delete the pods May 29 13:55:15.880: INFO: Deleting DaemonSet.extensions daemon-set took: 6.353099ms May 29 13:55:16.180: INFO: Terminating 
DaemonSet.extensions daemon-set pods took: 300.292091ms May 29 13:55:22.186: INFO: Number of nodes with available pods: 0 May 29 13:55:22.186: INFO: Number of running nodes: 0, number of available pods: 0 May 29 13:55:22.188: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-747/daemonsets","resourceVersion":"13554218"},"items":null} May 29 13:55:22.191: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-747/pods","resourceVersion":"13554218"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:55:22.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-747" for this suite. May 29 13:55:28.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:55:28.350: INFO: namespace daemonsets-747 deletion completed in 6.091104413s • [SLOW TEST:29.768 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:55:28.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions May 29 13:55:28.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 29 13:55:28.554: INFO: stderr: "" May 29 13:55:28.554: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:55:28.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7290" for this suite. 
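Outside the suite, the same assertion is a one-liner; grep -x requires the bare version string v1 to appear as a whole line of the output:

    kubectl api-versions | grep -x v1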
May 29 13:55:34.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:55:34.764: INFO: namespace kubectl-7290 deletion completed in 6.206281165s • [SLOW TEST:6.414 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:55:34.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 29 13:55:38.875: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:55:39.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2227" for this suite. 
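The container under test writes nothing to its terminationMessagePath and exits non-zero, so FallbackToLogsOnError makes the kubelet promote the tail of the container log (DONE) to the termination message. A hand-rolled equivalent (pod name, image and command are illustrative):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-message-from-logs
    spec:
      restartPolicy: Never
      containers:
      - name: term
        image: busybox
        command: ["sh", "-c", "echo -n DONE; exit 1"]      # fails without touching /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError    # so the log tail becomes the message
    EOF
    kubectl get pod termination-message-from-logs \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # prints DONE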
May 29 13:55:45.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:55:45.180: INFO: namespace container-runtime-2227 deletion completed in 6.098153606s • [SLOW TEST:10.415 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:55:45.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc May 29 13:55:45.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3049' May 29 13:55:45.542: INFO: stderr: "" May 29 13:55:45.542: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. May 29 13:55:46.550: INFO: Selector matched 1 pods for map[app:redis] May 29 13:55:46.550: INFO: Found 0 / 1 May 29 13:55:47.668: INFO: Selector matched 1 pods for map[app:redis] May 29 13:55:47.668: INFO: Found 0 / 1 May 29 13:55:48.547: INFO: Selector matched 1 pods for map[app:redis] May 29 13:55:48.547: INFO: Found 0 / 1 May 29 13:55:49.547: INFO: Selector matched 1 pods for map[app:redis] May 29 13:55:49.547: INFO: Found 1 / 1 May 29 13:55:49.547: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 29 13:55:49.550: INFO: Selector matched 1 pods for map[app:redis] May 29 13:55:49.550: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 29 13:55:49.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5q4vk redis-master --namespace=kubectl-3049' May 29 13:55:49.658: INFO: stderr: "" May 29 13:55:49.658: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 29 May 13:55:48.316 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 May 13:55:48.316 # Server started, Redis version 3.2.12\n1:M 29 May 13:55:48.316 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 May 13:55:48.316 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 29 13:55:49.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5q4vk redis-master --namespace=kubectl-3049 --tail=1' May 29 13:55:49.763: INFO: stderr: "" May 29 13:55:49.763: INFO: stdout: "1:M 29 May 13:55:48.316 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 29 13:55:49.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5q4vk redis-master --namespace=kubectl-3049 --limit-bytes=1' May 29 13:55:49.882: INFO: stderr: "" May 29 13:55:49.882: INFO: stdout: " " STEP: exposing timestamps May 29 13:55:49.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5q4vk redis-master --namespace=kubectl-3049 --tail=1 --timestamps' May 29 13:55:49.986: INFO: stderr: "" May 29 13:55:49.986: INFO: stdout: "2020-05-29T13:55:48.317097819Z 1:M 29 May 13:55:48.316 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 29 13:55:52.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5q4vk redis-master --namespace=kubectl-3049 --since=1s' May 29 13:55:52.603: INFO: stderr: "" May 29 13:55:52.603: INFO: stdout: "" May 29 13:55:52.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5q4vk redis-master --namespace=kubectl-3049 --since=24h' May 29 13:55:52.708: INFO: stderr: "" May 29 13:55:52.708: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 29 May 13:55:48.316 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 May 13:55:48.316 # Server started, Redis version 3.2.12\n1:M 29 May 13:55:48.316 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 May 13:55:48.316 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources May 29 13:55:52.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3049' May 29 13:55:52.808: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 29 13:55:52.808: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 29 13:55:52.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-3049' May 29 13:55:52.926: INFO: stderr: "No resources found.\n" May 29 13:55:52.926: INFO: stdout: "" May 29 13:55:52.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-3049 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 29 13:55:53.068: INFO: stderr: "" May 29 13:55:53.068: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:55:53.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3049" for this suite. 
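For reference, the filtering options this spec exercises are ordinary kubectl flags and can be replayed against any pod; a minimal sketch, reusing the pod and namespace names from the log above (the --kubeconfig flag is dropped for brevity):

# tail only the most recent log line
kubectl logs redis-master-5q4vk redis-master --namespace=kubectl-3049 --tail=1
# cap the response at a byte budget (here, a single byte)
kubectl logs redis-master-5q4vk redis-master --namespace=kubectl-3049 --limit-bytes=1
# prefix each line with its RFC3339 timestamp
kubectl logs redis-master-5q4vk redis-master --namespace=kubectl-3049 --tail=1 --timestamps
# restrict output to a relative time window
kubectl logs redis-master-5q4vk redis-master --namespace=kubectl-3049 --since=1s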
May 29 13:56:15.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:56:15.168: INFO: namespace kubectl-3049 deletion completed in 22.09715749s • [SLOW TEST:29.988 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:56:15.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 29 13:56:15.281: INFO: Waiting up to 5m0s for pod "pod-d821b40e-08bc-45ce-954a-8816e99d6874" in namespace "emptydir-4892" to be "success or failure" May 29 13:56:15.290: INFO: Pod "pod-d821b40e-08bc-45ce-954a-8816e99d6874": Phase="Pending", Reason="", readiness=false. Elapsed: 8.739007ms May 29 13:56:17.293: INFO: Pod "pod-d821b40e-08bc-45ce-954a-8816e99d6874": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012536101s May 29 13:56:19.297: INFO: Pod "pod-d821b40e-08bc-45ce-954a-8816e99d6874": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016328298s STEP: Saw pod success May 29 13:56:19.297: INFO: Pod "pod-d821b40e-08bc-45ce-954a-8816e99d6874" satisfied condition "success or failure" May 29 13:56:19.301: INFO: Trying to get logs from node iruya-worker2 pod pod-d821b40e-08bc-45ce-954a-8816e99d6874 container test-container: STEP: delete the pod May 29 13:56:19.338: INFO: Waiting for pod pod-d821b40e-08bc-45ce-954a-8816e99d6874 to disappear May 29 13:56:19.347: INFO: Pod pod-d821b40e-08bc-45ce-954a-8816e99d6874 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:56:19.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4892" for this suite. 
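The (root,0777,tmpfs) case above boils down to mounting a memory-backed emptyDir and checking its mode; a minimal sketch of an equivalent pod, with hypothetical names (pod-emptydir-demo, scratch) — medium: Memory is what makes the volume tmpfs rather than node disk:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # print the mount's mode (the conformance case expects 777) and confirm it is tmpfs
    command: ["sh", "-c", "stat -c '%a' /mnt/volume && grep /mnt/volume /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
EOF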
May 29 13:56:25.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:56:25.441: INFO: namespace emptydir-4892 deletion completed in 6.090349449s • [SLOW TEST:10.272 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:56:25.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-4b52ec64-f012-48bb-b96a-1a07d9c42f5b STEP: Creating a pod to test consume secrets May 29 13:56:25.505: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d334f970-9f5c-4f7b-9d14-d11c3c601687" in namespace "projected-8750" to be "success or failure" May 29 13:56:25.509: INFO: Pod "pod-projected-secrets-d334f970-9f5c-4f7b-9d14-d11c3c601687": Phase="Pending", Reason="", readiness=false. Elapsed: 3.938489ms May 29 13:56:27.512: INFO: Pod "pod-projected-secrets-d334f970-9f5c-4f7b-9d14-d11c3c601687": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007050327s May 29 13:56:29.516: INFO: Pod "pod-projected-secrets-d334f970-9f5c-4f7b-9d14-d11c3c601687": Phase="Running", Reason="", readiness=true. Elapsed: 4.011507849s May 29 13:56:31.520: INFO: Pod "pod-projected-secrets-d334f970-9f5c-4f7b-9d14-d11c3c601687": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015370805s STEP: Saw pod success May 29 13:56:31.520: INFO: Pod "pod-projected-secrets-d334f970-9f5c-4f7b-9d14-d11c3c601687" satisfied condition "success or failure" May 29 13:56:31.523: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-d334f970-9f5c-4f7b-9d14-d11c3c601687 container projected-secret-volume-test: STEP: delete the pod May 29 13:56:31.607: INFO: Waiting for pod pod-projected-secrets-d334f970-9f5c-4f7b-9d14-d11c3c601687 to disappear May 29 13:56:31.609: INFO: Pod pod-projected-secrets-d334f970-9f5c-4f7b-9d14-d11c3c601687 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:56:31.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8750" for this suite. 
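The projected-secret case above can be approximated with a short manifest; a minimal sketch under assumed names (demo-secret, pod-projected-demo) — defaultMode fixes the file mode on the projected files, while runAsUser and fsGroup exercise the non-root ownership path named in the spec title:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000   # non-root
    fsGroup: 1001     # group ownership applied to the volume
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    # list numeric modes/owners, then read the projected key
    command: ["sh", "-c", "ls -ln /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: demo-secret
EOF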
May 29 13:56:37.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:56:37.730: INFO: namespace projected-8750 deletion completed in 6.116753186s • [SLOW TEST:12.288 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:56:37.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 29 13:56:45.085: INFO: 0 pods remaining May 29 13:56:45.085: INFO: 0 pods has nil DeletionTimestamp May 29 13:56:45.085: INFO: STEP: Gathering metrics W0529 13:56:45.878989 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 29 13:56:45.879: INFO:
For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:56:45.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4452" for this suite.
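The deleteOptions behavior this spec checks can be reproduced directly against the API; a minimal sketch using kubectl proxy and curl, with a hypothetical ReplicationController name (demo-rc) in the default namespace — with propagationPolicy: Foreground the RC stays visible, carrying a deletionTimestamp, until the garbage collector has removed its pods:

kubectl proxy --port=8080 &
curl -X DELETE 'http://localhost:8080/api/v1/namespaces/default/replicationcontrollers/demo-rc' \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
# while the pods terminate, the RC should still list, with a deletionTimestamp set
kubectl get rc demo-rc -o jsonpath='{.metadata.deletionTimestamp}'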
May 29 13:56:52.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:56:52.292: INFO: namespace gc-4452 deletion completed in 6.397571848s • [SLOW TEST:14.561 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:56:52.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-hlf9 STEP: Creating a pod to test atomic-volume-subpath May 29 13:56:52.362: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-hlf9" in namespace "subpath-6262" to be "success or failure" May 29 13:56:52.421: INFO: Pod "pod-subpath-test-projected-hlf9": Phase="Pending", Reason="", readiness=false. Elapsed: 58.890045ms May 29 13:56:54.425: INFO: Pod "pod-subpath-test-projected-hlf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062559513s May 29 13:56:56.429: INFO: Pod "pod-subpath-test-projected-hlf9": Phase="Running", Reason="", readiness=true. Elapsed: 4.066885108s May 29 13:56:58.433: INFO: Pod "pod-subpath-test-projected-hlf9": Phase="Running", Reason="", readiness=true. Elapsed: 6.071058754s May 29 13:57:00.436: INFO: Pod "pod-subpath-test-projected-hlf9": Phase="Running", Reason="", readiness=true. Elapsed: 8.07439661s May 29 13:57:02.440: INFO: Pod "pod-subpath-test-projected-hlf9": Phase="Running", Reason="", readiness=true. Elapsed: 10.07796072s May 29 13:57:04.445: INFO: Pod "pod-subpath-test-projected-hlf9": Phase="Running", Reason="", readiness=true. Elapsed: 12.08282716s May 29 13:57:06.449: INFO: Pod "pod-subpath-test-projected-hlf9": Phase="Running", Reason="", readiness=true. Elapsed: 14.087231431s May 29 13:57:08.454: INFO: Pod "pod-subpath-test-projected-hlf9": Phase="Running", Reason="", readiness=true. Elapsed: 16.091529053s May 29 13:57:10.458: INFO: Pod "pod-subpath-test-projected-hlf9": Phase="Running", Reason="", readiness=true. Elapsed: 18.095989514s May 29 13:57:12.463: INFO: Pod "pod-subpath-test-projected-hlf9": Phase="Running", Reason="", readiness=true. Elapsed: 20.100750324s May 29 13:57:14.467: INFO: Pod "pod-subpath-test-projected-hlf9": Phase="Running", Reason="", readiness=true. Elapsed: 22.105196068s May 29 13:57:16.471: INFO: Pod "pod-subpath-test-projected-hlf9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.108605176s STEP: Saw pod success May 29 13:57:16.471: INFO: Pod "pod-subpath-test-projected-hlf9" satisfied condition "success or failure" May 29 13:57:16.473: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-hlf9 container test-container-subpath-projected-hlf9: STEP: delete the pod May 29 13:57:16.496: INFO: Waiting for pod pod-subpath-test-projected-hlf9 to disappear May 29 13:57:16.501: INFO: Pod pod-subpath-test-projected-hlf9 no longer exists STEP: Deleting pod pod-subpath-test-projected-hlf9 May 29 13:57:16.501: INFO: Deleting pod "pod-subpath-test-projected-hlf9" in namespace "subpath-6262" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:57:16.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6262" for this suite. May 29 13:57:22.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:57:22.614: INFO: namespace subpath-6262 deletion completed in 6.107068901s • [SLOW TEST:30.323 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:57:22.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin May 29 13:57:22.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9477 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 29 13:57:25.930: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0529 13:57:25.851111 2636 log.go:172] (0xc000126e70) (0xc000a0a500) Create stream\nI0529 13:57:25.851156 2636 log.go:172] (0xc000126e70) (0xc000a0a500) Stream added, broadcasting: 1\nI0529 13:57:25.853503 2636 log.go:172] (0xc000126e70) Reply frame received for 1\nI0529 13:57:25.853537 2636 log.go:172] (0xc000126e70) (0xc00054e000) Create stream\nI0529 13:57:25.853547 2636 log.go:172] (0xc000126e70) (0xc00054e000) Stream added, broadcasting: 3\nI0529 13:57:25.854449 2636 log.go:172] (0xc000126e70) Reply frame received for 3\nI0529 13:57:25.854473 2636 log.go:172] (0xc000126e70) (0xc000556000) Create stream\nI0529 13:57:25.854480 2636 log.go:172] (0xc000126e70) (0xc000556000) Stream added, broadcasting: 5\nI0529 13:57:25.855083 2636 log.go:172] (0xc000126e70) Reply frame received for 5\nI0529 13:57:25.855109 2636 log.go:172] (0xc000126e70) (0xc000a0a5a0) Create stream\nI0529 13:57:25.855124 2636 log.go:172] (0xc000126e70) (0xc000a0a5a0) Stream added, broadcasting: 7\nI0529 13:57:25.855856 2636 log.go:172] (0xc000126e70) Reply frame received for 7\nI0529 13:57:25.855999 2636 log.go:172] (0xc00054e000) (3) Writing data frame\nI0529 13:57:25.856173 2636 log.go:172] (0xc00054e000) (3) Writing data frame\nI0529 13:57:25.856799 2636 log.go:172] (0xc000126e70) Data frame received for 5\nI0529 13:57:25.856821 2636 log.go:172] (0xc000556000) (5) Data frame handling\nI0529 13:57:25.856835 2636 log.go:172] (0xc000556000) (5) Data frame sent\nI0529 13:57:25.857690 2636 log.go:172] (0xc000126e70) Data frame received for 5\nI0529 13:57:25.857701 2636 log.go:172] (0xc000556000) (5) Data frame handling\nI0529 13:57:25.857706 2636 log.go:172] (0xc000556000) (5) Data frame sent\nI0529 13:57:25.908085 2636 log.go:172] (0xc000126e70) Data frame received for 7\nI0529 13:57:25.908149 2636 log.go:172] (0xc000a0a5a0) (7) Data frame handling\nI0529 13:57:25.908185 2636 log.go:172] (0xc000126e70) Data frame received for 5\nI0529 13:57:25.908211 2636 log.go:172] (0xc000556000) (5) Data frame handling\nI0529 13:57:25.908552 2636 log.go:172] (0xc000126e70) Data frame received for 1\nI0529 13:57:25.908592 2636 log.go:172] (0xc000a0a500) (1) Data frame handling\nI0529 13:57:25.908619 2636 log.go:172] (0xc000a0a500) (1) Data frame sent\nI0529 13:57:25.908644 2636 log.go:172] (0xc000126e70) (0xc00054e000) Stream removed, broadcasting: 3\nI0529 13:57:25.908791 2636 log.go:172] (0xc000126e70) (0xc000a0a500) Stream removed, broadcasting: 1\nI0529 13:57:25.908828 2636 log.go:172] (0xc000126e70) Go away received\nI0529 13:57:25.908995 2636 log.go:172] (0xc000126e70) (0xc000a0a500) Stream removed, broadcasting: 1\nI0529 13:57:25.909058 2636 log.go:172] (0xc000126e70) (0xc00054e000) Stream removed, broadcasting: 3\nI0529 13:57:25.909081 2636 log.go:172] (0xc000126e70) (0xc000556000) Stream removed, broadcasting: 5\nI0529 13:57:25.909100 2636 log.go:172] (0xc000126e70) (0xc000a0a5a0) Stream removed, broadcasting: 7\n" May 29 13:57:25.930: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:57:27.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9477" for this suite. 
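The command under test pipes stdin into a one-shot workload, attaches to it, and deletes it on exit; a minimal sketch of the same invocation with a hypothetical job name — note the stderr above: on this 1.15-era client --generator=job/v1 is already deprecated in favor of kubectl create job:

echo abcd1234 | kubectl run e2e-demo-job --image=docker.io/library/busybox:1.29 \
  --rm --restart=OnFailure --attach --stdin \
  --generator=job/v1 -- sh -c 'cat && echo stdin closed'
# --rm deletes the job after the attached session ends, which is what the
# "verifying the job ... was deleted" step above confirms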
May 29 13:57:33.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:57:34.041: INFO: namespace kubectl-9477 deletion completed in 6.100506699s • [SLOW TEST:11.427 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:57:34.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8546.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8546.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8546.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8546.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8546.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8546.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8546.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8546.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8546.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8546.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8546.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 126.12.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.12.126_udp@PTR;check="$$(dig +tcp +noall +answer +search 126.12.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.12.126_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8546.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8546.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8546.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8546.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8546.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8546.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8546.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8546.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8546.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8546.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8546.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 126.12.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.12.126_udp@PTR;check="$$(dig +tcp +noall +answer +search 126.12.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.12.126_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 29 13:57:40.229: INFO: Unable to read wheezy_udp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:40.234: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:40.242: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:40.245: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:40.276: INFO: Unable to read jessie_udp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:40.280: INFO: Unable to read jessie_tcp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:40.285: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:40.298: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:40.311: INFO: Lookups using dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b failed for: [wheezy_udp@dns-test-service.dns-8546.svc.cluster.local wheezy_tcp@dns-test-service.dns-8546.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local jessie_udp@dns-test-service.dns-8546.svc.cluster.local jessie_tcp@dns-test-service.dns-8546.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local] May 29 13:57:45.318: INFO: Unable to read wheezy_udp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:45.322: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods 
dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:45.324: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:45.326: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:45.344: INFO: Unable to read jessie_udp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:45.347: INFO: Unable to read jessie_tcp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:45.350: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:45.353: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:45.368: INFO: Lookups using dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b failed for: [wheezy_udp@dns-test-service.dns-8546.svc.cluster.local wheezy_tcp@dns-test-service.dns-8546.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local jessie_udp@dns-test-service.dns-8546.svc.cluster.local jessie_tcp@dns-test-service.dns-8546.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local] May 29 13:57:50.315: INFO: Unable to read wheezy_udp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:50.318: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:50.322: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:50.325: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:50.347: INFO: Unable to read jessie_udp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the 
server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:50.350: INFO: Unable to read jessie_tcp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:50.353: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:50.356: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:50.375: INFO: Lookups using dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b failed for: [wheezy_udp@dns-test-service.dns-8546.svc.cluster.local wheezy_tcp@dns-test-service.dns-8546.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local jessie_udp@dns-test-service.dns-8546.svc.cluster.local jessie_tcp@dns-test-service.dns-8546.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local] May 29 13:57:55.316: INFO: Unable to read wheezy_udp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:55.320: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:55.324: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:55.327: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:55.349: INFO: Unable to read jessie_udp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:55.352: INFO: Unable to read jessie_tcp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:55.354: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:55.357: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod 
dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:57:55.375: INFO: Lookups using dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b failed for: [wheezy_udp@dns-test-service.dns-8546.svc.cluster.local wheezy_tcp@dns-test-service.dns-8546.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local jessie_udp@dns-test-service.dns-8546.svc.cluster.local jessie_tcp@dns-test-service.dns-8546.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local] May 29 13:58:00.315: INFO: Unable to read wheezy_udp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:00.319: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:00.321: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:00.324: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:00.345: INFO: Unable to read jessie_udp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:00.348: INFO: Unable to read jessie_tcp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:00.350: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:00.353: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:00.371: INFO: Lookups using dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b failed for: [wheezy_udp@dns-test-service.dns-8546.svc.cluster.local wheezy_tcp@dns-test-service.dns-8546.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local jessie_udp@dns-test-service.dns-8546.svc.cluster.local jessie_tcp@dns-test-service.dns-8546.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local] May 29 
13:58:05.316: INFO: Unable to read wheezy_udp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:05.319: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:05.323: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:05.326: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:05.346: INFO: Unable to read jessie_udp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:05.349: INFO: Unable to read jessie_tcp@dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:05.351: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:05.354: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local from pod dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b: the server could not find the requested resource (get pods dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b) May 29 13:58:05.371: INFO: Lookups using dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b failed for: [wheezy_udp@dns-test-service.dns-8546.svc.cluster.local wheezy_tcp@dns-test-service.dns-8546.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local jessie_udp@dns-test-service.dns-8546.svc.cluster.local jessie_tcp@dns-test-service.dns-8546.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local] May 29 13:58:10.387: INFO: DNS probes using dns-8546/dns-test-e925aab3-bdfb-4b86-b193-d8270c03537b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:58:11.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8546" for this suite. 
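De-escaped, each probe iteration above reduces to a pair of dig queries per record type, one UDP and one TCP, writing an OK marker only when an answer section comes back (the doubled $$ in the log escapes $ from Kubernetes' own $(VAR) expansion in container commands, and the /results paths assume the probe pod's results volume); a condensed sketch reusing the service and namespace names from this spec:

check="$(dig +notcp +noall +answer +search dns-test-service.dns-8546.svc.cluster.local A)" \
  && test -n "$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8546.svc.cluster.local
check="$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8546.svc.cluster.local SRV)" \
  && test -n "$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8546.svc.cluster.local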
May 29 13:58:17.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:58:17.218: INFO: namespace dns-8546 deletion completed in 6.0947428s • [SLOW TEST:43.177 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:58:17.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 29 13:58:21.911: INFO: Successfully updated pod "annotationupdate28bdbe4d-5d2f-40f6-89b0-a27e6a906c70" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:58:23.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2335" for this suite. 
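This spec relies on the kubelet refreshing downwardAPI volume files when pod metadata changes; a minimal sketch with hypothetical names (annotationupdate-demo, builder) — after the annotate call, the mounted annotations file is rewritten on the next kubelet sync, with no container restart:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    # keep printing the projected annotations file so the update is visible
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
kubectl annotate pod annotationupdate-demo builder=bob --overwrite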
May 29 13:58:45.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:58:46.045: INFO: namespace downward-api-2335 deletion completed in 22.113588187s • [SLOW TEST:28.826 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:58:46.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 29 13:58:46.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2372' May 29 13:58:46.412: INFO: stderr: "" May 29 13:58:46.412: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 29 13:58:47.428: INFO: Selector matched 1 pods for map[app:redis] May 29 13:58:47.429: INFO: Found 0 / 1 May 29 13:58:48.418: INFO: Selector matched 1 pods for map[app:redis] May 29 13:58:48.418: INFO: Found 0 / 1 May 29 13:58:49.434: INFO: Selector matched 1 pods for map[app:redis] May 29 13:58:49.434: INFO: Found 0 / 1 May 29 13:58:50.417: INFO: Selector matched 1 pods for map[app:redis] May 29 13:58:50.417: INFO: Found 1 / 1 May 29 13:58:50.417: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 29 13:58:50.421: INFO: Selector matched 1 pods for map[app:redis] May 29 13:58:50.421: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 29 13:58:50.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-flw25 --namespace=kubectl-2372 -p {"metadata":{"annotations":{"x":"y"}}}' May 29 13:58:50.520: INFO: stderr: "" May 29 13:58:50.520: INFO: stdout: "pod/redis-master-flw25 patched\n" STEP: checking annotations May 29 13:58:50.524: INFO: Selector matched 1 pods for map[app:redis] May 29 13:58:50.524: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 13:58:50.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2372" for this suite. 
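The patch step above is an ordinary strategic-merge patch; the same two commands, reusing the pod and namespace names from the log, apply the annotation and then verify it:

kubectl patch pod redis-master-flw25 --namespace=kubectl-2372 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod redis-master-flw25 --namespace=kubectl-2372 \
  -o jsonpath='{.metadata.annotations.x}'   # prints: y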
May 29 13:59:12.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 13:59:12.619: INFO: namespace kubectl-2372 deletion completed in 22.092043457s • [SLOW TEST:26.574 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 13:59:12.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 29 13:59:13.302: INFO: Pod name wrapped-volume-race-d1e7fd53-fb54-4524-9562-244ed488dc48: Found 0 pods out of 5 May 29 13:59:18.311: INFO: Pod name wrapped-volume-race-d1e7fd53-fb54-4524-9562-244ed488dc48: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d1e7fd53-fb54-4524-9562-244ed488dc48 in namespace emptydir-wrapper-641, will wait for the garbage collector to delete the pods May 29 13:59:32.401: INFO: Deleting ReplicationController wrapped-volume-race-d1e7fd53-fb54-4524-9562-244ed488dc48 took: 8.306515ms May 29 13:59:32.702: INFO: Terminating ReplicationController wrapped-volume-race-d1e7fd53-fb54-4524-9562-244ed488dc48 pods took: 300.345441ms STEP: Creating RC which spawns configmap-volume pods May 29 14:00:12.431: INFO: Pod name wrapped-volume-race-1a67b8b4-70d7-4de3-8c45-ce3d557bd0ec: Found 0 pods out of 5 May 29 14:00:17.443: INFO: Pod name wrapped-volume-race-1a67b8b4-70d7-4de3-8c45-ce3d557bd0ec: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1a67b8b4-70d7-4de3-8c45-ce3d557bd0ec in namespace emptydir-wrapper-641, will wait for the garbage collector to delete the pods May 29 14:00:31.523: INFO: Deleting ReplicationController wrapped-volume-race-1a67b8b4-70d7-4de3-8c45-ce3d557bd0ec took: 6.718645ms May 29 14:00:31.923: INFO: Terminating ReplicationController wrapped-volume-race-1a67b8b4-70d7-4de3-8c45-ce3d557bd0ec pods took: 400.260783ms STEP: Creating RC which spawns configmap-volume pods May 29 14:01:12.650: INFO: Pod name wrapped-volume-race-2b5328b3-8c38-4efc-9607-0a9814f2e86a: Found 0 pods out of 5 May 29 14:01:17.659: INFO: Pod name wrapped-volume-race-2b5328b3-8c38-4efc-9607-0a9814f2e86a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2b5328b3-8c38-4efc-9607-0a9814f2e86a in namespace emptydir-wrapper-641, will wait for the garbage 
collector to delete the pods May 29 14:01:31.775: INFO: Deleting ReplicationController wrapped-volume-race-2b5328b3-8c38-4efc-9607-0a9814f2e86a took: 38.756545ms May 29 14:01:32.075: INFO: Terminating ReplicationController wrapped-volume-race-2b5328b3-8c38-4efc-9607-0a9814f2e86a pods took: 300.311811ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:02:13.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-641" for this suite. May 29 14:02:21.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:02:21.846: INFO: namespace emptydir-wrapper-641 deletion completed in 8.099227216s • [SLOW TEST:189.227 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:02:21.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-4f913e6e-5553-4e69-9ed5-ca0e910ce2a8 STEP: Creating a pod to test consume secrets May 29 14:02:22.210: INFO: Waiting up to 5m0s for pod "pod-secrets-09d741f9-dd89-4ff0-a03b-39e06cc76dda" in namespace "secrets-5121" to be "success or failure" May 29 14:02:22.227: INFO: Pod "pod-secrets-09d741f9-dd89-4ff0-a03b-39e06cc76dda": Phase="Pending", Reason="", readiness=false. Elapsed: 16.841347ms May 29 14:02:24.281: INFO: Pod "pod-secrets-09d741f9-dd89-4ff0-a03b-39e06cc76dda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07112518s May 29 14:02:26.285: INFO: Pod "pod-secrets-09d741f9-dd89-4ff0-a03b-39e06cc76dda": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.075072522s STEP: Saw pod success May 29 14:02:26.285: INFO: Pod "pod-secrets-09d741f9-dd89-4ff0-a03b-39e06cc76dda" satisfied condition "success or failure" May 29 14:02:26.288: INFO: Trying to get logs from node iruya-worker pod pod-secrets-09d741f9-dd89-4ff0-a03b-39e06cc76dda container secret-volume-test: STEP: delete the pod May 29 14:02:26.323: INFO: Waiting for pod pod-secrets-09d741f9-dd89-4ff0-a03b-39e06cc76dda to disappear May 29 14:02:26.365: INFO: Pod pod-secrets-09d741f9-dd89-4ff0-a03b-39e06cc76dda no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:02:26.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5121" for this suite. May 29 14:02:32.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:02:32.464: INFO: namespace secrets-5121 deletion completed in 6.094515668s STEP: Destroying namespace "secret-namespace-7310" for this suite. May 29 14:02:38.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:02:38.599: INFO: namespace secret-namespace-7310 deletion completed in 6.135309048s • [SLOW TEST:16.753 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:02:38.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6219.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6219.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6219.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6219.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 29 14:02:44.797: INFO: DNS probes using dns-test-c11a2ad9-7279-4318-b632-36387bd5f0d9 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6219.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6219.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short 
dns-test-service-3.dns-6219.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6219.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 29 14:02:52.950: INFO: File wheezy_udp@dns-test-service-3.dns-6219.svc.cluster.local from pod dns-6219/dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 14:02:52.954: INFO: File jessie_udp@dns-test-service-3.dns-6219.svc.cluster.local from pod dns-6219/dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 14:02:52.954: INFO: Lookups using dns-6219/dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 failed for: [wheezy_udp@dns-test-service-3.dns-6219.svc.cluster.local jessie_udp@dns-test-service-3.dns-6219.svc.cluster.local] May 29 14:02:57.958: INFO: File wheezy_udp@dns-test-service-3.dns-6219.svc.cluster.local from pod dns-6219/dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 14:02:57.962: INFO: File jessie_udp@dns-test-service-3.dns-6219.svc.cluster.local from pod dns-6219/dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 14:02:57.962: INFO: Lookups using dns-6219/dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 failed for: [wheezy_udp@dns-test-service-3.dns-6219.svc.cluster.local jessie_udp@dns-test-service-3.dns-6219.svc.cluster.local] May 29 14:03:02.959: INFO: File wheezy_udp@dns-test-service-3.dns-6219.svc.cluster.local from pod dns-6219/dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 14:03:02.963: INFO: File jessie_udp@dns-test-service-3.dns-6219.svc.cluster.local from pod dns-6219/dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 14:03:02.963: INFO: Lookups using dns-6219/dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 failed for: [wheezy_udp@dns-test-service-3.dns-6219.svc.cluster.local jessie_udp@dns-test-service-3.dns-6219.svc.cluster.local] May 29 14:03:07.959: INFO: File wheezy_udp@dns-test-service-3.dns-6219.svc.cluster.local from pod dns-6219/dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 14:03:07.962: INFO: File jessie_udp@dns-test-service-3.dns-6219.svc.cluster.local from pod dns-6219/dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 14:03:07.962: INFO: Lookups using dns-6219/dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 failed for: [wheezy_udp@dns-test-service-3.dns-6219.svc.cluster.local jessie_udp@dns-test-service-3.dns-6219.svc.cluster.local] May 29 14:03:12.959: INFO: File wheezy_udp@dns-test-service-3.dns-6219.svc.cluster.local from pod dns-6219/dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 contains 'foo.example.com. ' instead of 'bar.example.com.' May 29 14:03:12.963: INFO: File jessie_udp@dns-test-service-3.dns-6219.svc.cluster.local from pod dns-6219/dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 contains 'foo.example.com. ' instead of 'bar.example.com.' 
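The repeated failures above are the probe catching up with a spec change: the test edited the service's externalName from foo.example.com to bar.example.com, the probe pods re-run dig every second, and the test re-reads the result files on a 5s cadence until the cluster DNS serves the new CNAME. A minimal sketch of the object being edited, assuming the k8s.io/api and k8s.io/apimachinery modules are on the module path (the e2e suite builds this object in Go; the names below are taken from the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// An ExternalName service publishes a CNAME record instead of a ClusterIP;
	// changing spec.externalName changes what the in-cluster name resolves to.
	svc := corev1.Service{
		TypeMeta:   metav1.TypeMeta{Kind: "Service", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3", Namespace: "dns-6219"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com", // the test later updates this to bar.example.com, then switches the service to type=ClusterIP
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}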
May 29 14:03:12.963: INFO: Lookups using dns-6219/dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 failed for: [wheezy_udp@dns-test-service-3.dns-6219.svc.cluster.local jessie_udp@dns-test-service-3.dns-6219.svc.cluster.local] May 29 14:03:17.962: INFO: DNS probes using dns-test-bc4321ef-cb66-4f28-b753-a7be95f51a67 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6219.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6219.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6219.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6219.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 29 14:03:26.674: INFO: DNS probes using dns-test-8c742dc2-8c8a-4423-951c-865c52366b21 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:03:26.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6219" for this suite. May 29 14:03:33.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:03:33.264: INFO: namespace dns-6219 deletion completed in 6.100129219s • [SLOW TEST:54.665 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:03:33.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 29 14:03:37.854: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7d03fe56-5185-400b-a7e5-bd5a4a66919f" May 29 14:03:37.854: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7d03fe56-5185-400b-a7e5-bd5a4a66919f" in namespace "pods-2486" to be "terminated due to deadline exceeded" May 29 14:03:37.861: INFO: Pod "pod-update-activedeadlineseconds-7d03fe56-5185-400b-a7e5-bd5a4a66919f": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.975954ms May 29 14:03:39.863: INFO: Pod "pod-update-activedeadlineseconds-7d03fe56-5185-400b-a7e5-bd5a4a66919f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.009568593s May 29 14:03:39.863: INFO: Pod "pod-update-activedeadlineseconds-7d03fe56-5185-400b-a7e5-bd5a4a66919f" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:03:39.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2486" for this suite. May 29 14:03:45.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:03:45.956: INFO: namespace pods-2486 deletion completed in 6.089520797s • [SLOW TEST:12.691 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:03:45.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:04:46.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7366" for this suite. 
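The probe test above produces no output of its own: it creates a pod whose readiness probe always fails, then watches for roughly a minute (14:03:46 to 14:04:46) that the pod never becomes Ready and its restart count stays at zero. A sketch of such a pod, assuming a v1.15-era k8s.io/api where the probe's handler is the embedded Handler field (newer releases renamed it to ProbeHandler); the pod name, image, and probe timings are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A probe that always fails: the pod runs but never reports Ready.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-fail-example"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "probe-test",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				ReadinessProbe: &corev1.Probe{
					Handler:             corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Because the failing probe is a readiness probe rather than a liveness probe, the kubelet only withholds the pod from service endpoints; it never kills or restarts the container.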
May 29 14:05:08.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:05:08.153: INFO: namespace container-probe-7366 deletion completed in 22.105139795s • [SLOW TEST:82.196 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:05:08.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-2a4f4132-a540-42c2-a880-d222291fb9f9 STEP: Creating a pod to test consume secrets May 29 14:05:08.296: INFO: Waiting up to 5m0s for pod "pod-secrets-dd2b3c6e-18d1-4bad-8fb8-9a8b023910b3" in namespace "secrets-3820" to be "success or failure" May 29 14:05:08.300: INFO: Pod "pod-secrets-dd2b3c6e-18d1-4bad-8fb8-9a8b023910b3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.764376ms May 29 14:05:10.303: INFO: Pod "pod-secrets-dd2b3c6e-18d1-4bad-8fb8-9a8b023910b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007317551s May 29 14:05:12.307: INFO: Pod "pod-secrets-dd2b3c6e-18d1-4bad-8fb8-9a8b023910b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011145075s STEP: Saw pod success May 29 14:05:12.307: INFO: Pod "pod-secrets-dd2b3c6e-18d1-4bad-8fb8-9a8b023910b3" satisfied condition "success or failure" May 29 14:05:12.310: INFO: Trying to get logs from node iruya-worker pod pod-secrets-dd2b3c6e-18d1-4bad-8fb8-9a8b023910b3 container secret-volume-test: STEP: delete the pod May 29 14:05:12.435: INFO: Waiting for pod pod-secrets-dd2b3c6e-18d1-4bad-8fb8-9a8b023910b3 to disappear May 29 14:05:12.450: INFO: Pod pod-secrets-dd2b3c6e-18d1-4bad-8fb8-9a8b023910b3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:05:12.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3820" for this suite. 
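The "mappings and Item Mode" variant just exercised mounts a secret while remapping a key to a new file name and giving that file an explicit mode, which the secret-volume-test container reads back. A sketch of the volume shape, assuming k8s.io/api; the key, path, mode, and mount command are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // per-item file mode, verified by the test container

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-mapping-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
					SecretName: "secret-test-map",
					// Remap the key "data-1" to a new file name inside the mount.
					Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test", // container name from the log above
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}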
May 29 14:05:18.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:05:18.551: INFO: namespace secrets-3820 deletion completed in 6.088880763s • [SLOW TEST:10.397 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:05:18.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-1058/configmap-test-c1ae4f43-6ad2-43cc-9326-22437bb82d70 STEP: Creating a pod to test consume configMaps May 29 14:05:18.610: INFO: Waiting up to 5m0s for pod "pod-configmaps-55083003-29a8-4734-8924-e06b988f8049" in namespace "configmap-1058" to be "success or failure" May 29 14:05:18.624: INFO: Pod "pod-configmaps-55083003-29a8-4734-8924-e06b988f8049": Phase="Pending", Reason="", readiness=false. Elapsed: 13.67656ms May 29 14:05:20.629: INFO: Pod "pod-configmaps-55083003-29a8-4734-8924-e06b988f8049": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018081486s May 29 14:05:22.633: INFO: Pod "pod-configmaps-55083003-29a8-4734-8924-e06b988f8049": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023014035s STEP: Saw pod success May 29 14:05:22.634: INFO: Pod "pod-configmaps-55083003-29a8-4734-8924-e06b988f8049" satisfied condition "success or failure" May 29 14:05:22.637: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-55083003-29a8-4734-8924-e06b988f8049 container env-test: STEP: delete the pod May 29 14:05:22.661: INFO: Waiting for pod pod-configmaps-55083003-29a8-4734-8924-e06b988f8049 to disappear May 29 14:05:22.671: INFO: Pod pod-configmaps-55083003-29a8-4734-8924-e06b988f8049 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:05:22.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1058" for this suite. 
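For the environment-variable flavor of ConfigMap consumption there is no volume at all: the pod's env list pulls a single key out of the configMap, and the test passes if the container's environment shows the expected value. A sketch, assuming k8s.io/api; the configMap name, key, and env var name are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test", // container name from the log above
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						// Resolve the value from one key of a ConfigMap at pod start.
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Unlike mounted configMaps, env vars resolved this way are fixed at container start; later configMap edits are not reflected.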
May 29 14:05:28.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:05:28.797: INFO: namespace configmap-1058 deletion completed in 6.122794676s • [SLOW TEST:10.247 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:05:28.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:05:28.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7672" for this suite. May 29 14:05:34.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:05:34.987: INFO: namespace services-7672 deletion completed in 6.084518015s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.189 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:05:34.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0529 14:05:46.770933 7 
metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 29 14:05:46.771: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:05:46.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6673" for this suite. May 29 14:05:56.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:05:57.093: INFO: namespace gc-6673 deletion completed in 10.318957783s • [SLOW TEST:22.106 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:05:57.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 29 14:05:57.233: INFO: Waiting up to 5m0s for pod "downward-api-f5229176-2a3e-468d-8415-706ee1104cab" in namespace "downward-api-2579" to be "success or failure" May 29 14:05:57.257: INFO: Pod "downward-api-f5229176-2a3e-468d-8415-706ee1104cab": Phase="Pending", Reason="", readiness=false. Elapsed: 23.792799ms May 29 14:05:59.290: INFO: Pod "downward-api-f5229176-2a3e-468d-8415-706ee1104cab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056761796s May 29 14:06:01.294: INFO: Pod "downward-api-f5229176-2a3e-468d-8415-706ee1104cab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.060889877s STEP: Saw pod success May 29 14:06:01.294: INFO: Pod "downward-api-f5229176-2a3e-468d-8415-706ee1104cab" satisfied condition "success or failure" May 29 14:06:01.297: INFO: Trying to get logs from node iruya-worker pod downward-api-f5229176-2a3e-468d-8415-706ee1104cab container dapi-container: STEP: delete the pod May 29 14:06:01.333: INFO: Waiting for pod downward-api-f5229176-2a3e-468d-8415-706ee1104cab to disappear May 29 14:06:01.337: INFO: Pod downward-api-f5229176-2a3e-468d-8415-706ee1104cab no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:06:01.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2579" for this suite. May 29 14:06:07.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:06:07.431: INFO: namespace downward-api-2579 deletion completed in 6.089055966s • [SLOW TEST:10.337 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:06:07.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-0ea1be58-2c38-496d-a95c-933994221e14 STEP: Creating a pod to test consume configMaps May 29 14:06:07.495: INFO: Waiting up to 5m0s for pod "pod-configmaps-895f3610-2ea7-4965-98af-aa0da8fc273b" in namespace "configmap-9749" to be "success or failure" May 29 14:06:07.499: INFO: Pod "pod-configmaps-895f3610-2ea7-4965-98af-aa0da8fc273b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127622ms May 29 14:06:09.503: INFO: Pod "pod-configmaps-895f3610-2ea7-4965-98af-aa0da8fc273b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0085232s May 29 14:06:11.508: INFO: Pod "pod-configmaps-895f3610-2ea7-4965-98af-aa0da8fc273b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012669843s STEP: Saw pod success May 29 14:06:11.508: INFO: Pod "pod-configmaps-895f3610-2ea7-4965-98af-aa0da8fc273b" satisfied condition "success or failure" May 29 14:06:11.511: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-895f3610-2ea7-4965-98af-aa0da8fc273b container configmap-volume-test: STEP: delete the pod May 29 14:06:11.531: INFO: Waiting for pod pod-configmaps-895f3610-2ea7-4965-98af-aa0da8fc273b to disappear May 29 14:06:11.578: INFO: Pod pod-configmaps-895f3610-2ea7-4965-98af-aa0da8fc273b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:06:11.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9749" for this suite. May 29 14:06:17.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:06:17.687: INFO: namespace configmap-9749 deletion completed in 6.104813391s • [SLOW TEST:10.256 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:06:17.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 29 14:06:17.832: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef70d052-f933-4dd2-afa2-433c47e6fe2a" in namespace "projected-6993" to be "success or failure" May 29 14:06:17.850: INFO: Pod "downwardapi-volume-ef70d052-f933-4dd2-afa2-433c47e6fe2a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.965681ms May 29 14:06:19.962: INFO: Pod "downwardapi-volume-ef70d052-f933-4dd2-afa2-433c47e6fe2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130425732s May 29 14:06:22.010: INFO: Pod "downwardapi-volume-ef70d052-f933-4dd2-afa2-433c47e6fe2a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.178572988s STEP: Saw pod success May 29 14:06:22.010: INFO: Pod "downwardapi-volume-ef70d052-f933-4dd2-afa2-433c47e6fe2a" satisfied condition "success or failure" May 29 14:06:22.014: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ef70d052-f933-4dd2-afa2-433c47e6fe2a container client-container: STEP: delete the pod May 29 14:06:22.082: INFO: Waiting for pod downwardapi-volume-ef70d052-f933-4dd2-afa2-433c47e6fe2a to disappear May 29 14:06:22.207: INFO: Pod downwardapi-volume-ef70d052-f933-4dd2-afa2-433c47e6fe2a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:06:22.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6993" for this suite. May 29 14:06:28.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:06:28.301: INFO: namespace projected-6993 deletion completed in 6.090242681s • [SLOW TEST:10.613 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:06:28.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 29 14:06:32.639: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:06:32.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9532" for this suite. 
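The 'Expected: &{DONE} to match ... DONE' line is the heart of the container-runtime test above: the container writes DONE to a non-default terminationMessagePath while running as a non-root user, and the kubelet copies that file's contents into the container's terminated-state status. A sketch, assuming k8s.io/api; the path, UID, and image are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // any non-root UID that can write the message file

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "term-msg",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				// Non-default path: the kubelet reads this file into
				// status.containerStatuses[].state.terminated.message.
				TerminationMessagePath: "/dev/termination-custom-log",
				SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}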
May 29 14:06:38.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:06:38.969: INFO: namespace container-runtime-9532 deletion completed in 6.141342476s • [SLOW TEST:10.667 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:06:38.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-3bc964e7-268f-432c-acbe-386099cb8e58 STEP: Creating a pod to test consume secrets May 29 14:06:39.101: INFO: Waiting up to 5m0s for pod "pod-secrets-53dfcb30-407e-42ff-a8d1-565d4363302b" in namespace "secrets-564" to be "success or failure" May 29 14:06:39.154: INFO: Pod "pod-secrets-53dfcb30-407e-42ff-a8d1-565d4363302b": Phase="Pending", Reason="", readiness=false. Elapsed: 53.505422ms May 29 14:06:41.158: INFO: Pod "pod-secrets-53dfcb30-407e-42ff-a8d1-565d4363302b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057789983s May 29 14:06:43.163: INFO: Pod "pod-secrets-53dfcb30-407e-42ff-a8d1-565d4363302b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062008s STEP: Saw pod success May 29 14:06:43.163: INFO: Pod "pod-secrets-53dfcb30-407e-42ff-a8d1-565d4363302b" satisfied condition "success or failure" May 29 14:06:43.166: INFO: Trying to get logs from node iruya-worker pod pod-secrets-53dfcb30-407e-42ff-a8d1-565d4363302b container secret-volume-test: STEP: delete the pod May 29 14:06:43.208: INFO: Waiting for pod pod-secrets-53dfcb30-407e-42ff-a8d1-565d4363302b to disappear May 29 14:06:43.219: INFO: Pod pod-secrets-53dfcb30-407e-42ff-a8d1-565d4363302b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:06:43.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-564" for this suite. 
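"Consumable in multiple volumes" mounts the same secret twice: two volume entries reference one secret, and each gets its own mount path inside the container. A sketch, assuming k8s.io/api; the secret name, key, and paths are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two volumes backed by the same secret; the kubelet projects the
	// secret's keys into both mount points independently.
	secretSource := corev1.VolumeSource{
		Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
	}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-multi-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "secret-volume-1", VolumeSource: secretSource},
				{Name: "secret-volume-2", VolumeSource: secretSource},
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test", // container name from the log above
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}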
May 29 14:06:49.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:06:49.350: INFO: namespace secrets-564 deletion completed in 6.127768518s • [SLOW TEST:10.380 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:06:49.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 29 14:06:49.407: INFO: Waiting up to 5m0s for pod "downward-api-35f4d45e-7886-40ae-b826-338aebf998fd" in namespace "downward-api-8276" to be "success or failure" May 29 14:06:49.410: INFO: Pod "downward-api-35f4d45e-7886-40ae-b826-338aebf998fd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.531969ms May 29 14:06:51.414: INFO: Pod "downward-api-35f4d45e-7886-40ae-b826-338aebf998fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006851187s May 29 14:06:53.418: INFO: Pod "downward-api-35f4d45e-7886-40ae-b826-338aebf998fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010923519s STEP: Saw pod success May 29 14:06:53.418: INFO: Pod "downward-api-35f4d45e-7886-40ae-b826-338aebf998fd" satisfied condition "success or failure" May 29 14:06:53.421: INFO: Trying to get logs from node iruya-worker2 pod downward-api-35f4d45e-7886-40ae-b826-338aebf998fd container dapi-container: STEP: delete the pod May 29 14:06:53.447: INFO: Waiting for pod downward-api-35f4d45e-7886-40ae-b826-338aebf998fd to disappear May 29 14:06:53.459: INFO: Pod downward-api-35f4d45e-7886-40ae-b826-338aebf998fd no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:06:53.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8276" for this suite. 
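The interesting part of "default limits.cpu/memory" is what the container does not declare: with no resources.limits set, the downward API's resourceFieldRef falls back to the node's allocatable capacity, so the env vars are still populated. A sketch, assuming k8s.io/api and k8s.io/apimachinery; the env var names and divisors are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container", // container name from the log above
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				// No Resources field: limits.cpu/limits.memory resolve to node allocatable.
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{ResourceFieldRef: &corev1.ResourceFieldSelector{
							Resource: "limits.cpu",
							Divisor:  resource.MustParse("1"),
						}},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{ResourceFieldRef: &corev1.ResourceFieldSelector{
							Resource: "limits.memory",
							Divisor:  resource.MustParse("1Mi"),
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}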
May 29 14:06:59.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:06:59.595: INFO: namespace downward-api-8276 deletion completed in 6.132449733s • [SLOW TEST:10.245 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:06:59.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 29 14:06:59.659: INFO: Waiting up to 5m0s for pod "pod-f700affd-f16c-49df-a255-27e1d79782aa" in namespace "emptydir-1677" to be "success or failure" May 29 14:06:59.663: INFO: Pod "pod-f700affd-f16c-49df-a255-27e1d79782aa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.699135ms May 29 14:07:01.681: INFO: Pod "pod-f700affd-f16c-49df-a255-27e1d79782aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021921036s May 29 14:07:03.686: INFO: Pod "pod-f700affd-f16c-49df-a255-27e1d79782aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026589545s STEP: Saw pod success May 29 14:07:03.686: INFO: Pod "pod-f700affd-f16c-49df-a255-27e1d79782aa" satisfied condition "success or failure" May 29 14:07:03.689: INFO: Trying to get logs from node iruya-worker pod pod-f700affd-f16c-49df-a255-27e1d79782aa container test-container: STEP: delete the pod May 29 14:07:03.744: INFO: Waiting for pod pod-f700affd-f16c-49df-a255-27e1d79782aa to disappear May 29 14:07:03.818: INFO: Pod pod-f700affd-f16c-49df-a255-27e1d79782aa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:07:03.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1677" for this suite. 
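The (root,0644,tmpfs) triple in the test name decodes as: run as root, create a file with mode 0644, and back the emptyDir with memory (tmpfs) rather than node disk. A sketch of the volume, assuming k8s.io/api; the mount path and verification command are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" mounts a tmpfs; the default medium uses node storage.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container", // container name from the log above
				Image: "busybox",
				Command: []string{"sh", "-c",
					"echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}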
May 29 14:07:09.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:07:09.908: INFO: namespace emptydir-1677 deletion completed in 6.084917582s • [SLOW TEST:10.312 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:07:09.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-5kt9 STEP: Creating a pod to test atomic-volume-subpath May 29 14:07:10.114: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-5kt9" in namespace "subpath-5554" to be "success or failure" May 29 14:07:10.123: INFO: Pod "pod-subpath-test-secret-5kt9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.34027ms May 29 14:07:12.135: INFO: Pod "pod-subpath-test-secret-5kt9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021620783s May 29 14:07:14.140: INFO: Pod "pod-subpath-test-secret-5kt9": Phase="Running", Reason="", readiness=true. Elapsed: 4.025662479s May 29 14:07:16.144: INFO: Pod "pod-subpath-test-secret-5kt9": Phase="Running", Reason="", readiness=true. Elapsed: 6.029960208s May 29 14:07:18.149: INFO: Pod "pod-subpath-test-secret-5kt9": Phase="Running", Reason="", readiness=true. Elapsed: 8.035293229s May 29 14:07:20.153: INFO: Pod "pod-subpath-test-secret-5kt9": Phase="Running", Reason="", readiness=true. Elapsed: 10.039071425s May 29 14:07:22.156: INFO: Pod "pod-subpath-test-secret-5kt9": Phase="Running", Reason="", readiness=true. Elapsed: 12.041931313s May 29 14:07:24.160: INFO: Pod "pod-subpath-test-secret-5kt9": Phase="Running", Reason="", readiness=true. Elapsed: 14.045803602s May 29 14:07:26.164: INFO: Pod "pod-subpath-test-secret-5kt9": Phase="Running", Reason="", readiness=true. Elapsed: 16.04994287s May 29 14:07:28.169: INFO: Pod "pod-subpath-test-secret-5kt9": Phase="Running", Reason="", readiness=true. Elapsed: 18.054643657s May 29 14:07:30.174: INFO: Pod "pod-subpath-test-secret-5kt9": Phase="Running", Reason="", readiness=true. Elapsed: 20.059629191s May 29 14:07:32.177: INFO: Pod "pod-subpath-test-secret-5kt9": Phase="Running", Reason="", readiness=true. Elapsed: 22.06355352s May 29 14:07:34.182: INFO: Pod "pod-subpath-test-secret-5kt9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.067905328s STEP: Saw pod success May 29 14:07:34.182: INFO: Pod "pod-subpath-test-secret-5kt9" satisfied condition "success or failure" May 29 14:07:34.190: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-5kt9 container test-container-subpath-secret-5kt9: STEP: delete the pod May 29 14:07:34.215: INFO: Waiting for pod pod-subpath-test-secret-5kt9 to disappear May 29 14:07:34.232: INFO: Pod pod-subpath-test-secret-5kt9 no longer exists STEP: Deleting pod pod-subpath-test-secret-5kt9 May 29 14:07:34.232: INFO: Deleting pod "pod-subpath-test-secret-5kt9" in namespace "subpath-5554" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:07:34.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5554" for this suite. May 29 14:07:40.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:07:40.465: INFO: namespace subpath-5554 deletion completed in 6.229354889s • [SLOW TEST:30.557 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:07:40.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-4066 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4066 to expose endpoints map[] May 29 14:07:40.586: INFO: Get endpoints failed (14.09242ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 29 14:07:41.590: INFO: successfully validated that service endpoint-test2 in namespace services-4066 exposes endpoints map[] (1.018301361s elapsed) STEP: Creating pod pod1 in namespace services-4066 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4066 to expose endpoints map[pod1:[80]] May 29 14:07:45.657: INFO: successfully validated that service endpoint-test2 in namespace services-4066 exposes endpoints map[pod1:[80]] (4.059322181s elapsed) STEP: Creating pod pod2 in namespace services-4066 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4066 to expose endpoints map[pod1:[80] pod2:[80]] May 29 14:07:48.779: INFO: successfully validated that service endpoint-test2 in namespace services-4066 exposes endpoints map[pod1:[80] pod2:[80]] (3.117827645s elapsed) 
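Each "successfully validated" line above is the endpoints controller at work: endpoint-test2 starts with no matching pods (map[]), gains pod1 and pod2 as they come up, and the test next deletes them one by one expecting the endpoints map to shrink back. A sketch of the service and one matching pod, assuming k8s.io/api; the selector label and image are assumptions, since the log does not show them:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "endpoint-test2"} // assumed label; it must match the pods

	svc := corev1.Service{
		TypeMeta:   metav1.TypeMeta{Kind: "Service", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2", Namespace: "services-4066"},
		Spec: corev1.ServiceSpec{
			Selector: labels, // the endpoints controller tracks ready pods matching this
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod1", Namespace: "services-4066", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // illustrative; any container serving port 80 works
				Ports: []corev1.ContainerPort{{ContainerPort: 80}},
			}},
		},
	}
	for _, obj := range []interface{}{svc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}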
STEP: Deleting pod pod1 in namespace services-4066 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4066 to expose endpoints map[pod2:[80]] May 29 14:07:49.832: INFO: successfully validated that service endpoint-test2 in namespace services-4066 exposes endpoints map[pod2:[80]] (1.04772526s elapsed) STEP: Deleting pod pod2 in namespace services-4066 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4066 to expose endpoints map[] May 29 14:07:51.011: INFO: successfully validated that service endpoint-test2 in namespace services-4066 exposes endpoints map[] (1.174724212s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:07:51.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4066" for this suite. May 29 14:08:13.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:08:13.281: INFO: namespace services-4066 deletion completed in 22.096907171s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:32.816 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:08:13.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-8076887b-aec4-4081-8d7e-3a0b54f8fd2a STEP: Creating configMap with name cm-test-opt-upd-b8670c0b-9955-4110-b6d7-413a44780770 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8076887b-aec4-4081-8d7e-3a0b54f8fd2a STEP: Updating configmap cm-test-opt-upd-b8670c0b-9955-4110-b6d7-413a44780770 STEP: Creating configMap with name cm-test-opt-create-865d5f85-875e-4fa7-9b93-b2463f058d86 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:08:23.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4370" for this suite. 
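The "optional updates" test above exercises three projections at once: a mounted configMap that gets deleted, one that gets updated, and an optional one created only after the pod is running. Because the volumes are marked optional, the pod tolerates the missing map, and the kubelet's periodic sync folds each change into the mounted files, which is what "waiting to observe update in volume" polls for. A sketch of one optional volume, assuming k8s.io/api; the names, mount path, and command are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true // the pod starts (and keeps running) even if the configMap is absent

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-optional-example"}, // illustrative name
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "createcm-volume",
				VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
					Optional:             &optional,
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "createcm-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/cm-volume/data-1; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "createcm-volume", MountPath: "/etc/cm-volume", ReadOnly: true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}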
May 29 14:08:45.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:08:45.632: INFO: namespace configmap-4370 deletion completed in 22.099943645s • [SLOW TEST:32.351 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:08:45.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 29 14:08:45.690: INFO: Waiting up to 5m0s for pod "pod-54579f23-4548-46ef-975d-113166adb850" in namespace "emptydir-7856" to be "success or failure" May 29 14:08:45.718: INFO: Pod "pod-54579f23-4548-46ef-975d-113166adb850": Phase="Pending", Reason="", readiness=false. Elapsed: 27.583477ms May 29 14:08:47.722: INFO: Pod "pod-54579f23-4548-46ef-975d-113166adb850": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03190999s May 29 14:08:49.726: INFO: Pod "pod-54579f23-4548-46ef-975d-113166adb850": Phase="Running", Reason="", readiness=true. Elapsed: 4.036098112s May 29 14:08:51.731: INFO: Pod "pod-54579f23-4548-46ef-975d-113166adb850": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040491834s STEP: Saw pod success May 29 14:08:51.731: INFO: Pod "pod-54579f23-4548-46ef-975d-113166adb850" satisfied condition "success or failure" May 29 14:08:51.733: INFO: Trying to get logs from node iruya-worker2 pod pod-54579f23-4548-46ef-975d-113166adb850 container test-container: STEP: delete the pod May 29 14:08:51.766: INFO: Waiting for pod pod-54579f23-4548-46ef-975d-113166adb850 to disappear May 29 14:08:51.778: INFO: Pod pod-54579f23-4548-46ef-975d-113166adb850 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:08:51.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7856" for this suite. 
May 29 14:08:57.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:08:57.914: INFO: namespace emptydir-7856 deletion completed in 6.13237427s • [SLOW TEST:12.281 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:08:57.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command May 29 14:08:58.014: INFO: Waiting up to 5m0s for pod "var-expansion-1ef1304f-bba5-4540-bfb6-db74ca3291eb" in namespace "var-expansion-7135" to be "success or failure" May 29 14:08:58.018: INFO: Pod "var-expansion-1ef1304f-bba5-4540-bfb6-db74ca3291eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189034ms May 29 14:09:00.023: INFO: Pod "var-expansion-1ef1304f-bba5-4540-bfb6-db74ca3291eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008736075s May 29 14:09:02.027: INFO: Pod "var-expansion-1ef1304f-bba5-4540-bfb6-db74ca3291eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013146059s STEP: Saw pod success May 29 14:09:02.027: INFO: Pod "var-expansion-1ef1304f-bba5-4540-bfb6-db74ca3291eb" satisfied condition "success or failure" May 29 14:09:02.031: INFO: Trying to get logs from node iruya-worker pod var-expansion-1ef1304f-bba5-4540-bfb6-db74ca3291eb container dapi-container: STEP: delete the pod May 29 14:09:02.068: INFO: Waiting for pod var-expansion-1ef1304f-bba5-4540-bfb6-db74ca3291eb to disappear May 29 14:09:02.080: INFO: Pod var-expansion-1ef1304f-bba5-4540-bfb6-db74ca3291eb no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:09:02.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7135" for this suite. 
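Variable expansion in the test above happens in the kubelet, not a shell: $(VAR) references in command and args are substituted from the container's own env list before the process starts, so no shell is required. A sketch, assuming k8s.io/api; the variable name and value are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container", // container name from the log above
				Image: "busybox",
				// $(TEST_VAR) is expanded by the kubelet from the env list below;
				// a reference that cannot be resolved is passed through literally.
				Command: []string{"echo", "test-value: $(TEST_VAR)"},
				Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}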
May 29 14:09:08.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:09:08.202: INFO: namespace var-expansion-7135 deletion completed in 6.11752797s • [SLOW TEST:10.288 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:09:08.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-jzvs STEP: Creating a pod to test atomic-volume-subpath May 29 14:09:08.307: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-jzvs" in namespace "subpath-5472" to be "success or failure" May 29 14:09:08.327: INFO: Pod "pod-subpath-test-downwardapi-jzvs": Phase="Pending", Reason="", readiness=false. Elapsed: 19.059603ms May 29 14:09:10.331: INFO: Pod "pod-subpath-test-downwardapi-jzvs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023854923s May 29 14:09:12.336: INFO: Pod "pod-subpath-test-downwardapi-jzvs": Phase="Running", Reason="", readiness=true. Elapsed: 4.028323803s May 29 14:09:14.341: INFO: Pod "pod-subpath-test-downwardapi-jzvs": Phase="Running", Reason="", readiness=true. Elapsed: 6.033360042s May 29 14:09:16.344: INFO: Pod "pod-subpath-test-downwardapi-jzvs": Phase="Running", Reason="", readiness=true. Elapsed: 8.036829763s May 29 14:09:18.349: INFO: Pod "pod-subpath-test-downwardapi-jzvs": Phase="Running", Reason="", readiness=true. Elapsed: 10.041863322s May 29 14:09:20.354: INFO: Pod "pod-subpath-test-downwardapi-jzvs": Phase="Running", Reason="", readiness=true. Elapsed: 12.046278002s May 29 14:09:22.358: INFO: Pod "pod-subpath-test-downwardapi-jzvs": Phase="Running", Reason="", readiness=true. Elapsed: 14.050800185s May 29 14:09:24.363: INFO: Pod "pod-subpath-test-downwardapi-jzvs": Phase="Running", Reason="", readiness=true. Elapsed: 16.055141107s May 29 14:09:26.367: INFO: Pod "pod-subpath-test-downwardapi-jzvs": Phase="Running", Reason="", readiness=true. Elapsed: 18.059589144s May 29 14:09:28.372: INFO: Pod "pod-subpath-test-downwardapi-jzvs": Phase="Running", Reason="", readiness=true. Elapsed: 20.064378377s May 29 14:09:30.376: INFO: Pod "pod-subpath-test-downwardapi-jzvs": Phase="Running", Reason="", readiness=true. Elapsed: 22.069007332s May 29 14:09:32.515: INFO: Pod "pod-subpath-test-downwardapi-jzvs": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.207311492s STEP: Saw pod success May 29 14:09:32.515: INFO: Pod "pod-subpath-test-downwardapi-jzvs" satisfied condition "success or failure" May 29 14:09:32.569: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-jzvs container test-container-subpath-downwardapi-jzvs: STEP: delete the pod May 29 14:09:32.593: INFO: Waiting for pod pod-subpath-test-downwardapi-jzvs to disappear May 29 14:09:32.598: INFO: Pod pod-subpath-test-downwardapi-jzvs no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-jzvs May 29 14:09:32.598: INFO: Deleting pod "pod-subpath-test-downwardapi-jzvs" in namespace "subpath-5472" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:09:32.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5472" for this suite. May 29 14:09:38.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:09:38.706: INFO: namespace subpath-5472 deletion completed in 6.088312041s • [SLOW TEST:30.503 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:09:38.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-a135c9e0-3c2b-46fc-a7cd-018fa34bcba0 STEP: Creating secret with name s-test-opt-upd-b861b188-5de8-49cd-b1b6-47d9e02e9cfb STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a135c9e0-3c2b-46fc-a7cd-018fa34bcba0 STEP: Updating secret s-test-opt-upd-b861b188-5de8-49cd-b1b6-47d9e02e9cfb STEP: Creating secret with name s-test-opt-create-5bc03f62-4b7a-4e76-b3f5-8ea4c8916553 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:10:49.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6880" for this suite. 
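The secrets test above depends on the volume's Optional flag: an optional secret volume lets the pod keep running while s-test-opt-del-* is deleted, and the kubelet later projects the created and updated secrets into the mounted files, which is what the "waiting to observe update in volume" step polls for. A minimal sketch of how such a volume is declared, with illustrative names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// optionalSecretVolume declares a secret-backed volume marked Optional:
// the pod may start (and keep running) even if the secret does not exist,
// and the kubelet keeps the mounted files in sync as the secret is
// created, updated, or deleted. The volume name is an example.
func optionalSecretVolume(secretName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: secretName,
				Optional:   &optional,
			},
		},
	}
}

func main() {
	v := optionalSecretVolume("s-test-opt-create-demo") // illustrative name
	fmt.Println(v.Name, v.VolumeSource.Secret.SecretName, *v.VolumeSource.Secret.Optional)
}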
May 29 14:11:13.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:11:13.346: INFO: namespace secrets-6880 deletion completed in 24.088493164s • [SLOW TEST:94.640 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:11:13.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 14:11:13.415: INFO: Creating deployment "test-recreate-deployment" May 29 14:11:13.428: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 May 29 14:11:13.440: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 29 14:11:15.447: INFO: Waiting for deployment "test-recreate-deployment" to complete May 29 14:11:15.450: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726358273, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726358273, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726358273, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726358273, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 29 14:11:17.454: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 29 14:11:17.461: INFO: Updating deployment test-recreate-deployment May 29 14:11:17.461: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 29 14:11:18.133: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-9789,SelfLink:/apis/apps/v1/namespaces/deployment-9789/deployments/test-recreate-deployment,UID:6ef62847-56f5-4caa-a9c4-74dd744b10dd,ResourceVersion:13558280,Generation:2,CreationTimestamp:2020-05-29 14:11:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-29 14:11:17 +0000 UTC 2020-05-29 14:11:17 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-29 14:11:18 +0000 UTC 2020-05-29 14:11:13 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 29 14:11:18.170: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-9789,SelfLink:/apis/apps/v1/namespaces/deployment-9789/replicasets/test-recreate-deployment-5c8c9cc69d,UID:4c427b0a-f517-4380-9cf7-ca0ba2055406,ResourceVersion:13558279,Generation:1,CreationTimestamp:2020-05-29 14:11:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6ef62847-56f5-4caa-a9c4-74dd744b10dd 0xc002d9c6c7 0xc002d9c6c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 29 14:11:18.170: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 29 14:11:18.170: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-9789,SelfLink:/apis/apps/v1/namespaces/deployment-9789/replicasets/test-recreate-deployment-6df85df6b9,UID:a751f5ad-ec3b-4386-88b8-cb490a5d0f55,ResourceVersion:13558269,Generation:2,CreationTimestamp:2020-05-29 14:11:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6ef62847-56f5-4caa-a9c4-74dd744b10dd 0xc002d9c797 0xc002d9c798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 29 14:11:18.398: INFO: Pod "test-recreate-deployment-5c8c9cc69d-dsvlz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-dsvlz,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-9789,SelfLink:/api/v1/namespaces/deployment-9789/pods/test-recreate-deployment-5c8c9cc69d-dsvlz,UID:eeba0870-fc52-4bee-9523-de6cc64defd7,ResourceVersion:13558282,Generation:0,CreationTimestamp:2020-05-29 14:11:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 4c427b0a-f517-4380-9cf7-ca0ba2055406 0xc002d9d067 0xc002d9d068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9btb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9btb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c9btb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d9d0e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d9d100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:17 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-29 14:11:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:11:18.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9789" for this suite. 
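The Deployment dumped above carries Strategy:DeploymentStrategy{Type:Recreate,...}, which is the property under test: the controller scales the old ReplicaSet (test-recreate-deployment-6df85df6b9, running redis) to zero before the new ReplicaSet (test-recreate-deployment-5c8c9cc69d, running nginx) creates any pods, so old and new pods never run together. A sketch of how such a Deployment is constructed, mirroring the names and image from the log; this is an illustrative reconstruction, not the suite's code:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// recreateDeployment builds a Deployment with the Recreate strategy: on a
// template change, all pods of the old ReplicaSet are deleted before any
// pod of the new ReplicaSet is created (contrast with RollingUpdate).
func recreateDeployment() *appsv1.Deployment {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}

func main() { fmt.Println(recreateDeployment().Spec.Strategy.Type) }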
May 29 14:11:24.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:11:24.682: INFO: namespace deployment-9789 deletion completed in 6.280320762s • [SLOW TEST:11.336 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:11:24.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3044 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-3044 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3044 May 29 14:11:24.779: INFO: Found 0 stateful pods, waiting for 1 May 29 14:11:34.783: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 29 14:11:34.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 29 14:11:37.578: INFO: stderr: "I0529 14:11:37.344857 2706 log.go:172] (0xc000130e70) (0xc0005b08c0) Create stream\nI0529 14:11:37.344900 2706 log.go:172] (0xc000130e70) (0xc0005b08c0) Stream added, broadcasting: 1\nI0529 14:11:37.347642 2706 log.go:172] (0xc000130e70) Reply frame received for 1\nI0529 14:11:37.347723 2706 log.go:172] (0xc000130e70) (0xc000814000) Create stream\nI0529 14:11:37.347742 2706 log.go:172] (0xc000130e70) (0xc000814000) Stream added, broadcasting: 3\nI0529 14:11:37.348709 2706 log.go:172] (0xc000130e70) Reply frame received for 3\nI0529 14:11:37.348905 2706 log.go:172] (0xc000130e70) (0xc0008140a0) Create stream\nI0529 14:11:37.348926 2706 log.go:172] (0xc000130e70) (0xc0008140a0) Stream added, broadcasting: 5\nI0529 14:11:37.350117 2706 log.go:172] (0xc000130e70) Reply frame received for 5\nI0529 14:11:37.537928 2706 log.go:172] (0xc000130e70) Data frame received for 5\nI0529 14:11:37.537964 2706 log.go:172] (0xc0008140a0) (5) Data frame handling\nI0529 14:11:37.537986 2706 log.go:172] (0xc0008140a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0529 14:11:37.568648 2706 
log.go:172] (0xc000130e70) Data frame received for 5\nI0529 14:11:37.568669 2706 log.go:172] (0xc0008140a0) (5) Data frame handling\nI0529 14:11:37.568689 2706 log.go:172] (0xc000130e70) Data frame received for 3\nI0529 14:11:37.568697 2706 log.go:172] (0xc000814000) (3) Data frame handling\nI0529 14:11:37.568708 2706 log.go:172] (0xc000814000) (3) Data frame sent\nI0529 14:11:37.569067 2706 log.go:172] (0xc000130e70) Data frame received for 3\nI0529 14:11:37.569090 2706 log.go:172] (0xc000814000) (3) Data frame handling\nI0529 14:11:37.570573 2706 log.go:172] (0xc000130e70) Data frame received for 1\nI0529 14:11:37.570608 2706 log.go:172] (0xc0005b08c0) (1) Data frame handling\nI0529 14:11:37.570633 2706 log.go:172] (0xc0005b08c0) (1) Data frame sent\nI0529 14:11:37.570663 2706 log.go:172] (0xc000130e70) (0xc0005b08c0) Stream removed, broadcasting: 1\nI0529 14:11:37.570698 2706 log.go:172] (0xc000130e70) Go away received\nI0529 14:11:37.571017 2706 log.go:172] (0xc000130e70) (0xc0005b08c0) Stream removed, broadcasting: 1\nI0529 14:11:37.571030 2706 log.go:172] (0xc000130e70) (0xc000814000) Stream removed, broadcasting: 3\nI0529 14:11:37.571038 2706 log.go:172] (0xc000130e70) (0xc0008140a0) Stream removed, broadcasting: 5\n" May 29 14:11:37.579: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 29 14:11:37.579: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 29 14:11:37.582: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 29 14:11:47.588: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 29 14:11:47.588: INFO: Waiting for statefulset status.replicas updated to 0 May 29 14:11:47.624: INFO: POD NODE PHASE GRACE CONDITIONS May 29 14:11:47.624: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC }] May 29 14:11:47.625: INFO: May 29 14:11:47.625: INFO: StatefulSet ss has not reached scale 3, at 1 May 29 14:11:48.630: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.969798696s May 29 14:11:49.908: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.965027386s May 29 14:11:50.925: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.686400165s May 29 14:11:51.930: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.669445495s May 29 14:11:52.934: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.664739369s May 29 14:11:53.940: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.660382169s May 29 14:11:54.945: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.654926402s May 29 14:11:55.950: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.649744571s May 29 14:11:56.955: INFO: Verifying statefulset ss doesn't scale past 3 for another 644.511784ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3044 May 29 14:11:57.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:11:58.213: INFO: stderr: "I0529 14:11:58.126717 2739 log.go:172] (0xc0009ba420) (0xc0008b86e0) Create stream\nI0529 14:11:58.126770 2739 log.go:172] (0xc0009ba420) (0xc0008b86e0) Stream added, broadcasting: 1\nI0529 14:11:58.128992 2739 log.go:172] (0xc0009ba420) Reply frame received for 1\nI0529 14:11:58.129026 2739 log.go:172] (0xc0009ba420) (0xc00090c000) Create stream\nI0529 14:11:58.129039 2739 log.go:172] (0xc0009ba420) (0xc00090c000) Stream added, broadcasting: 3\nI0529 14:11:58.130179 2739 log.go:172] (0xc0009ba420) Reply frame received for 3\nI0529 14:11:58.130246 2739 log.go:172] (0xc0009ba420) (0xc0008b8780) Create stream\nI0529 14:11:58.130273 2739 log.go:172] (0xc0009ba420) (0xc0008b8780) Stream added, broadcasting: 5\nI0529 14:11:58.131053 2739 log.go:172] (0xc0009ba420) Reply frame received for 5\nI0529 14:11:58.206585 2739 log.go:172] (0xc0009ba420) Data frame received for 3\nI0529 14:11:58.206636 2739 log.go:172] (0xc00090c000) (3) Data frame handling\nI0529 14:11:58.206666 2739 log.go:172] (0xc0009ba420) Data frame received for 5\nI0529 14:11:58.206707 2739 log.go:172] (0xc0008b8780) (5) Data frame handling\nI0529 14:11:58.206724 2739 log.go:172] (0xc0008b8780) (5) Data frame sent\nI0529 14:11:58.206738 2739 log.go:172] (0xc0009ba420) Data frame received for 5\nI0529 14:11:58.206749 2739 log.go:172] (0xc0008b8780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0529 14:11:58.206779 2739 log.go:172] (0xc00090c000) (3) Data frame sent\nI0529 14:11:58.206795 2739 log.go:172] (0xc0009ba420) Data frame received for 3\nI0529 14:11:58.206811 2739 log.go:172] (0xc00090c000) (3) Data frame handling\nI0529 14:11:58.208568 2739 log.go:172] (0xc0009ba420) Data frame received for 1\nI0529 14:11:58.208604 2739 log.go:172] (0xc0008b86e0) (1) Data frame handling\nI0529 14:11:58.208625 2739 log.go:172] (0xc0008b86e0) (1) Data frame sent\nI0529 14:11:58.208638 2739 log.go:172] (0xc0009ba420) (0xc0008b86e0) Stream removed, broadcasting: 1\nI0529 14:11:58.208650 2739 log.go:172] (0xc0009ba420) Go away received\nI0529 14:11:58.209019 2739 log.go:172] (0xc0009ba420) (0xc0008b86e0) Stream removed, broadcasting: 1\nI0529 14:11:58.209039 2739 log.go:172] (0xc0009ba420) (0xc00090c000) Stream removed, broadcasting: 3\nI0529 14:11:58.209046 2739 log.go:172] (0xc0009ba420) (0xc0008b8780) Stream removed, broadcasting: 5\n" May 29 14:11:58.213: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 29 14:11:58.213: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 29 14:11:58.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:11:58.435: INFO: stderr: "I0529 14:11:58.359030 2758 log.go:172] (0xc00012a6e0) (0xc000650820) Create stream\nI0529 14:11:58.359092 2758 log.go:172] (0xc00012a6e0) (0xc000650820) Stream added, broadcasting: 1\nI0529 14:11:58.362386 2758 log.go:172] (0xc00012a6e0) Reply frame received for 1\nI0529 14:11:58.362422 2758 log.go:172] (0xc00012a6e0) (0xc000650000) Create stream\nI0529 14:11:58.362433 2758 log.go:172] (0xc00012a6e0) (0xc000650000) Stream added, broadcasting: 3\nI0529 14:11:58.363148 2758 log.go:172] (0xc00012a6e0) Reply frame received for 3\nI0529 14:11:58.363175 2758 log.go:172] 
(0xc00012a6e0) (0xc000674280) Create stream\nI0529 14:11:58.363184 2758 log.go:172] (0xc00012a6e0) (0xc000674280) Stream added, broadcasting: 5\nI0529 14:11:58.364042 2758 log.go:172] (0xc00012a6e0) Reply frame received for 5\nI0529 14:11:58.423059 2758 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0529 14:11:58.423094 2758 log.go:172] (0xc000650000) (3) Data frame handling\nI0529 14:11:58.423112 2758 log.go:172] (0xc000650000) (3) Data frame sent\nI0529 14:11:58.423120 2758 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0529 14:11:58.423127 2758 log.go:172] (0xc000650000) (3) Data frame handling\nI0529 14:11:58.425699 2758 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0529 14:11:58.425725 2758 log.go:172] (0xc000674280) (5) Data frame handling\nI0529 14:11:58.425743 2758 log.go:172] (0xc000674280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0529 14:11:58.426255 2758 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0529 14:11:58.426275 2758 log.go:172] (0xc000674280) (5) Data frame handling\nI0529 14:11:58.427954 2758 log.go:172] (0xc00012a6e0) Data frame received for 1\nI0529 14:11:58.427970 2758 log.go:172] (0xc000650820) (1) Data frame handling\nI0529 14:11:58.427985 2758 log.go:172] (0xc000650820) (1) Data frame sent\nI0529 14:11:58.428166 2758 log.go:172] (0xc00012a6e0) (0xc000650820) Stream removed, broadcasting: 1\nI0529 14:11:58.428202 2758 log.go:172] (0xc00012a6e0) Go away received\nI0529 14:11:58.428449 2758 log.go:172] (0xc00012a6e0) (0xc000650820) Stream removed, broadcasting: 1\nI0529 14:11:58.428504 2758 log.go:172] (0xc00012a6e0) (0xc000650000) Stream removed, broadcasting: 3\nI0529 14:11:58.428515 2758 log.go:172] (0xc00012a6e0) (0xc000674280) Stream removed, broadcasting: 5\n" May 29 14:11:58.435: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 29 14:11:58.435: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 29 14:11:58.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:11:58.618: INFO: stderr: "I0529 14:11:58.555869 2778 log.go:172] (0xc0008ee420) (0xc0002ca820) Create stream\nI0529 14:11:58.555926 2778 log.go:172] (0xc0008ee420) (0xc0002ca820) Stream added, broadcasting: 1\nI0529 14:11:58.558786 2778 log.go:172] (0xc0008ee420) Reply frame received for 1\nI0529 14:11:58.558845 2778 log.go:172] (0xc0008ee420) (0xc000810000) Create stream\nI0529 14:11:58.558868 2778 log.go:172] (0xc0008ee420) (0xc000810000) Stream added, broadcasting: 3\nI0529 14:11:58.559908 2778 log.go:172] (0xc0008ee420) Reply frame received for 3\nI0529 14:11:58.559964 2778 log.go:172] (0xc0008ee420) (0xc0002ca8c0) Create stream\nI0529 14:11:58.559984 2778 log.go:172] (0xc0008ee420) (0xc0002ca8c0) Stream added, broadcasting: 5\nI0529 14:11:58.561045 2778 log.go:172] (0xc0008ee420) Reply frame received for 5\nI0529 14:11:58.610446 2778 log.go:172] (0xc0008ee420) Data frame received for 5\nI0529 14:11:58.610481 2778 log.go:172] (0xc0002ca8c0) (5) Data frame handling\nI0529 14:11:58.610496 2778 log.go:172] (0xc0002ca8c0) (5) Data frame sent\nI0529 14:11:58.610506 2778 log.go:172] (0xc0008ee420) Data frame received for 5\nI0529 14:11:58.610515 2778 log.go:172] (0xc0002ca8c0) (5) Data frame handling\n+ mv -v 
/tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0529 14:11:58.610542 2778 log.go:172] (0xc0008ee420) Data frame received for 3\nI0529 14:11:58.610552 2778 log.go:172] (0xc000810000) (3) Data frame handling\nI0529 14:11:58.610568 2778 log.go:172] (0xc000810000) (3) Data frame sent\nI0529 14:11:58.610590 2778 log.go:172] (0xc0008ee420) Data frame received for 3\nI0529 14:11:58.610602 2778 log.go:172] (0xc000810000) (3) Data frame handling\nI0529 14:11:58.612090 2778 log.go:172] (0xc0008ee420) Data frame received for 1\nI0529 14:11:58.612133 2778 log.go:172] (0xc0002ca820) (1) Data frame handling\nI0529 14:11:58.612167 2778 log.go:172] (0xc0002ca820) (1) Data frame sent\nI0529 14:11:58.612189 2778 log.go:172] (0xc0008ee420) (0xc0002ca820) Stream removed, broadcasting: 1\nI0529 14:11:58.612219 2778 log.go:172] (0xc0008ee420) Go away received\nI0529 14:11:58.612597 2778 log.go:172] (0xc0008ee420) (0xc0002ca820) Stream removed, broadcasting: 1\nI0529 14:11:58.612614 2778 log.go:172] (0xc0008ee420) (0xc000810000) Stream removed, broadcasting: 3\nI0529 14:11:58.612627 2778 log.go:172] (0xc0008ee420) (0xc0002ca8c0) Stream removed, broadcasting: 5\n" May 29 14:11:58.618: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 29 14:11:58.618: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 29 14:11:58.622: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 29 14:11:58.622: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 29 14:11:58.622: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 29 14:11:58.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 29 14:11:58.817: INFO: stderr: "I0529 14:11:58.751870 2799 log.go:172] (0xc000a16580) (0xc000590be0) Create stream\nI0529 14:11:58.751923 2799 log.go:172] (0xc000a16580) (0xc000590be0) Stream added, broadcasting: 1\nI0529 14:11:58.755291 2799 log.go:172] (0xc000a16580) Reply frame received for 1\nI0529 14:11:58.755339 2799 log.go:172] (0xc000a16580) (0xc000590320) Create stream\nI0529 14:11:58.755357 2799 log.go:172] (0xc000a16580) (0xc000590320) Stream added, broadcasting: 3\nI0529 14:11:58.756256 2799 log.go:172] (0xc000a16580) Reply frame received for 3\nI0529 14:11:58.756283 2799 log.go:172] (0xc000a16580) (0xc000206000) Create stream\nI0529 14:11:58.756300 2799 log.go:172] (0xc000a16580) (0xc000206000) Stream added, broadcasting: 5\nI0529 14:11:58.757542 2799 log.go:172] (0xc000a16580) Reply frame received for 5\nI0529 14:11:58.810621 2799 log.go:172] (0xc000a16580) Data frame received for 3\nI0529 14:11:58.810665 2799 log.go:172] (0xc000590320) (3) Data frame handling\nI0529 14:11:58.810680 2799 log.go:172] (0xc000590320) (3) Data frame sent\nI0529 14:11:58.810691 2799 log.go:172] (0xc000a16580) Data frame received for 3\nI0529 14:11:58.810701 2799 log.go:172] (0xc000590320) (3) Data frame handling\nI0529 14:11:58.810742 2799 log.go:172] (0xc000a16580) Data frame received for 5\nI0529 14:11:58.810782 2799 log.go:172] (0xc000206000) (5) Data frame handling\nI0529 14:11:58.810819 2799 log.go:172] (0xc000206000) (5) Data frame sent\nI0529 14:11:58.810843 2799 
log.go:172] (0xc000a16580) Data frame received for 5\nI0529 14:11:58.810859 2799 log.go:172] (0xc000206000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0529 14:11:58.812111 2799 log.go:172] (0xc000a16580) Data frame received for 1\nI0529 14:11:58.812148 2799 log.go:172] (0xc000590be0) (1) Data frame handling\nI0529 14:11:58.812169 2799 log.go:172] (0xc000590be0) (1) Data frame sent\nI0529 14:11:58.812192 2799 log.go:172] (0xc000a16580) (0xc000590be0) Stream removed, broadcasting: 1\nI0529 14:11:58.812216 2799 log.go:172] (0xc000a16580) Go away received\nI0529 14:11:58.813060 2799 log.go:172] (0xc000a16580) (0xc000590be0) Stream removed, broadcasting: 1\nI0529 14:11:58.813102 2799 log.go:172] (0xc000a16580) (0xc000590320) Stream removed, broadcasting: 3\nI0529 14:11:58.813330 2799 log.go:172] (0xc000a16580) (0xc000206000) Stream removed, broadcasting: 5\n" May 29 14:11:58.817: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 29 14:11:58.817: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 29 14:11:58.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 29 14:11:59.060: INFO: stderr: "I0529 14:11:58.940025 2820 log.go:172] (0xc000116dc0) (0xc00071a6e0) Create stream\nI0529 14:11:58.940073 2820 log.go:172] (0xc000116dc0) (0xc00071a6e0) Stream added, broadcasting: 1\nI0529 14:11:58.943404 2820 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0529 14:11:58.943484 2820 log.go:172] (0xc000116dc0) (0xc0007a8000) Create stream\nI0529 14:11:58.943514 2820 log.go:172] (0xc000116dc0) (0xc0007a8000) Stream added, broadcasting: 3\nI0529 14:11:58.944621 2820 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0529 14:11:58.944655 2820 log.go:172] (0xc000116dc0) (0xc000654140) Create stream\nI0529 14:11:58.944677 2820 log.go:172] (0xc000116dc0) (0xc000654140) Stream added, broadcasting: 5\nI0529 14:11:58.945833 2820 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0529 14:11:59.015882 2820 log.go:172] (0xc000116dc0) Data frame received for 5\nI0529 14:11:59.015909 2820 log.go:172] (0xc000654140) (5) Data frame handling\nI0529 14:11:59.015928 2820 log.go:172] (0xc000654140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0529 14:11:59.051944 2820 log.go:172] (0xc000116dc0) Data frame received for 3\nI0529 14:11:59.051971 2820 log.go:172] (0xc0007a8000) (3) Data frame handling\nI0529 14:11:59.051989 2820 log.go:172] (0xc0007a8000) (3) Data frame sent\nI0529 14:11:59.051996 2820 log.go:172] (0xc000116dc0) Data frame received for 3\nI0529 14:11:59.052002 2820 log.go:172] (0xc0007a8000) (3) Data frame handling\nI0529 14:11:59.052146 2820 log.go:172] (0xc000116dc0) Data frame received for 5\nI0529 14:11:59.052165 2820 log.go:172] (0xc000654140) (5) Data frame handling\nI0529 14:11:59.054301 2820 log.go:172] (0xc000116dc0) Data frame received for 1\nI0529 14:11:59.054329 2820 log.go:172] (0xc00071a6e0) (1) Data frame handling\nI0529 14:11:59.054341 2820 log.go:172] (0xc00071a6e0) (1) Data frame sent\nI0529 14:11:59.054359 2820 log.go:172] (0xc000116dc0) (0xc00071a6e0) Stream removed, broadcasting: 1\nI0529 14:11:59.054374 2820 log.go:172] (0xc000116dc0) Go away received\nI0529 14:11:59.054658 2820 log.go:172] (0xc000116dc0) (0xc00071a6e0) Stream removed, broadcasting: 1\nI0529 14:11:59.054673 
2820 log.go:172] (0xc000116dc0) (0xc0007a8000) Stream removed, broadcasting: 3\nI0529 14:11:59.054679 2820 log.go:172] (0xc000116dc0) (0xc000654140) Stream removed, broadcasting: 5\n" May 29 14:11:59.060: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 29 14:11:59.060: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 29 14:11:59.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 29 14:11:59.337: INFO: stderr: "I0529 14:11:59.232004 2840 log.go:172] (0xc000972370) (0xc000628aa0) Create stream\nI0529 14:11:59.232073 2840 log.go:172] (0xc000972370) (0xc000628aa0) Stream added, broadcasting: 1\nI0529 14:11:59.234426 2840 log.go:172] (0xc000972370) Reply frame received for 1\nI0529 14:11:59.234471 2840 log.go:172] (0xc000972370) (0xc000285ae0) Create stream\nI0529 14:11:59.234486 2840 log.go:172] (0xc000972370) (0xc000285ae0) Stream added, broadcasting: 3\nI0529 14:11:59.235458 2840 log.go:172] (0xc000972370) Reply frame received for 3\nI0529 14:11:59.235503 2840 log.go:172] (0xc000972370) (0xc000888000) Create stream\nI0529 14:11:59.235523 2840 log.go:172] (0xc000972370) (0xc000888000) Stream added, broadcasting: 5\nI0529 14:11:59.236569 2840 log.go:172] (0xc000972370) Reply frame received for 5\nI0529 14:11:59.297823 2840 log.go:172] (0xc000972370) Data frame received for 5\nI0529 14:11:59.297851 2840 log.go:172] (0xc000888000) (5) Data frame handling\nI0529 14:11:59.297876 2840 log.go:172] (0xc000888000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0529 14:11:59.330354 2840 log.go:172] (0xc000972370) Data frame received for 5\nI0529 14:11:59.330407 2840 log.go:172] (0xc000888000) (5) Data frame handling\nI0529 14:11:59.330435 2840 log.go:172] (0xc000972370) Data frame received for 3\nI0529 14:11:59.330444 2840 log.go:172] (0xc000285ae0) (3) Data frame handling\nI0529 14:11:59.330455 2840 log.go:172] (0xc000285ae0) (3) Data frame sent\nI0529 14:11:59.330464 2840 log.go:172] (0xc000972370) Data frame received for 3\nI0529 14:11:59.330500 2840 log.go:172] (0xc000285ae0) (3) Data frame handling\nI0529 14:11:59.331612 2840 log.go:172] (0xc000972370) Data frame received for 1\nI0529 14:11:59.331636 2840 log.go:172] (0xc000628aa0) (1) Data frame handling\nI0529 14:11:59.331655 2840 log.go:172] (0xc000628aa0) (1) Data frame sent\nI0529 14:11:59.331668 2840 log.go:172] (0xc000972370) (0xc000628aa0) Stream removed, broadcasting: 1\nI0529 14:11:59.331732 2840 log.go:172] (0xc000972370) Go away received\nI0529 14:11:59.331935 2840 log.go:172] (0xc000972370) (0xc000628aa0) Stream removed, broadcasting: 1\nI0529 14:11:59.331955 2840 log.go:172] (0xc000972370) (0xc000285ae0) Stream removed, broadcasting: 3\nI0529 14:11:59.331971 2840 log.go:172] (0xc000972370) (0xc000888000) Stream removed, broadcasting: 5\n" May 29 14:11:59.337: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 29 14:11:59.337: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 29 14:11:59.337: INFO: Waiting for statefulset status.replicas updated to 0 May 29 14:11:59.367: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 29 14:12:09.390: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - 
Ready=false May 29 14:12:09.390: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 29 14:12:09.390: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 29 14:12:09.406: INFO: POD NODE PHASE GRACE CONDITIONS May 29 14:12:09.406: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC }] May 29 14:12:09.406: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC }] May 29 14:12:09.406: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC }] May 29 14:12:09.406: INFO: May 29 14:12:09.406: INFO: StatefulSet ss has not reached scale 0, at 3 May 29 14:12:10.411: INFO: POD NODE PHASE GRACE CONDITIONS May 29 14:12:10.411: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC }] May 29 14:12:10.411: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC }] May 29 14:12:10.411: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC }] May 29 14:12:10.411: INFO: May 29 14:12:10.411: INFO: StatefulSet ss has not reached scale 0, at 3 May 29 14:12:11.416: INFO: POD 
NODE PHASE GRACE CONDITIONS May 29 14:12:11.416: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC }] May 29 14:12:11.416: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC }] May 29 14:12:11.416: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC }] May 29 14:12:11.416: INFO: May 29 14:12:11.416: INFO: StatefulSet ss has not reached scale 0, at 3 May 29 14:12:12.422: INFO: POD NODE PHASE GRACE CONDITIONS May 29 14:12:12.422: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC }] May 29 14:12:12.422: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC }] May 29 14:12:12.422: INFO: May 29 14:12:12.422: INFO: StatefulSet ss has not reached scale 0, at 2 May 29 14:12:13.427: INFO: POD NODE PHASE GRACE CONDITIONS May 29 14:12:13.427: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC }] May 29 14:12:13.427: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC }] May 29 14:12:13.427: INFO: May 29 14:12:13.427: INFO: StatefulSet ss has not reached scale 0, at 2 May 29 14:12:14.432: INFO: POD NODE PHASE GRACE CONDITIONS May 29 14:12:14.432: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC }] May 29 14:12:14.432: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC }] May 29 14:12:14.432: INFO: May 29 14:12:14.432: INFO: StatefulSet ss has not reached scale 0, at 2 May 29 14:12:15.436: INFO: POD NODE PHASE GRACE CONDITIONS May 29 14:12:15.436: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC }] May 29 14:12:15.436: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC }] May 29 14:12:15.436: INFO: May 29 14:12:15.436: INFO: StatefulSet ss has not reached scale 0, at 2 May 29 14:12:16.441: INFO: POD NODE PHASE GRACE CONDITIONS May 29 14:12:16.442: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC }] May 29 14:12:16.442: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 
00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC }] May 29 14:12:16.442: INFO: May 29 14:12:16.442: INFO: StatefulSet ss has not reached scale 0, at 2 May 29 14:12:17.446: INFO: POD NODE PHASE GRACE CONDITIONS May 29 14:12:17.446: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC }] May 29 14:12:17.446: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC }] May 29 14:12:17.446: INFO: May 29 14:12:17.446: INFO: StatefulSet ss has not reached scale 0, at 2 May 29 14:12:18.452: INFO: POD NODE PHASE GRACE CONDITIONS May 29 14:12:18.452: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:24 +0000 UTC }] May 29 14:12:18.452: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:12:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-29 14:11:47 +0000 UTC }] May 29 14:12:18.452: INFO: May 29 14:12:18.452: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-3044 May 29 14:12:19.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:12:19.584: INFO: rc: 1 May 29 14:12:19.585: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0029e77d0 exit status 1 true [0xc002dd8198 0xc002dd81b0 0xc002dd81c8] [0xc002dd8198 0xc002dd81b0 0xc002dd81c8] [0xc002dd81a8 0xc002dd81c0] [0xba70e0 0xba70e0] 0xc0029c4180 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error:
exit status 1 May 29 14:12:29.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:12:29.676: INFO: rc: 1 May 29 14:12:29.676: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0029e7890 exit status 1 true [0xc002dd81d0 0xc002dd81e8 0xc002dd8200] [0xc002dd81d0 0xc002dd81e8 0xc002dd8200] [0xc002dd81e0 0xc002dd81f8] [0xba70e0 0xba70e0] 0xc0029c4660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:12:39.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:12:39.779: INFO: rc: 1 May 29 14:12:39.779: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f46a20 exit status 1 true [0xc000cb4060 0xc000cb4078 0xc000cb4090] [0xc000cb4060 0xc000cb4078 0xc000cb4090] [0xc000cb4070 0xc000cb4088] [0xba70e0 0xba70e0] 0xc002e5e9c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:12:49.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:12:49.875: INFO: rc: 1 May 29 14:12:49.875: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016b4c90 exit status 1 true [0xc000999130 0xc0009991f8 0xc0009992e8] [0xc000999130 0xc0009991f8 0xc0009992e8] [0xc0009991d0 0xc000999298] [0xba70e0 0xba70e0] 0xc002d33c20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:12:59.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:12:59.973: INFO: rc: 1 May 29 14:12:59.973: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f46b10 exit status 1 true [0xc000cb4098 0xc000cb40b0 0xc000cb40c8] [0xc000cb4098 0xc000cb40b0 0xc000cb40c8] [0xc000cb40a8 0xc000cb40c0] [0xba70e0 0xba70e0] 0xc002e5ed20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:13:09.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:13:10.075: INFO: rc: 1 May 29 
14:13:10.075: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0029e7980 exit status 1 true [0xc002dd8208 0xc002dd8220 0xc002dd8238] [0xc002dd8208 0xc002dd8220 0xc002dd8238] [0xc002dd8218 0xc002dd8230] [0xba70e0 0xba70e0] 0xc0029c5680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:13:20.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:13:20.170: INFO: rc: 1 May 29 14:13:20.170: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0006b5ef0 exit status 1 true [0xc00075f4f0 0xc00075f5f8 0xc00075f6e0] [0xc00075f4f0 0xc00075f5f8 0xc00075f6e0] [0xc00075f570 0xc00075f640] [0xba70e0 0xba70e0] 0xc00283cf00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:13:30.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:13:30.277: INFO: rc: 1 May 29 14:13:30.277: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0006b5fb0 exit status 1 true [0xc00075f720 0xc00075f770 0xc00075f820] [0xc00075f720 0xc00075f770 0xc00075f820] [0xc00075f760 0xc00075f7f8] [0xba70e0 0xba70e0] 0xc00283d260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:13:40.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:13:40.363: INFO: rc: 1 May 29 14:13:40.363: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f46bd0 exit status 1 true [0xc000cb40d0 0xc000cb40e8 0xc000cb4100] [0xc000cb40d0 0xc000cb40e8 0xc000cb4100] [0xc000cb40e0 0xc000cb40f8] [0xba70e0 0xba70e0] 0xc002e5f1a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:13:50.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:13:50.470: INFO: rc: 1 May 29 14:13:50.470: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error 
from server (NotFound): pods "ss-0" not found [] 0xc0006b5980 exit status 1 true [0xc00056afd0 0xc00056b420 0xc00056b600] [0xc00056afd0 0xc00056b420 0xc00056b600] [0xc00056b2d0 0xc00056b5c8] [0xba70e0 0xba70e0] 0xc002632180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:14:00.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:14:00.573: INFO: rc: 1 May 29 14:14:00.573: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002802090 exit status 1 true [0xc000186000 0xc000998158 0xc000998288] [0xc000186000 0xc000998158 0xc000998288] [0xc000998078 0xc000998250] [0xba70e0 0xba70e0] 0xc001a62120 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:14:10.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:14:10.763: INFO: rc: 1 May 29 14:14:10.763: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002802180 exit status 1 true [0xc000998300 0xc000998488 0xc000998510] [0xc000998300 0xc000998488 0xc000998510] [0xc000998458 0xc0009984b0] [0xba70e0 0xba70e0] 0xc001a62b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:14:20.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:14:20.864: INFO: rc: 1 May 29 14:14:20.864: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002802240 exit status 1 true [0xc000998528 0xc0009985b0 0xc0009986c0] [0xc000998528 0xc0009985b0 0xc0009986c0] [0xc0009985a0 0xc000998628] [0xba70e0 0xba70e0] 0xc001a631a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:14:30.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:14:30.967: INFO: rc: 1 May 29 14:14:30.968: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002eb8090 exit status 1 true [0xc002dd8000 0xc002dd8018 0xc002dd8030] [0xc002dd8000 0xc002dd8018 0xc002dd8030] [0xc002dd8010 0xc002dd8028] [0xba70e0 0xba70e0] 0xc00244e9c0 }: Command stdout: stderr: Error 
from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:14:40.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:14:41.063: INFO: rc: 1 May 29 14:14:41.063: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002eb8180 exit status 1 true [0xc002dd8038 0xc002dd8050 0xc002dd8068] [0xc002dd8038 0xc002dd8050 0xc002dd8068] [0xc002dd8048 0xc002dd8060] [0xba70e0 0xba70e0] 0xc00244f260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:14:51.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:14:51.158: INFO: rc: 1 May 29 14:14:51.158: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc003b660f0 exit status 1 true [0xc00075e008 0xc00075e388 0xc00075e538] [0xc00075e008 0xc00075e388 0xc00075e538] [0xc00075e278 0xc00075e4c0] [0xba70e0 0xba70e0] 0xc002304720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:15:01.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:15:01.260: INFO: rc: 1 May 29 14:15:01.260: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002eb8240 exit status 1 true [0xc002dd8070 0xc002dd8088 0xc002dd80a0] [0xc002dd8070 0xc002dd8088 0xc002dd80a0] [0xc002dd8080 0xc002dd8098] [0xba70e0 0xba70e0] 0xc00244f9e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:15:11.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:15:11.359: INFO: rc: 1 May 29 14:15:11.359: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0006b5a70 exit status 1 true [0xc00056b710 0xc00056bba0 0xc00056bcd8] [0xc00056b710 0xc00056bba0 0xc00056bcd8] [0xc00056b8e0 0xc00056bc90] [0xba70e0 0xba70e0] 0xc0028f4060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:15:21.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ 
|| true' May 29 14:15:21.456: INFO: rc: 1 May 29 14:15:21.456: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0006b5b30 exit status 1 true [0xc00056bce8 0xc00056bd48 0xc00056bdd8] [0xc00056bce8 0xc00056bd48 0xc00056bdd8] [0xc00056bd10 0xc00056bd80] [0xba70e0 0xba70e0] 0xc0028f4360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:15:31.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:15:31.548: INFO: rc: 1 May 29 14:15:31.549: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002802360 exit status 1 true [0xc000998728 0xc000998780 0xc0009987f8] [0xc000998728 0xc000998780 0xc0009987f8] [0xc000998768 0xc0009987e0] [0xba70e0 0xba70e0] 0xc001a636e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:15:41.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:15:41.654: INFO: rc: 1 May 29 14:15:41.654: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc003b66210 exit status 1 true [0xc00075e638 0xc00075ea30 0xc00075eb30] [0xc00075e638 0xc00075ea30 0xc00075eb30] [0xc00075ea00 0xc00075eaa0] [0xba70e0 0xba70e0] 0xc002304f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:15:51.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:15:51.758: INFO: rc: 1 May 29 14:15:51.758: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0028020c0 exit status 1 true [0xc000998000 0xc0009981c8 0xc000998300] [0xc000998000 0xc0009981c8 0xc000998300] [0xc000998158 0xc000998288] [0xba70e0 0xba70e0] 0xc0026320c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:16:01.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:16:01.851: INFO: rc: 1 May 29 14:16:01.851: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0006b59b0 exit status 1 true [0xc002dd8000 0xc002dd8018 0xc002dd8030] [0xc002dd8000 0xc002dd8018 0xc002dd8030] [0xc002dd8010 0xc002dd8028] [0xba70e0 0xba70e0] 0xc001a622a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:16:11.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:16:11.944: INFO: rc: 1 May 29 14:16:11.944: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0006b5ad0 exit status 1 true [0xc002dd8038 0xc002dd8050 0xc002dd8068] [0xc002dd8038 0xc002dd8050 0xc002dd8068] [0xc002dd8048 0xc002dd8060] [0xba70e0 0xba70e0] 0xc001a62c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:16:21.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:16:22.053: INFO: rc: 1 May 29 14:16:22.053: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc003b660c0 exit status 1 true [0xc00056a358 0xc00056b2d0 0xc00056b5c8] [0xc00056a358 0xc00056b2d0 0xc00056b5c8] [0xc00056b008 0xc00056b528] [0xba70e0 0xba70e0] 0xc00244e9c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:16:32.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:16:32.142: INFO: rc: 1 May 29 14:16:32.142: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc003b661b0 exit status 1 true [0xc00056b600 0xc00056b8e0 0xc00056bc90] [0xc00056b600 0xc00056b8e0 0xc00056bc90] [0xc00056b7a0 0xc00056bc28] [0xba70e0 0xba70e0] 0xc00244f260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:16:42.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:16:42.249: INFO: rc: 1 May 29 14:16:42.249: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0028021e0 exit status 1 true [0xc0009983c0 0xc000998498 0xc000998528] [0xc0009983c0 0xc000998498 0xc000998528] [0xc000998488 0xc000998510] [0xba70e0 
0xba70e0] 0xc002633260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:16:52.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:16:52.353: INFO: rc: 1 May 29 14:16:52.353: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0028022d0 exit status 1 true [0xc000998550 0xc0009985c0 0xc000998728] [0xc000998550 0xc0009985c0 0xc000998728] [0xc0009985b0 0xc0009986c0] [0xba70e0 0xba70e0] 0xc0028f4300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:17:02.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:17:02.453: INFO: rc: 1 May 29 14:17:02.453: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc003b66300 exit status 1 true [0xc00056bcd8 0xc00056bd10 0xc00056bd80] [0xc00056bcd8 0xc00056bd10 0xc00056bd80] [0xc00056bcf8 0xc00056bd70] [0xba70e0 0xba70e0] 0xc00244f9e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:17:12.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:17:12.552: INFO: rc: 1 May 29 14:17:12.552: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002eb80f0 exit status 1 true [0xc00075e008 0xc00075e388 0xc00075e538] [0xc00075e008 0xc00075e388 0xc00075e538] [0xc00075e278 0xc00075e4c0] [0xba70e0 0xba70e0] 0xc002304720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 29 14:17:22.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3044 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 29 14:17:22.665: INFO: rc: 1 May 29 14:17:22.665: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: May 29 14:17:22.665: INFO: Scaling statefulset ss to 0 May 29 14:17:22.674: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 29 14:17:22.676: INFO: Deleting all statefulset in ns statefulset-3044 May 29 14:17:22.678: INFO: Scaling statefulset ss to 0 May 29 14:17:22.687: INFO: Waiting for statefulset status.replicas updated to 0 May 29 14:17:22.689: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:17:22.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3044" for this suite. May 29 14:17:28.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:17:28.798: INFO: namespace statefulset-3044 deletion completed in 6.092883653s • [SLOW TEST:364.115 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:17:28.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-4891/secret-test-a14bb4fc-ea4e-4de4-8079-851d5eb21783 STEP: Creating a pod to test consume secrets May 29 14:17:28.895: INFO: Waiting up to 5m0s for pod "pod-configmaps-7fe4354a-3dca-4e2e-a195-728e767b2933" in namespace "secrets-4891" to be "success or failure" May 29 14:17:28.898: INFO: Pod "pod-configmaps-7fe4354a-3dca-4e2e-a195-728e767b2933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.754878ms May 29 14:17:30.902: INFO: Pod "pod-configmaps-7fe4354a-3dca-4e2e-a195-728e767b2933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00700262s May 29 14:17:32.941: INFO: Pod "pod-configmaps-7fe4354a-3dca-4e2e-a195-728e767b2933": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045854075s STEP: Saw pod success May 29 14:17:32.941: INFO: Pod "pod-configmaps-7fe4354a-3dca-4e2e-a195-728e767b2933" satisfied condition "success or failure" May 29 14:17:32.944: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7fe4354a-3dca-4e2e-a195-728e767b2933 container env-test: STEP: delete the pod May 29 14:17:32.970: INFO: Waiting for pod pod-configmaps-7fe4354a-3dca-4e2e-a195-728e767b2933 to disappear May 29 14:17:32.987: INFO: Pod pod-configmaps-7fe4354a-3dca-4e2e-a195-728e767b2933 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:17:32.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4891" for this suite. 
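Note: the Secrets spec above wires a Secret key into a container's environment and then checks the container's output. A minimal hand-run sketch of the same behavior (the secret name, key, and busybox image are illustrative, not taken from this run):

kubectl create secret generic test-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep SECRET_DATA"]   # print the injected variable
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: data-1
EOF
kubectl logs secret-env-pod   # should show SECRET_DATA=value-1 once the pod has completed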
May 29 14:17:39.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:17:39.089: INFO: namespace secrets-4891 deletion completed in 6.098064002s • [SLOW TEST:10.291 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:17:39.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 29 14:17:39.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-888' May 29 14:17:39.249: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 29 14:17:39.249: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 May 29 14:17:39.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-888' May 29 14:17:39.370: INFO: stderr: "" May 29 14:17:39.370: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:17:39.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-888" for this suite. 
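Note: the deprecation warning captured above comes from the client: with a kubectl of this vintage, a bare `kubectl run` expands to the deployment/apps.v1 generator and creates a Deployment. A sketch of the equivalent commands (image as in the log; newer clients behave differently, where `kubectl run` creates a single Pod):

# Deprecated generator path (what the spec runs; creates a Deployment on this client):
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
# Non-deprecated equivalent:
# kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl get deployment e2e-test-nginx-deployment
kubectl delete deployment e2e-test-nginx-deployment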
May 29 14:17:45.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:17:45.464: INFO: namespace kubectl-888 deletion completed in 6.090658161s • [SLOW TEST:6.374 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:17:45.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container May 29 14:17:50.097: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9263 pod-service-account-ce0adfd6-96e5-4772-b0c4-8aec0b5bc8ee -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 29 14:17:50.338: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9263 pod-service-account-ce0adfd6-96e5-4772-b0c4-8aec0b5bc8ee -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 29 14:17:50.561: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9263 pod-service-account-ce0adfd6-96e5-4772-b0c4-8aec0b5bc8ee -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:17:50.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9263" for this suite. 
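Note: the ServiceAccounts spec above relies on the admission controller mounting the account's token into every pod at a fixed path. The same files can be inspected by hand; the pod name below is a placeholder, and the token-Secret reference on the ServiceAccount reflects pre-1.24 behavior:

kubectl get serviceaccount default -o yaml   # shows the auto-created token secret reference
kubectl exec <pod> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
kubectl exec <pod> -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl exec <pod> -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace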
May 29 14:17:56.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:17:56.923: INFO: namespace svcaccounts-9263 deletion completed in 6.097311665s • [SLOW TEST:11.459 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:17:56.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 14:17:56.994: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:18:01.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7431" for this suite. 
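Note: the Pods spec above streams container logs over a websocket rather than through `kubectl logs`, but both talk to the same log subresource on the API server. A sketch (namespace and pod name are placeholders):

kubectl get --raw "/api/v1/namespaces/<ns>/pods/<pod>/log"   # the subresource the websocket client negotiates against
kubectl logs <pod> -n <ns>                                   # same data via the ordinary client path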
May 29 14:18:43.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:18:43.175: INFO: namespace pods-7431 deletion completed in 42.112218905s • [SLOW TEST:46.251 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:18:43.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 29 14:18:43.244: INFO: Waiting up to 5m0s for pod "pod-3739a3e7-011a-433c-ab9c-cbd246ccd518" in namespace "emptydir-2772" to be "success or failure" May 29 14:18:43.247: INFO: Pod "pod-3739a3e7-011a-433c-ab9c-cbd246ccd518": Phase="Pending", Reason="", readiness=false. Elapsed: 3.147129ms May 29 14:18:45.289: INFO: Pod "pod-3739a3e7-011a-433c-ab9c-cbd246ccd518": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045518081s May 29 14:18:47.294: INFO: Pod "pod-3739a3e7-011a-433c-ab9c-cbd246ccd518": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049956689s STEP: Saw pod success May 29 14:18:47.294: INFO: Pod "pod-3739a3e7-011a-433c-ab9c-cbd246ccd518" satisfied condition "success or failure" May 29 14:18:47.297: INFO: Trying to get logs from node iruya-worker pod pod-3739a3e7-011a-433c-ab9c-cbd246ccd518 container test-container: STEP: delete the pod May 29 14:18:47.470: INFO: Waiting for pod pod-3739a3e7-011a-433c-ab9c-cbd246ccd518 to disappear May 29 14:18:47.595: INFO: Pod pod-3739a3e7-011a-433c-ab9c-cbd246ccd518 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:18:47.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2772" for this suite. 
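Note: the EmptyDir spec above asserts that a default-medium emptyDir mount is world-writable (0777) when accessed as root. A hand-run sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # print the mount's permission bits
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                                         # default (node-disk) medium
EOF
kubectl logs emptydir-mode   # expected output: 777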
May 29 14:18:53.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:18:53.696: INFO: namespace emptydir-2772 deletion completed in 6.097548429s • [SLOW TEST:10.521 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:18:53.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:18:57.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1806" for this suite. May 29 14:19:43.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:19:43.947: INFO: namespace kubelet-test-1806 deletion completed in 46.106044117s • [SLOW TEST:50.251 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:19:43.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 29 14:19:44.001: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 29 14:19:44.037: INFO: Waiting for terminating namespaces to be deleted... 
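Note, referring back to the Kubelet spec that completed above: it runs a one-shot busybox command and asserts that its stdout is captured in the container log. An equivalent by hand (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-scheduling
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo 'Hello from busybox'"]
EOF
kubectl logs busybox-scheduling   # expected: Hello from busybox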
May 29 14:19:44.040: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 29 14:19:44.046: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 29 14:19:44.046: INFO: Container kube-proxy ready: true, restart count 0 May 29 14:19:44.046: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 29 14:19:44.046: INFO: Container kindnet-cni ready: true, restart count 2 May 29 14:19:44.046: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 29 14:19:44.053: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 29 14:19:44.053: INFO: Container kube-proxy ready: true, restart count 0 May 29 14:19:44.053: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 29 14:19:44.053: INFO: Container kindnet-cni ready: true, restart count 2 May 29 14:19:44.053: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 29 14:19:44.053: INFO: Container coredns ready: true, restart count 0 May 29 14:19:44.053: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 29 14:19:44.053: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-356f66bc-a78a-4eec-9c5e-7d5960739ca4 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-356f66bc-a78a-4eec-9c5e-7d5960739ca4 off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-356f66bc-a78a-4eec-9c5e-7d5960739ca4 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:19:52.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4117" for this suite. 
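Note: the SchedulerPredicates spec above applies a random label to a node and relaunches the pod with a matching nodeSelector. A minimal manual version (node name, label key, and pause image are illustrative):

kubectl label node <node> example.com/e2e=42
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    example.com/e2e: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod with-labels -o wide           # should be scheduled onto the labelled node
kubectl label node <node> example.com/e2e-    # clean up: strip the label again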
May 29 14:20:04.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:20:04.326: INFO: namespace sched-pred-4117 deletion completed in 12.132693225s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:20.378 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:20:04.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 14:20:04.371: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:20:08.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6305" for this suite. 
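Note: the Pods spec above issues a remote command over a websocket; `kubectl exec` drives the same exec subresource, upgrading the HTTP connection (SPDY or WebSocket) to carry the stdio streams. A sketch with placeholder names:

kubectl exec <pod> -n <ns> -- /bin/sh -c 'echo remote-exec-ok'   # expected stdout: remote-exec-ok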
May 29 14:20:46.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:20:46.654: INFO: namespace pods-6305 deletion completed in 38.113415412s • [SLOW TEST:42.328 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:20:46.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs May 29 14:20:46.731: INFO: Waiting up to 5m0s for pod "pod-6794284d-d492-421e-8e29-77012b9739b3" in namespace "emptydir-6997" to be "success or failure" May 29 14:20:46.734: INFO: Pod "pod-6794284d-d492-421e-8e29-77012b9739b3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.078239ms May 29 14:20:48.739: INFO: Pod "pod-6794284d-d492-421e-8e29-77012b9739b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007938608s May 29 14:20:50.744: INFO: Pod "pod-6794284d-d492-421e-8e29-77012b9739b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013577799s STEP: Saw pod success May 29 14:20:50.744: INFO: Pod "pod-6794284d-d492-421e-8e29-77012b9739b3" satisfied condition "success or failure" May 29 14:20:50.749: INFO: Trying to get logs from node iruya-worker pod pod-6794284d-d492-421e-8e29-77012b9739b3 container test-container: STEP: delete the pod May 29 14:20:50.801: INFO: Waiting for pod pod-6794284d-d492-421e-8e29-77012b9739b3 to disappear May 29 14:20:50.812: INFO: Pod pod-6794284d-d492-421e-8e29-77012b9739b3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:20:50.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6997" for this suite. 
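Note: the EmptyDir spec above requests `medium: Memory`, which the kubelet backs with tmpfs. A hand-run sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /test-volume"]   # show the backing filesystem
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-tmpfs   # expect a tmpfs entry for /test-volume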
May 29 14:20:56.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:20:56.905: INFO: namespace emptydir-6997 deletion completed in 6.089253155s • [SLOW TEST:10.250 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:20:56.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-aa441f6c-0234-4b3e-956b-cf8191ef82d3 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:20:56.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4845" for this suite. May 29 14:21:02.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:21:03.072: INFO: namespace secrets-4845 deletion completed in 6.126283635s • [SLOW TEST:6.166 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:21:03.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 14:21:03.142: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:21:04.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3268" for this suite. 
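Note: the CustomResourceDefinition spec above only creates and deletes a definition object. Against an apiserver of this vintage (v1.15), the apiextensions.k8s.io/v1beta1 API is the applicable one; a minimal illustrative definition:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
EOF
kubectl get crd foos.example.com
kubectl delete crd foos.example.com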
May 29 14:21:10.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:21:10.350: INFO: namespace custom-resource-definition-3268 deletion completed in 6.09950756s • [SLOW TEST:7.278 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:21:10.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 29 14:21:10.450: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0365252e-682b-4b2c-8cb5-fe5a3ff4f2e1" in namespace "downward-api-108" to be "success or failure" May 29 14:21:10.459: INFO: Pod "downwardapi-volume-0365252e-682b-4b2c-8cb5-fe5a3ff4f2e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.969327ms May 29 14:21:12.477: INFO: Pod "downwardapi-volume-0365252e-682b-4b2c-8cb5-fe5a3ff4f2e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026531491s May 29 14:21:14.481: INFO: Pod "downwardapi-volume-0365252e-682b-4b2c-8cb5-fe5a3ff4f2e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030811289s STEP: Saw pod success May 29 14:21:14.481: INFO: Pod "downwardapi-volume-0365252e-682b-4b2c-8cb5-fe5a3ff4f2e1" satisfied condition "success or failure" May 29 14:21:14.483: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0365252e-682b-4b2c-8cb5-fe5a3ff4f2e1 container client-container: STEP: delete the pod May 29 14:21:14.606: INFO: Waiting for pod downwardapi-volume-0365252e-682b-4b2c-8cb5-fe5a3ff4f2e1 to disappear May 29 14:21:14.705: INFO: Pod downwardapi-volume-0365252e-682b-4b2c-8cb5-fe5a3ff4f2e1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:21:14.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-108" for this suite. 
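Note: the Downward API spec above leaves the container's memory limit unset and checks that `limits.memory`, exposed through a downwardAPI volume, falls back to the node's allocatable memory. A manual sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mem
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory      # no limit set, so this resolves to node allocatable
EOF
kubectl logs downward-mem   # prints the node's allocatable memory in bytes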
May 29 14:21:20.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:21:20.835: INFO: namespace downward-api-108 deletion completed in 6.125910482s • [SLOW TEST:10.485 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:21:20.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 14:21:20.937: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 29 14:21:20.944: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:20.949: INFO: Number of nodes with available pods: 0 May 29 14:21:20.949: INFO: Node iruya-worker is running more than one daemon pod May 29 14:21:21.954: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:21.956: INFO: Number of nodes with available pods: 0 May 29 14:21:21.956: INFO: Node iruya-worker is running more than one daemon pod May 29 14:21:23.095: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:23.097: INFO: Number of nodes with available pods: 0 May 29 14:21:23.097: INFO: Node iruya-worker is running more than one daemon pod May 29 14:21:24.094: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:24.097: INFO: Number of nodes with available pods: 0 May 29 14:21:24.098: INFO: Node iruya-worker is running more than one daemon pod May 29 14:21:24.954: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:24.958: INFO: Number of nodes with available pods: 0 May 29 14:21:24.958: INFO: Node iruya-worker is running more than one daemon pod May 29 14:21:25.954: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:25.957: INFO: Number of nodes with available pods: 2 May 29 14:21:25.957: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 29 14:21:26.000: INFO: Wrong image for pod: daemon-set-fqwwh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:26.000: INFO: Wrong image for pod: daemon-set-spzgb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:26.010: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:27.015: INFO: Wrong image for pod: daemon-set-fqwwh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:27.015: INFO: Wrong image for pod: daemon-set-spzgb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:27.019: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:28.071: INFO: Wrong image for pod: daemon-set-fqwwh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:28.071: INFO: Wrong image for pod: daemon-set-spzgb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:28.075: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:29.015: INFO: Wrong image for pod: daemon-set-fqwwh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:29.015: INFO: Wrong image for pod: daemon-set-spzgb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:29.019: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:30.015: INFO: Wrong image for pod: daemon-set-fqwwh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:30.015: INFO: Wrong image for pod: daemon-set-spzgb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:30.015: INFO: Pod daemon-set-spzgb is not available May 29 14:21:30.034: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:31.016: INFO: Pod daemon-set-87dcj is not available May 29 14:21:31.016: INFO: Wrong image for pod: daemon-set-fqwwh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:31.020: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:32.014: INFO: Pod daemon-set-87dcj is not available May 29 14:21:32.014: INFO: Wrong image for pod: daemon-set-fqwwh. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:32.019: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:33.015: INFO: Pod daemon-set-87dcj is not available May 29 14:21:33.015: INFO: Wrong image for pod: daemon-set-fqwwh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:33.019: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:34.015: INFO: Wrong image for pod: daemon-set-fqwwh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:34.019: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:35.014: INFO: Wrong image for pod: daemon-set-fqwwh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:35.018: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:36.015: INFO: Wrong image for pod: daemon-set-fqwwh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:36.015: INFO: Pod daemon-set-fqwwh is not available May 29 14:21:36.019: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:37.015: INFO: Wrong image for pod: daemon-set-fqwwh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:37.015: INFO: Pod daemon-set-fqwwh is not available May 29 14:21:37.019: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:38.015: INFO: Wrong image for pod: daemon-set-fqwwh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:38.015: INFO: Pod daemon-set-fqwwh is not available May 29 14:21:38.019: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:39.015: INFO: Wrong image for pod: daemon-set-fqwwh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:39.015: INFO: Pod daemon-set-fqwwh is not available May 29 14:21:39.020: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:40.017: INFO: Wrong image for pod: daemon-set-fqwwh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 29 14:21:40.017: INFO: Pod daemon-set-fqwwh is not available May 29 14:21:40.020: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:41.015: INFO: Wrong image for pod: daemon-set-fqwwh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 29 14:21:41.015: INFO: Pod daemon-set-fqwwh is not available May 29 14:21:41.020: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:42.014: INFO: Pod daemon-set-96tlp is not available May 29 14:21:42.017: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 29 14:21:42.020: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:42.023: INFO: Number of nodes with available pods: 1 May 29 14:21:42.023: INFO: Node iruya-worker2 is running more than one daemon pod May 29 14:21:43.029: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:43.032: INFO: Number of nodes with available pods: 1 May 29 14:21:43.032: INFO: Node iruya-worker2 is running more than one daemon pod May 29 14:21:44.028: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:44.032: INFO: Number of nodes with available pods: 1 May 29 14:21:44.032: INFO: Node iruya-worker2 is running more than one daemon pod May 29 14:21:45.041: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:45.044: INFO: Number of nodes with available pods: 1 May 29 14:21:45.044: INFO: Node iruya-worker2 is running more than one daemon pod May 29 14:21:46.028: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 14:21:46.032: INFO: Number of nodes with available pods: 2 May 29 14:21:46.032: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-703, will wait for the garbage collector to delete the pods May 29 14:21:46.107: INFO: Deleting DaemonSet.extensions daemon-set took: 7.125244ms May 29 14:21:46.407: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.298384ms May 29 14:21:52.211: INFO: Number of nodes with available pods: 0 May 29 14:21:52.211: INFO: Number of running nodes: 0, number of available pods: 0 May 29 14:21:52.214: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-703/daemonsets","resourceVersion":"13560078"},"items":null} May 29 14:21:52.216: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-703/pods","resourceVersion":"13560078"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:21:52.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-703" for this suite. May 29 14:21:58.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:21:58.369: INFO: namespace daemonsets-703 deletion completed in 6.138652782s • [SLOW TEST:37.533 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:21:58.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 29 14:21:58.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7736' May 29 14:22:01.268: INFO: stderr: "" May 29 14:22:01.268: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 May 29 14:22:01.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7736' May 29 14:22:05.377: INFO: stderr: "" May 29 14:22:05.377: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:22:05.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7736" for this suite. 
May 29 14:22:11.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:22:11.477: INFO: namespace kubectl-7736 deletion completed in 6.096428188s • [SLOW TEST:13.107 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:22:11.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-52314a04-dde0-4860-a113-1baec0377614 STEP: Creating configMap with name cm-test-opt-upd-58c61806-b7b7-4c8a-b0a9-c15d90284df8 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-52314a04-dde0-4860-a113-1baec0377614 STEP: Updating configmap cm-test-opt-upd-58c61806-b7b7-4c8a-b0a9-c15d90284df8 STEP: Creating configMap with name cm-test-opt-create-c8e521d6-502b-4d43-9cc3-a248555ec660 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:23:48.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-520" for this suite. 
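
The projected-volume test above survives a ConfigMap being deleted mid-test, and picks up one created only after the pod started, because each projected source is marked optional. A sketch of the relevant pod fragment, with ConfigMap names shortened for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo            # illustrative
spec:
  containers:
  - name: cm-volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/projected-configmap-volumes
  volumes:
  - name: cm-volume
    projected:
      sources:
      - configMap:
          name: cm-test-opt-create   # may not exist yet when the pod starts
          optional: true
      - configMap:
          name: cm-test-opt-del      # deleting it later removes its files without breaking the pod
          optional: true

With optional: true the pod is admitted and keeps running whether or not the referenced ConfigMaps exist; the kubelet reconciles the mounted files as the objects appear, change, or disappear, which is the "waiting to observe update in volume" step above.
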
May 29 14:24:10.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:24:10.220: INFO: namespace projected-520 deletion completed in 22.119106156s • [SLOW TEST:118.744 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:24:10.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-d2fc STEP: Creating a pod to test atomic-volume-subpath May 29 14:24:10.307: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-d2fc" in namespace "subpath-5497" to be "success or failure" May 29 14:24:10.311: INFO: Pod "pod-subpath-test-configmap-d2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.80857ms May 29 14:24:12.315: INFO: Pod "pod-subpath-test-configmap-d2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007680976s May 29 14:24:14.320: INFO: Pod "pod-subpath-test-configmap-d2fc": Phase="Running", Reason="", readiness=true. Elapsed: 4.012212938s May 29 14:24:16.324: INFO: Pod "pod-subpath-test-configmap-d2fc": Phase="Running", Reason="", readiness=true. Elapsed: 6.016393279s May 29 14:24:18.328: INFO: Pod "pod-subpath-test-configmap-d2fc": Phase="Running", Reason="", readiness=true. Elapsed: 8.020935303s May 29 14:24:20.334: INFO: Pod "pod-subpath-test-configmap-d2fc": Phase="Running", Reason="", readiness=true. Elapsed: 10.026032688s May 29 14:24:22.337: INFO: Pod "pod-subpath-test-configmap-d2fc": Phase="Running", Reason="", readiness=true. Elapsed: 12.029682019s May 29 14:24:24.342: INFO: Pod "pod-subpath-test-configmap-d2fc": Phase="Running", Reason="", readiness=true. Elapsed: 14.034382309s May 29 14:24:26.347: INFO: Pod "pod-subpath-test-configmap-d2fc": Phase="Running", Reason="", readiness=true. Elapsed: 16.039245549s May 29 14:24:28.351: INFO: Pod "pod-subpath-test-configmap-d2fc": Phase="Running", Reason="", readiness=true. Elapsed: 18.043576687s May 29 14:24:30.355: INFO: Pod "pod-subpath-test-configmap-d2fc": Phase="Running", Reason="", readiness=true. Elapsed: 20.047875678s May 29 14:24:32.359: INFO: Pod "pod-subpath-test-configmap-d2fc": Phase="Running", Reason="", readiness=true. Elapsed: 22.051533422s May 29 14:24:34.365: INFO: Pod "pod-subpath-test-configmap-d2fc": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.057009764s May 29 14:24:36.369: INFO: Pod "pod-subpath-test-configmap-d2fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.061755551s STEP: Saw pod success May 29 14:24:36.369: INFO: Pod "pod-subpath-test-configmap-d2fc" satisfied condition "success or failure" May 29 14:24:36.373: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-d2fc container test-container-subpath-configmap-d2fc: STEP: delete the pod May 29 14:24:36.392: INFO: Waiting for pod pod-subpath-test-configmap-d2fc to disappear May 29 14:24:36.395: INFO: Pod pod-subpath-test-configmap-d2fc no longer exists STEP: Deleting pod pod-subpath-test-configmap-d2fc May 29 14:24:36.395: INFO: Deleting pod "pod-subpath-test-configmap-d2fc" in namespace "subpath-5497" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:24:36.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5497" for this suite. May 29 14:24:42.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:24:42.518: INFO: namespace subpath-5497 deletion completed in 6.118344425s • [SLOW TEST:32.297 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:24:42.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8491.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8491.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8491.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8491.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8491.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8491.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 29 14:24:48.666: INFO: DNS probes using dns-8491/dns-test-a786ab19-32ce-4beb-83a2-e69f08c183b9 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:24:48.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8491" for this suite. May 29 14:24:54.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:24:54.824: INFO: namespace dns-8491 deletion completed in 6.108166479s • [SLOW TEST:12.306 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:24:54.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:24:59.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8778" for this suite. 
May 29 14:25:22.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:25:22.075: INFO: namespace replication-controller-8778 deletion completed in 22.094954734s • [SLOW TEST:27.250 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:25:22.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-45e09234-3905-4502-9643-e968c2ce0d4b May 29 14:25:22.141: INFO: Pod name my-hostname-basic-45e09234-3905-4502-9643-e968c2ce0d4b: Found 0 pods out of 1 May 29 14:25:27.175: INFO: Pod name my-hostname-basic-45e09234-3905-4502-9643-e968c2ce0d4b: Found 1 pods out of 1 May 29 14:25:27.175: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-45e09234-3905-4502-9643-e968c2ce0d4b" are running May 29 14:25:27.178: INFO: Pod "my-hostname-basic-45e09234-3905-4502-9643-e968c2ce0d4b-6d4cg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-29 14:25:22 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-29 14:25:24 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-29 14:25:24 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-29 14:25:22 +0000 UTC Reason: Message:}]) May 29 14:25:27.178: INFO: Trying to dial the pod May 29 14:25:32.210: INFO: Controller my-hostname-basic-45e09234-3905-4502-9643-e968c2ce0d4b: Got expected result from replica 1 [my-hostname-basic-45e09234-3905-4502-9643-e968c2ce0d4b-6d4cg]: "my-hostname-basic-45e09234-3905-4502-9643-e968c2ce0d4b-6d4cg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:25:32.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3745" for this suite. 
May 29 14:25:38.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:25:38.329: INFO: namespace replication-controller-3745 deletion completed in 6.114584918s • [SLOW TEST:16.253 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:25:38.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 29 14:25:38.395: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6661,SelfLink:/api/v1/namespaces/watch-6661/configmaps/e2e-watch-test-watch-closed,UID:a3a14749-f34f-4814-a6d1-6485687bfa24,ResourceVersion:13560742,Generation:0,CreationTimestamp:2020-05-29 14:25:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 29 14:25:38.395: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6661,SelfLink:/api/v1/namespaces/watch-6661/configmaps/e2e-watch-test-watch-closed,UID:a3a14749-f34f-4814-a6d1-6485687bfa24,ResourceVersion:13560743,Generation:0,CreationTimestamp:2020-05-29 14:25:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 29 14:25:38.456: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6661,SelfLink:/api/v1/namespaces/watch-6661/configmaps/e2e-watch-test-watch-closed,UID:a3a14749-f34f-4814-a6d1-6485687bfa24,ResourceVersion:13560744,Generation:0,CreationTimestamp:2020-05-29 14:25:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 29 14:25:38.457: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6661,SelfLink:/api/v1/namespaces/watch-6661/configmaps/e2e-watch-test-watch-closed,UID:a3a14749-f34f-4814-a6d1-6485687bfa24,ResourceVersion:13560745,Generation:0,CreationTimestamp:2020-05-29 14:25:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:25:38.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6661" for this suite. May 29 14:25:44.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:25:44.557: INFO: namespace watch-6661 deletion completed in 6.085681211s • [SLOW TEST:6.228 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:25:44.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-b6e1bd63-c5b3-4b31-a9b3-b03525f7b380 in namespace container-probe-2741 May 29 14:25:48.648: INFO: Started pod test-webserver-b6e1bd63-c5b3-4b31-a9b3-b03525f7b380 in namespace container-probe-2741 STEP: checking the pod's current state and verifying that restartCount is present May 29 14:25:48.651: INFO: Initial 
restart count of pod test-webserver-b6e1bd63-c5b3-4b31-a9b3-b03525f7b380 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:29:49.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2741" for this suite. May 29 14:29:55.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:29:55.391: INFO: namespace container-probe-2741 deletion completed in 6.11840153s • [SLOW TEST:250.834 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:29:55.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9119 STEP: creating a selector STEP: Creating the service pods in kubernetes May 29 14:29:55.463: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 29 14:30:23.586: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.92 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9119 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 29 14:30:23.586: INFO: >>> kubeConfig: /root/.kube/config I0529 14:30:23.620110 7 log.go:172] (0xc000d16fd0) (0xc0023c92c0) Create stream I0529 14:30:23.620146 7 log.go:172] (0xc000d16fd0) (0xc0023c92c0) Stream added, broadcasting: 1 I0529 14:30:23.623616 7 log.go:172] (0xc000d16fd0) Reply frame received for 1 I0529 14:30:23.623659 7 log.go:172] (0xc000d16fd0) (0xc0017a0fa0) Create stream I0529 14:30:23.623673 7 log.go:172] (0xc000d16fd0) (0xc0017a0fa0) Stream added, broadcasting: 3 I0529 14:30:23.624605 7 log.go:172] (0xc000d16fd0) Reply frame received for 3 I0529 14:30:23.624654 7 log.go:172] (0xc000d16fd0) (0xc0023c9400) Create stream I0529 14:30:23.624673 7 log.go:172] (0xc000d16fd0) (0xc0023c9400) Stream added, broadcasting: 5 I0529 14:30:23.625762 7 log.go:172] (0xc000d16fd0) Reply frame received for 5 I0529 14:30:24.702532 7 log.go:172] (0xc000d16fd0) Data frame received for 3 I0529 14:30:24.702576 7 log.go:172] (0xc0017a0fa0) (3) Data frame handling I0529 14:30:24.702594 7 log.go:172] (0xc0017a0fa0) (3) Data frame sent I0529 14:30:24.702863 7 log.go:172] (0xc000d16fd0) Data frame received for 5 I0529 14:30:24.702892 7 log.go:172] (0xc0023c9400) 
(5) Data frame handling I0529 14:30:24.703298 7 log.go:172] (0xc000d16fd0) Data frame received for 3 I0529 14:30:24.703327 7 log.go:172] (0xc0017a0fa0) (3) Data frame handling I0529 14:30:24.705603 7 log.go:172] (0xc000d16fd0) Data frame received for 1 I0529 14:30:24.705639 7 log.go:172] (0xc0023c92c0) (1) Data frame handling I0529 14:30:24.705796 7 log.go:172] (0xc0023c92c0) (1) Data frame sent I0529 14:30:24.706022 7 log.go:172] (0xc000d16fd0) (0xc0023c92c0) Stream removed, broadcasting: 1 I0529 14:30:24.706128 7 log.go:172] (0xc000d16fd0) Go away received I0529 14:30:24.706234 7 log.go:172] (0xc000d16fd0) (0xc0023c92c0) Stream removed, broadcasting: 1 I0529 14:30:24.706266 7 log.go:172] (0xc000d16fd0) (0xc0017a0fa0) Stream removed, broadcasting: 3 I0529 14:30:24.706284 7 log.go:172] (0xc000d16fd0) (0xc0023c9400) Stream removed, broadcasting: 5 May 29 14:30:24.706: INFO: Found all expected endpoints: [netserver-0] May 29 14:30:24.710: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.219 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9119 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 29 14:30:24.710: INFO: >>> kubeConfig: /root/.kube/config I0529 14:30:24.750335 7 log.go:172] (0xc000af9810) (0xc0023688c0) Create stream I0529 14:30:24.750363 7 log.go:172] (0xc000af9810) (0xc0023688c0) Stream added, broadcasting: 1 I0529 14:30:24.753403 7 log.go:172] (0xc000af9810) Reply frame received for 1 I0529 14:30:24.753460 7 log.go:172] (0xc000af9810) (0xc002d45180) Create stream I0529 14:30:24.753627 7 log.go:172] (0xc000af9810) (0xc002d45180) Stream added, broadcasting: 3 I0529 14:30:24.755157 7 log.go:172] (0xc000af9810) Reply frame received for 3 I0529 14:30:24.755224 7 log.go:172] (0xc000af9810) (0xc0023c9540) Create stream I0529 14:30:24.755246 7 log.go:172] (0xc000af9810) (0xc0023c9540) Stream added, broadcasting: 5 I0529 14:30:24.756454 7 log.go:172] (0xc000af9810) Reply frame received for 5 I0529 14:30:25.846837 7 log.go:172] (0xc000af9810) Data frame received for 3 I0529 14:30:25.846869 7 log.go:172] (0xc002d45180) (3) Data frame handling I0529 14:30:25.846883 7 log.go:172] (0xc002d45180) (3) Data frame sent I0529 14:30:25.846951 7 log.go:172] (0xc000af9810) Data frame received for 5 I0529 14:30:25.846974 7 log.go:172] (0xc0023c9540) (5) Data frame handling I0529 14:30:25.846999 7 log.go:172] (0xc000af9810) Data frame received for 3 I0529 14:30:25.847021 7 log.go:172] (0xc002d45180) (3) Data frame handling I0529 14:30:25.848872 7 log.go:172] (0xc000af9810) Data frame received for 1 I0529 14:30:25.848909 7 log.go:172] (0xc0023688c0) (1) Data frame handling I0529 14:30:25.848944 7 log.go:172] (0xc0023688c0) (1) Data frame sent I0529 14:30:25.848971 7 log.go:172] (0xc000af9810) (0xc0023688c0) Stream removed, broadcasting: 1 I0529 14:30:25.849005 7 log.go:172] (0xc000af9810) Go away received I0529 14:30:25.849274 7 log.go:172] (0xc000af9810) (0xc0023688c0) Stream removed, broadcasting: 1 I0529 14:30:25.849307 7 log.go:172] (0xc000af9810) (0xc002d45180) Stream removed, broadcasting: 3 I0529 14:30:25.849326 7 log.go:172] (0xc000af9810) (0xc0023c9540) Stream removed, broadcasting: 5 May 29 14:30:25.849: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:30:25.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "pod-network-test-9119" for this suite. May 29 14:30:47.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:30:48.019: INFO: namespace pod-network-test-9119 deletion completed in 22.165990378s • [SLOW TEST:52.627 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:30:48.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:30:52.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5539" for this suite. 
May 29 14:30:58.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:30:58.346: INFO: namespace emptydir-wrapper-5539 deletion completed in 6.093255805s • [SLOW TEST:10.327 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:30:58.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9848.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9848.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 29 14:31:04.504: INFO: DNS probes using dns-9848/dns-test-4d3f342a-719d-443e-a411-ce51079744c2 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:31:04.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9848" for this suite. 
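
The dig loops above run inside purpose-built probe pods (wheezy and jessie variants) and write OK markers for each name that resolves. To reproduce the core lookup by hand, a one-shot pod along these lines (name and image illustrative) is enough:

apiVersion: v1
kind: Pod
metadata:
  name: dns-check
spec:
  restartPolicy: Never
  containers:
  - name: dns-check
    image: busybox
    command: ["nslookup", "kubernetes.default.svc.cluster.local"]

The lookup succeeds only if the pod's resolver points at the cluster DNS service, which is what markers like wheezy_udp@kubernetes.default.svc.cluster.local assert above.
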
May 29 14:31:10.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:31:10.724: INFO: namespace dns-9848 deletion completed in 6.143323038s • [SLOW TEST:12.378 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:31:10.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 29 14:31:10.809: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:31:18.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7945" for this suite. 
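
The single log line "PodSpec: initContainers in spec.initContainers" stands for a pod shaped like the sketch below: with restartPolicy Never, each init container must run to completion, in order, before the app container is invoked. Names and commands are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["true"]       # must exit 0 before init2 starts
  - name: init2
    image: busybox
    command: ["true"]       # must exit 0 before the app container starts
  containers:
  - name: run1
    image: busybox
    command: ["true"]

If any init container fails on a RestartNever pod, the pod is marked Failed and the app container never runs; the happy path verified here is all three exiting 0.
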
May 29 14:31:24.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:31:24.597: INFO: namespace init-container-7945 deletion completed in 6.092443161s • [SLOW TEST:13.873 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:31:24.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 14:31:46.676: INFO: Container started at 2020-05-29 14:31:27 +0000 UTC, pod became ready at 2020-05-29 14:31:46 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:31:46.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-396" for this suite. 
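
In the readiness test above the container started at 14:31:27 but the pod only became Ready at 14:31:46: the Ready condition is gated by the probe's initial delay, and the container is never restarted because readiness failures, unlike liveness failures, only keep the pod out of service. A sketch of such a probe (delay value illustrative, not the test's exact setting):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/nginx:1.14-alpine
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20   # the pod cannot report Ready before this elapses
      periodSeconds: 5
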
May 29 14:32:08.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:32:08.766: INFO: namespace container-probe-396 deletion completed in 22.08608542s • [SLOW TEST:44.168 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:32:08.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-1801882d-30e9-408d-8f59-f9fff3cc9bdb STEP: Creating a pod to test consume secrets May 29 14:32:08.871: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-367f51ae-f605-40d1-a8eb-c9bcb20bbb7a" in namespace "projected-6194" to be "success or failure" May 29 14:32:08.886: INFO: Pod "pod-projected-secrets-367f51ae-f605-40d1-a8eb-c9bcb20bbb7a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.788854ms May 29 14:32:10.890: INFO: Pod "pod-projected-secrets-367f51ae-f605-40d1-a8eb-c9bcb20bbb7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019562119s May 29 14:32:12.895: INFO: Pod "pod-projected-secrets-367f51ae-f605-40d1-a8eb-c9bcb20bbb7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024515375s STEP: Saw pod success May 29 14:32:12.895: INFO: Pod "pod-projected-secrets-367f51ae-f605-40d1-a8eb-c9bcb20bbb7a" satisfied condition "success or failure" May 29 14:32:12.898: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-367f51ae-f605-40d1-a8eb-c9bcb20bbb7a container projected-secret-volume-test: STEP: delete the pod May 29 14:32:13.027: INFO: Waiting for pod pod-projected-secrets-367f51ae-f605-40d1-a8eb-c9bcb20bbb7a to disappear May 29 14:32:13.055: INFO: Pod pod-projected-secrets-367f51ae-f605-40d1-a8eb-c9bcb20bbb7a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:32:13.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6194" for this suite. 
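
The defaultMode in the test above sets the permission bits on every file the projected secret materializes, which the test pod verifies from inside the mount. The relevant fragment, with the secret name shortened and the mode illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      defaultMode: 0400              # files appear as r-------- inside the pod
      sources:
      - secret:
          name: projected-secret-test   # shortened; the generated name is above
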
May 29 14:32:19.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:32:19.184: INFO: namespace projected-6194 deletion completed in 6.126131935s • [SLOW TEST:10.418 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:32:19.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-809b414b-6911-4ca3-8db4-079a5f4ed65f STEP: Creating a pod to test consume secrets May 29 14:32:19.294: INFO: Waiting up to 5m0s for pod "pod-secrets-7bce86f6-df99-4adf-b51f-b74904d13136" in namespace "secrets-6677" to be "success or failure" May 29 14:32:19.296: INFO: Pod "pod-secrets-7bce86f6-df99-4adf-b51f-b74904d13136": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212037ms May 29 14:32:21.299: INFO: Pod "pod-secrets-7bce86f6-df99-4adf-b51f-b74904d13136": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005711703s May 29 14:32:23.303: INFO: Pod "pod-secrets-7bce86f6-df99-4adf-b51f-b74904d13136": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009838222s STEP: Saw pod success May 29 14:32:23.304: INFO: Pod "pod-secrets-7bce86f6-df99-4adf-b51f-b74904d13136" satisfied condition "success or failure" May 29 14:32:23.307: INFO: Trying to get logs from node iruya-worker pod pod-secrets-7bce86f6-df99-4adf-b51f-b74904d13136 container secret-volume-test: STEP: delete the pod May 29 14:32:23.332: INFO: Waiting for pod pod-secrets-7bce86f6-df99-4adf-b51f-b74904d13136 to disappear May 29 14:32:23.336: INFO: Pod pod-secrets-7bce86f6-df99-4adf-b51f-b74904d13136 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:32:23.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6677" for this suite. 
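
"With mappings" above means the secret's keys are remapped to custom file paths via items, rather than appearing under their literal key names. A fragment with illustrative key and path:

apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map    # shortened; the generated name is above
      items:
      - key: data-1                  # illustrative key in the secret
        path: new-path-data-1        # mounted at this path instead of data-1
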
May 29 14:32:29.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:32:29.425: INFO: namespace secrets-6677 deletion completed in 6.085559313s • [SLOW TEST:10.240 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:32:29.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-e8b5fbbe-179e-4618-90fe-43f1d5a888a0 STEP: Creating a pod to test consume configMaps May 29 14:32:29.501: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-25e1aa11-aaac-4598-9449-e662f1be7d90" in namespace "projected-8528" to be "success or failure" May 29 14:32:29.504: INFO: Pod "pod-projected-configmaps-25e1aa11-aaac-4598-9449-e662f1be7d90": Phase="Pending", Reason="", readiness=false. Elapsed: 3.609271ms May 29 14:32:31.508: INFO: Pod "pod-projected-configmaps-25e1aa11-aaac-4598-9449-e662f1be7d90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00781939s May 29 14:32:33.513: INFO: Pod "pod-projected-configmaps-25e1aa11-aaac-4598-9449-e662f1be7d90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012305089s STEP: Saw pod success May 29 14:32:33.513: INFO: Pod "pod-projected-configmaps-25e1aa11-aaac-4598-9449-e662f1be7d90" satisfied condition "success or failure" May 29 14:32:33.516: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-25e1aa11-aaac-4598-9449-e662f1be7d90 container projected-configmap-volume-test: STEP: delete the pod May 29 14:32:33.535: INFO: Waiting for pod pod-projected-configmaps-25e1aa11-aaac-4598-9449-e662f1be7d90 to disappear May 29 14:32:33.539: INFO: Pod pod-projected-configmaps-25e1aa11-aaac-4598-9449-e662f1be7d90 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:32:33.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8528" for this suite. 
May 29 14:32:39.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:32:39.643: INFO: namespace projected-8528 deletion completed in 6.100583102s • [SLOW TEST:10.217 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:32:39.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-f34d3c6f-f193-4484-81b1-26dbc4798352 STEP: Creating a pod to test consume configMaps May 29 14:32:39.781: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4773048f-a9c4-452a-add6-4ce6bb81deaa" in namespace "projected-9196" to be "success or failure" May 29 14:32:39.864: INFO: Pod "pod-projected-configmaps-4773048f-a9c4-452a-add6-4ce6bb81deaa": Phase="Pending", Reason="", readiness=false. Elapsed: 82.676973ms May 29 14:32:41.870: INFO: Pod "pod-projected-configmaps-4773048f-a9c4-452a-add6-4ce6bb81deaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089015932s May 29 14:32:43.874: INFO: Pod "pod-projected-configmaps-4773048f-a9c4-452a-add6-4ce6bb81deaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092553911s STEP: Saw pod success May 29 14:32:43.874: INFO: Pod "pod-projected-configmaps-4773048f-a9c4-452a-add6-4ce6bb81deaa" satisfied condition "success or failure" May 29 14:32:43.984: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-4773048f-a9c4-452a-add6-4ce6bb81deaa container projected-configmap-volume-test: STEP: delete the pod May 29 14:32:44.033: INFO: Waiting for pod pod-projected-configmaps-4773048f-a9c4-452a-add6-4ce6bb81deaa to disappear May 29 14:32:44.036: INFO: Pod pod-projected-configmaps-4773048f-a9c4-452a-add6-4ce6bb81deaa no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:32:44.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9196" for this suite. 
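The "as non-root" variant differs from the defaultMode test above mainly in the pod's security context: the container is forced to run under a non-zero UID, verifying the mounted ConfigMap files remain readable to an unprivileged user. A sketch of the relevant fields, with an arbitrary UID and illustrative names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1000) // any non-zero UID makes the container non-root
	spec := corev1.PodSpec{
		RestartPolicy:   corev1.RestartPolicyNever,
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
		Containers: []corev1.Container{{
			Name:    "projected-configmap-volume-test",
			Image:   "busybox",
			Command: []string{"sh", "-c", "cat /etc/configmap/*"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "configmap-volume",
				MountPath: "/etc/configmap",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "configmap-volume",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
				},
			},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}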
May 29 14:32:50.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:32:50.156: INFO: namespace projected-9196 deletion completed in 6.116289962s • [SLOW TEST:10.512 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:32:50.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments May 29 14:32:50.237: INFO: Waiting up to 5m0s for pod "client-containers-1f7f9a49-db73-4391-a04d-361a68003bbd" in namespace "containers-3247" to be "success or failure" May 29 14:32:50.247: INFO: Pod "client-containers-1f7f9a49-db73-4391-a04d-361a68003bbd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.893837ms May 29 14:32:52.252: INFO: Pod "client-containers-1f7f9a49-db73-4391-a04d-361a68003bbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014497139s May 29 14:32:54.256: INFO: Pod "client-containers-1f7f9a49-db73-4391-a04d-361a68003bbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018502359s STEP: Saw pod success May 29 14:32:54.256: INFO: Pod "client-containers-1f7f9a49-db73-4391-a04d-361a68003bbd" satisfied condition "success or failure" May 29 14:32:54.259: INFO: Trying to get logs from node iruya-worker2 pod client-containers-1f7f9a49-db73-4391-a04d-361a68003bbd container test-container: STEP: delete the pod May 29 14:32:54.293: INFO: Waiting for pod client-containers-1f7f9a49-db73-4391-a04d-361a68003bbd to disappear May 29 14:32:54.302: INFO: Pod client-containers-1f7f9a49-db73-4391-a04d-361a68003bbd no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:32:54.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3247" for this suite. 
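What this test pins down is the mapping between the pod API and image metadata: a container's command replaces the image's ENTRYPOINT and its args replaces the image's CMD, so setting only Args (as here) keeps the entrypoint and overrides the default arguments. A sketch of the container, with an illustrative image and argument list:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "docker.io/library/busybox:1.29",
		// Only Args is set: the image's ENTRYPOINT still runs, but with
		// these arguments in place of the image's default CMD.
		Args: []string{"echo", "override", "arguments"},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}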
May 29 14:33:00.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:33:00.394: INFO: namespace containers-3247 deletion completed in 6.088815443s • [SLOW TEST:10.238 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:33:00.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 14:33:00.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 29 14:33:00.568: INFO: stderr: "" May 29 14:33:00.568: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:33:00.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5128" for this suite. 
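`kubectl version` prints the client build info alongside the server's /version payload, which is why the test only has to assert that every field shows up in stdout. The same server data can be fetched programmatically; a sketch using client-go's discovery client (module versions matching this 1.15-era cluster assumed, kubeconfig path as in the log):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	// ServerVersion hits /version and returns the same struct kubectl prints.
	v, err := dc.ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("Server Version: %s (git %s)\n", v.GitVersion, v.GitCommit)
}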
May 29 14:33:06.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:33:06.721: INFO: namespace kubectl-5128 deletion completed in 6.147806371s • [SLOW TEST:6.327 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:33:06.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-c02e7671-95a1-49e1-ac65-c4f8fb6ea539 STEP: Creating a pod to test consume secrets May 29 14:33:06.776: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fd6d191b-4da1-4eaa-820b-18a5d1ba41f6" in namespace "projected-1140" to be "success or failure" May 29 14:33:06.781: INFO: Pod "pod-projected-secrets-fd6d191b-4da1-4eaa-820b-18a5d1ba41f6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.278646ms May 29 14:33:08.785: INFO: Pod "pod-projected-secrets-fd6d191b-4da1-4eaa-820b-18a5d1ba41f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008733904s May 29 14:33:10.823: INFO: Pod "pod-projected-secrets-fd6d191b-4da1-4eaa-820b-18a5d1ba41f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047204696s STEP: Saw pod success May 29 14:33:10.823: INFO: Pod "pod-projected-secrets-fd6d191b-4da1-4eaa-820b-18a5d1ba41f6" satisfied condition "success or failure" May 29 14:33:10.826: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-fd6d191b-4da1-4eaa-820b-18a5d1ba41f6 container secret-volume-test: STEP: delete the pod May 29 14:33:10.976: INFO: Waiting for pod pod-projected-secrets-fd6d191b-4da1-4eaa-820b-18a5d1ba41f6 to disappear May 29 14:33:11.020: INFO: Pod pod-projected-secrets-fd6d191b-4da1-4eaa-820b-18a5d1ba41f6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:33:11.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1140" for this suite. 
May 29 14:33:17.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:33:17.120: INFO: namespace projected-1140 deletion completed in 6.096119438s • [SLOW TEST:10.399 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:33:17.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-7846e178-4132-4f68-8bb6-bd3ad77aaaec STEP: Creating a pod to test consume configMaps May 29 14:33:17.226: INFO: Waiting up to 5m0s for pod "pod-configmaps-6e6291b9-6f47-4417-ba84-972497f11d2a" in namespace "configmap-2042" to be "success or failure" May 29 14:33:17.235: INFO: Pod "pod-configmaps-6e6291b9-6f47-4417-ba84-972497f11d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.097987ms May 29 14:33:19.239: INFO: Pod "pod-configmaps-6e6291b9-6f47-4417-ba84-972497f11d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012769035s May 29 14:33:21.244: INFO: Pod "pod-configmaps-6e6291b9-6f47-4417-ba84-972497f11d2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017755197s STEP: Saw pod success May 29 14:33:21.244: INFO: Pod "pod-configmaps-6e6291b9-6f47-4417-ba84-972497f11d2a" satisfied condition "success or failure" May 29 14:33:21.246: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6e6291b9-6f47-4417-ba84-972497f11d2a container configmap-volume-test: STEP: delete the pod May 29 14:33:21.328: INFO: Waiting for pod pod-configmaps-6e6291b9-6f47-4417-ba84-972497f11d2a to disappear May 29 14:33:21.350: INFO: Pod pod-configmaps-6e6291b9-6f47-4417-ba84-972497f11d2a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:33:21.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2042" for this suite. 
May 29 14:33:27.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:33:27.482: INFO: namespace configmap-2042 deletion completed in 6.12804911s • [SLOW TEST:10.362 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:33:27.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-5f09d6e4-f726-424f-9fdd-86b32bdf1b4a STEP: Creating a pod to test consume configMaps May 29 14:33:27.542: INFO: Waiting up to 5m0s for pod "pod-configmaps-78bf0f3a-0f69-4c7c-87b2-3c3e3b39739d" in namespace "configmap-7956" to be "success or failure" May 29 14:33:27.547: INFO: Pod "pod-configmaps-78bf0f3a-0f69-4c7c-87b2-3c3e3b39739d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.456824ms May 29 14:33:29.552: INFO: Pod "pod-configmaps-78bf0f3a-0f69-4c7c-87b2-3c3e3b39739d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009517375s May 29 14:33:31.557: INFO: Pod "pod-configmaps-78bf0f3a-0f69-4c7c-87b2-3c3e3b39739d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014721779s STEP: Saw pod success May 29 14:33:31.557: INFO: Pod "pod-configmaps-78bf0f3a-0f69-4c7c-87b2-3c3e3b39739d" satisfied condition "success or failure" May 29 14:33:31.561: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-78bf0f3a-0f69-4c7c-87b2-3c3e3b39739d container configmap-volume-test: STEP: delete the pod May 29 14:33:31.609: INFO: Waiting for pod pod-configmaps-78bf0f3a-0f69-4c7c-87b2-3c3e3b39739d to disappear May 29 14:33:31.613: INFO: Pod pod-configmaps-78bf0f3a-0f69-4c7c-87b2-3c3e3b39739d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:33:31.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7956" for this suite. 
May 29 14:33:37.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:33:37.707: INFO: namespace configmap-7956 deletion completed in 6.090611317s • [SLOW TEST:10.224 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:33:37.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 29 14:33:45.846: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 29 14:33:45.849: INFO: Pod pod-with-prestop-exec-hook still exists May 29 14:33:47.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 29 14:33:47.853: INFO: Pod pod-with-prestop-exec-hook still exists May 29 14:33:49.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 29 14:33:49.853: INFO: Pod pod-with-prestop-exec-hook still exists May 29 14:33:51.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 29 14:33:51.853: INFO: Pod pod-with-prestop-exec-hook still exists May 29 14:33:53.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 29 14:33:53.854: INFO: Pod pod-with-prestop-exec-hook still exists May 29 14:33:55.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 29 14:33:55.853: INFO: Pod pod-with-prestop-exec-hook still exists May 29 14:33:57.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 29 14:33:57.854: INFO: Pod pod-with-prestop-exec-hook still exists May 29 14:33:59.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 29 14:33:59.853: INFO: Pod pod-with-prestop-exec-hook still exists May 29 14:34:01.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 29 14:34:01.855: INFO: Pod pod-with-prestop-exec-hook still exists May 29 14:34:03.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 29 14:34:03.853: INFO: Pod pod-with-prestop-exec-hook still exists May 29 14:34:05.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 29 14:34:05.854: INFO: Pod pod-with-prestop-exec-hook still exists May 29 14:34:07.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 29 14:34:07.853: INFO: Pod 
pod-with-prestop-exec-hook still exists May 29 14:34:09.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 29 14:34:09.853: INFO: Pod pod-with-prestop-exec-hook still exists May 29 14:34:11.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 29 14:34:11.853: INFO: Pod pod-with-prestop-exec-hook still exists May 29 14:34:13.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 29 14:34:13.852: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:34:13.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-182" for this suite. May 29 14:34:35.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:34:35.949: INFO: namespace container-lifecycle-hook-182 deletion completed in 22.088862026s • [SLOW TEST:58.242 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:34:35.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-5d04cef4-9500-4308-909d-68d1d8cefd6c STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:34:40.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1681" for this suite. 
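Beyond the usual string Data map, a ConfigMap carries a separate BinaryData map for payloads that are not valid UTF-8; the test above mounts both and checks the bytes survive the round trip. A sketch of such an object (name and bytes illustrative); note that a key may appear in Data or BinaryData, but not both:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-example"},
		Data:       map[string]string{"data": "value-1"},
		// BinaryData is base64-encoded on the wire, so arbitrary bytes
		// (here deliberately invalid UTF-8) come back unchanged.
		BinaryData: map[string][]byte{"dump": {0xff, 0xfe, 0xfd}},
	}
	out, _ := json.MarshalIndent(cm, "", "  ")
	fmt.Println(string(out))
}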
May 29 14:35:02.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:35:02.333: INFO: namespace configmap-1681 deletion completed in 22.143157358s • [SLOW TEST:26.384 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:35:02.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode May 29 14:35:02.425: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1836" to be "success or failure" May 29 14:35:02.435: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.928312ms May 29 14:35:04.597: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171939657s May 29 14:35:06.602: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176505845s May 29 14:35:08.607: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.181573647s STEP: Saw pod success May 29 14:35:08.607: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 29 14:35:08.610: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 29 14:35:08.645: INFO: Waiting for pod pod-host-path-test to disappear May 29 14:35:08.657: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:35:08.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1836" for this suite. 
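The hostPath test mounts a directory from the node itself and reads its mode from inside the container. A sketch of the volume, with an illustrative node path; the Type field controls what the kubelet does when the path does not already exist:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/tmp/test-volume", // node-local path, illustrative
				Type: &hostPathType,      // create the directory if absent
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}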
May 29 14:35:14.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:35:14.769: INFO: namespace hostpath-1836 deletion completed in 6.107320846s • [SLOW TEST:12.435 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:35:14.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-38821bf8-16ae-4be9-b378-ad20a0aa4a51 in namespace container-probe-8232 May 29 14:35:18.836: INFO: Started pod busybox-38821bf8-16ae-4be9-b378-ad20a0aa4a51 in namespace container-probe-8232 STEP: checking the pod's current state and verifying that restartCount is present May 29 14:35:18.839: INFO: Initial restart count of pod busybox-38821bf8-16ae-4be9-b378-ad20a0aa4a51 is 0 May 29 14:36:06.997: INFO: Restart count of pod container-probe-8232/busybox-38821bf8-16ae-4be9-b378-ad20a0aa4a51 is now 1 (48.157434928s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:36:07.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8232" for this suite. 
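The restart observed above is driven entirely by the probe definition: the container creates /tmp/health, removes it after a few seconds, the exec probe's non-zero exit then fails the liveness check, and the kubelet restarts the container, taking restartCount to 1. A sketch of such a container (timings illustrative; Handler is the field name in the v1.15 API this run uses):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "busybox",
		Image:   "busybox",
		Command: []string{"sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				// "cat" exits non-zero once the file is gone, failing the probe.
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
			FailureThreshold:    1,
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}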
May 29 14:36:13.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:36:13.145: INFO: namespace container-probe-8232 deletion completed in 6.126415194s • [SLOW TEST:58.375 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:36:13.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 29 14:36:18.278: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:36:19.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-455" for this suite. May 29 14:36:41.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:36:41.450: INFO: namespace replicaset-455 deletion completed in 22.148737141s • [SLOW TEST:28.305 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:36:41.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 29 14:36:49.577: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 29 14:36:49.581: INFO: Pod pod-with-poststart-http-hook still exists May 29 14:36:51.582: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 29 14:36:51.586: INFO: Pod pod-with-poststart-http-hook still exists May 29 14:36:53.582: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 29 14:36:53.585: INFO: Pod pod-with-poststart-http-hook still exists May 29 14:36:55.582: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 29 14:36:55.585: INFO: Pod pod-with-poststart-http-hook still exists May 29 14:36:57.582: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 29 14:36:57.585: INFO: Pod pod-with-poststart-http-hook still exists May 29 14:36:59.582: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 29 14:36:59.587: INFO: Pod pod-with-poststart-http-hook still exists May 29 14:37:01.582: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 29 14:37:01.586: INFO: Pod pod-with-poststart-http-hook still exists May 29 14:37:03.582: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 29 14:37:03.586: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:37:03.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4601" for this suite. 
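Both lifecycle-hook tests in this run share one shape: a helper pod serves /echo over HTTP, and the pod under test fires a hook at it, either an HTTP GET (the poststart case above) or an exec'd command (the prestop case earlier); the long run of "still exists" polling is just the graceful-deletion window in which the PreStop hook gets to execute. A sketch of the hook fields (handler address and commands illustrative; Handler per the v1.15 API):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	lc := corev1.Lifecycle{
		PostStart: &corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/echo?msg=poststart",
				Host: "10.244.1.2", // illustrative: the handler pod's IP
				Port: intstr.FromInt(8080),
			},
		},
		PreStop: &corev1.Handler{
			Exec: &corev1.ExecAction{
				Command: []string{"sh", "-c", "echo prestop > /proc/1/fd/1"},
			},
		},
	}
	out, _ := json.MarshalIndent(lc, "", "  ")
	fmt.Println(string(out))
}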
May 29 14:37:25.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:37:25.691: INFO: namespace container-lifecycle-hook-4601 deletion completed in 22.100082371s • [SLOW TEST:44.240 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:37:25.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0529 14:37:26.800778 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 29 14:37:26.800: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:37:26.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5067" for this suite. 
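"Not orphaning" corresponds to a deletion propagation policy of Background (or Foreground): the garbage collector follows ownerReferences from Deployment to ReplicaSet to Pods and removes them all, which is exactly what the test waits for. A sketch of issuing such a delete with client-go (1.15-era API without contexts; namespace and deployment name illustrative):

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Background lets the garbage collector remove the owned ReplicaSets
	// and Pods after the Deployment is gone; Orphan would leave them behind.
	policy := metav1.DeletePropagationBackground
	if err := client.AppsV1().Deployments("default").Delete(
		"example-deployment", &metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}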
May 29 14:37:32.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:37:32.897: INFO: namespace gc-5067 deletion completed in 6.093269948s • [SLOW TEST:7.205 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:37:32.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 29 14:37:37.058: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:37:37.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9860" for this suite. 
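FallbackToLogsOnError copies the tail of the container log into the termination message only when the container fails; since this pod exits 0 and writes nothing to the termination path, the test expects the message to stay empty. A sketch of the container fields involved (image and command illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "exit 0"}, // succeeds, writes nothing
		// On success nothing is copied from the logs, so the termination
		// message stays empty; only a failing container triggers the fallback.
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}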
May 29 14:37:43.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:37:43.224: INFO: namespace container-runtime-9860 deletion completed in 6.117850191s • [SLOW TEST:10.327 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:37:43.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy May 29 14:37:43.303: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix267549127/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:37:43.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-242" for this suite. 
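The proxy test only checks that /api/ is reachable through the socket. Outside the suite, the same check can be made with a stock net/http client whose transport dials the unix socket; pure standard library, socket path illustrative:

package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
)

func main() {
	// Talks to `kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock`.
	// The host in the URL is ignored: every connection is dialed
	// through the socket instead of TCP.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return net.Dial("unix", "/tmp/kubectl-proxy.sock")
			},
		},
	}
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}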
May 29 14:37:49.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:37:49.492: INFO: namespace kubectl-242 deletion completed in 6.090099772s • [SLOW TEST:6.268 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:37:49.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 29 14:37:49.563: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:37:57.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6742" for this suite. May 29 14:38:19.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:38:19.460: INFO: namespace init-container-6742 deletion completed in 22.110961994s • [SLOW TEST:29.966 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:38:19.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 29 14:38:19.505: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.178255ms) May 29 14:38:19.528: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 23.098496ms) May 29 14:38:19.532: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.708022ms) May 29 14:38:19.535: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.409769ms) May 29 14:38:19.539: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.559407ms) May 29 14:38:19.542: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.337287ms) May 29 14:38:19.546: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.230608ms) May 29 14:38:19.549: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.412263ms) May 29 14:38:19.552: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.261209ms) May 29 14:38:19.556: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.970562ms) May 29 14:38:19.560: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.317757ms) May 29 14:38:19.563: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.585304ms) May 29 14:38:19.567: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.477861ms) May 29 14:38:19.570: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.370872ms) May 29 14:38:19.573: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.846652ms) May 29 14:38:19.576: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.837259ms) May 29 14:38:19.579: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.157395ms) May 29 14:38:19.582: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.994629ms) May 29 14:38:19.585: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.071988ms) May 29 14:38:19.588: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 2.697115ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:38:19.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2253" for this suite. May 29 14:38:25.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:38:25.704: INFO: namespace proxy-2253 deletion completed in 6.112909623s • [SLOW TEST:6.243 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:38:25.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 29 14:38:25.788: INFO: Waiting up to 5m0s for pod "pod-236423bc-8fe4-4f05-b825-42d0a0529df1" in namespace "emptydir-9330" to be "success or failure" May 29 14:38:25.799: INFO: Pod "pod-236423bc-8fe4-4f05-b825-42d0a0529df1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.728654ms May 29 14:38:27.803: INFO: Pod "pod-236423bc-8fe4-4f05-b825-42d0a0529df1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015127943s May 29 14:38:29.807: INFO: Pod "pod-236423bc-8fe4-4f05-b825-42d0a0529df1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019163115s STEP: Saw pod success May 29 14:38:29.807: INFO: Pod "pod-236423bc-8fe4-4f05-b825-42d0a0529df1" satisfied condition "success or failure" May 29 14:38:29.810: INFO: Trying to get logs from node iruya-worker pod pod-236423bc-8fe4-4f05-b825-42d0a0529df1 container test-container: STEP: delete the pod May 29 14:38:29.873: INFO: Waiting for pod pod-236423bc-8fe4-4f05-b825-42d0a0529df1 to disappear May 29 14:38:29.941: INFO: Pod pod-236423bc-8fe4-4f05-b825-42d0a0529df1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:38:29.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9330" for this suite. 
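The (non-root,0777,default) triple names the three knobs this EmptyDir test family sweeps: the user the container runs as, the file mode exercised in the volume, and the volume medium ("default" is node disk; Memory backs it with tmpfs). A minimal sketch of the non-root, default-medium case, names illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -ld /test-volume && touch /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumDefault = node disk; StorageMediumMemory
					// would use tmpfs instead.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}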
May 29 14:38:35.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 29 14:38:36.034: INFO: namespace emptydir-9330 deletion completed in 6.089957381s • [SLOW TEST:10.330 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 29 14:38:36.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 29 14:38:36.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-692' May 29 14:38:38.615: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 29 14:38:38.615: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 May 29 14:38:38.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-692' May 29 14:38:38.744: INFO: stderr: "" May 29 14:38:38.744: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 29 14:38:38.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-692" for this suite. 
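As the deprecation warning in the log says, `kubectl run --generator=job/v1` is on its way out in favor of `kubectl create` or an explicit manifest; the object it generates is an ordinary batch/v1 Job whose pod template carries the OnFailure restart policy. A sketch of the equivalent object:

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// What --restart=OnFailure maps to; it is the reason
					// kubectl picked the job/v1 generator at all.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-job",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(out))
}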
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 14:38:44.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
May 29 14:38:50.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-44abb75d-581f-4c60-87f9-fa7f7065f168 -c busybox-main-container --namespace=emptydir-9278 -- cat /usr/share/volumeshare/shareddata.txt'
May 29 14:38:51.210: INFO: stderr: "I0529 14:38:51.103390 3739 log.go:172] (0xc000806370) (0xc00020ca00) Create stream\nI0529 14:38:51.103457 3739 log.go:172] (0xc000806370) (0xc00020ca00) Stream added, broadcasting: 1\nI0529 14:38:51.105978 3739 log.go:172] (0xc000806370) Reply frame received for 1\nI0529 14:38:51.106045 3739 log.go:172] (0xc000806370) (0xc000900000) Create stream\nI0529 14:38:51.106077 3739 log.go:172] (0xc000806370) (0xc000900000) Stream added, broadcasting: 3\nI0529 14:38:51.107247 3739 log.go:172] (0xc000806370) Reply frame received for 3\nI0529 14:38:51.107295 3739 log.go:172] (0xc000806370) (0xc00020caa0) Create stream\nI0529 14:38:51.107312 3739 log.go:172] (0xc000806370) (0xc00020caa0) Stream added, broadcasting: 5\nI0529 14:38:51.108548 3739 log.go:172] (0xc000806370) Reply frame received for 5\nI0529 14:38:51.201617 3739 log.go:172] (0xc000806370) Data frame received for 5\nI0529 14:38:51.201646 3739 log.go:172] (0xc00020caa0) (5) Data frame handling\nI0529 14:38:51.201663 3739 log.go:172] (0xc000806370) Data frame received for 3\nI0529 14:38:51.201674 3739 log.go:172] (0xc000900000) (3) Data frame handling\nI0529 14:38:51.201690 3739 log.go:172] (0xc000900000) (3) Data frame sent\nI0529 14:38:51.201700 3739 log.go:172] (0xc000806370) Data frame received for 3\nI0529 14:38:51.201706 3739 log.go:172] (0xc000900000) (3) Data frame handling\nI0529 14:38:51.203301 3739 log.go:172] (0xc000806370) Data frame received for 1\nI0529 14:38:51.203333 3739 log.go:172] (0xc00020ca00) (1) Data frame handling\nI0529 14:38:51.203350 3739 log.go:172] (0xc00020ca00) (1) Data frame sent\nI0529 14:38:51.203384 3739 log.go:172] (0xc000806370) (0xc00020ca00) Stream removed, broadcasting: 1\nI0529 14:38:51.203412 3739 log.go:172] (0xc000806370) Go away received\nI0529 14:38:51.204813 3739 log.go:172] (0xc000806370) (0xc00020ca00) Stream removed, broadcasting: 1\nI0529 14:38:51.204843 3739 log.go:172] (0xc000806370) (0xc000900000) Stream removed, broadcasting: 3\nI0529 14:38:51.204873 3739 log.go:172] (0xc000806370) (0xc00020caa0) Stream removed, broadcasting: 5\n"
May 29 14:38:51.210: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 14:38:51.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9278" for this suite.
May 29 14:38:57.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 14:38:57.375: INFO: namespace emptydir-9278 deletion completed in 6.160392589s

• [SLOW TEST:12.535 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
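The exec above is the heart of the spec: one container reads a file that another container wrote into the same emptyDir. A minimal hand-rolled version, with container names, mount path, and message taken from the log but a hypothetical pod name (the harness appends a random UID):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-sharedvolume-demo     # hypothetical; stands in for pod-sharedvolume-44abb75d-...
    spec:
      containers:
      - name: busybox-main-container  # reader side, as in the exec above
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: volumeshare
          mountPath: /usr/share/volumeshare
      - name: busybox-sub-container   # writer side
        image: busybox
        command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
        volumeMounts:
        - name: volumeshare
          mountPath: /usr/share/volumeshare
      volumes:
      - name: volumeshare
        emptyDir: {}                  # one backing directory, visible to both containers
    EOF
    kubectl exec pod-sharedvolume-demo -c busybox-main-container -- cat /usr/share/volumeshare/shareddata.txt

Both mounts point at the same backing directory on the node, so the write from busybox-sub-container is immediately visible to busybox-main-container; if the exec races the writer at startup, retrying after a second or two suffices.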
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 29 14:38:57.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 29 14:38:57.445: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 29 14:38:59.603: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 29 14:39:00.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7177" for this suite.
May 29 14:39:06.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 29 14:39:07.077: INFO: namespace replication-controller-7177 deletion completed in 6.461317477s

• [SLOW TEST:9.702 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
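The same failure condition can be provoked outside the harness with the sequence the steps describe: create a quota of two pods, create an rc asking for three, inspect the rc's status, then scale down. A minimal sketch under those assumptions (names hypothetical; ReplicaFailure is the condition type the controller sets when a pod create is rejected):

    kubectl create quota condition-test --hard=pods=2
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: condition-test
    spec:
      replicas: 3                     # one more than the quota allows
      selector:
        app: condition-test
      template:
        metadata:
          labels:
            app: condition-test
        spec:
          containers:
          - name: nginx
            image: docker.io/library/nginx:1.14-alpine
    EOF
    # expect a ReplicaFailure condition with reason FailedCreate while quota blocks the third pod
    kubectl get rc condition-test -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].reason}'
    kubectl scale rc condition-test --replicas=2   # after this, the condition should clear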
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 29 14:39:07.078: INFO: Running AfterSuite actions on all nodes
May 29 14:39:07.078: INFO: Running AfterSuite actions on node 1
May 29 14:39:07.078: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 6192.269 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS
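For context on the run as a whole: a focused conformance pass like this one is typically driven by pointing the compiled e2e.test binary at a cluster and filtering on the [Conformance] tag. A hedged sketch, since the exact flags vary by release and environment:

    e2e.test --kubeconfig=/root/.kube/config --ginkgo.focus='\[Conformance\]'
    # depending on the environment, a --provider flag (e.g. --provider=skeleton) may also be
    # required, and wrappers such as sonobuoy are commonly used to manage full conformance runs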